Large, Fragmented, and Stratified: Venture Opportunities in WealthTech
This post is the first in a series on WealthTech written with Charge by me, Renee Li, a full-time MBA student at Columbia Business School. Prior to B-school, I was an Investment Associate at BlackRock. By building this investment thesis publicly, I want to create a resource for others like me who are trying to break into VC.

The wealth-management industry, a market worth $487 billion in 2019, is large and fragmented. UBS, the global leader, has a 3% market share. There were more than 270,000 personal financial advisors working in the US in 2018, according to the Bureau of Labor Statistics. Despite the abundance of wealth advisors, the vast majority of the market is underserved. The industry stratifies customers in a manner similar to airlines, and the result is that mass-market consumers, the ones who need advice the most, get little to no service. "Affluent" clients, with between $300,000 and $1 million in assets, get premium-economy treatment: they may talk to advisers by phone and choose from ready-made funds. "High-net-worth" clients, with up to $15 million, fly business class, picking stocks and getting in-person advice from advisers. Flying private are the "ultra-high-net-worth" individuals, who have access to alternative assets, currency hedges, dinners, and golf outings. While high-net-worth individuals typically pay no more than 1% of assets in fees each year, the mass affluent often pay over 2% for inferior service[1]. Cattle class gets no service at all.

The Business of Wealth Management and its Discontents

The majority of wealth-advising firms operate under the "brokerage" model, where licensed financial advisors are paid on a commission basis and are responsible for all aspects of serving their clients, including selling, onboarding, advising, and portfolio management. The brokerage model contrasts with the "banking" model, where financial advisors are paid on a salary basis and are part of a large wealth management company with segregated Relationship Managers, Investment Specialists, and Support Teams who serve clients collectively. While the brokerage model has obvious benefits (e.g. clearer incentives and revenue ownership), it presents many challenges for wealth advisors. It burdens them with non-core tasks that take up a lot of their time. In my conversations with wealth advisors, many complain about how much time they spend, and how many hurdles they have to jump through, to serve their clients. For example, when a client is identified as a "high risk" individual, advisors spend hours obtaining the relevant checks before they can keep serving that client. According to a survey conducted by Forwardlane, wealth advisors spend less than 50% of their time on servicing clients (see chart below).

Processes in the Wealth Management Business. Source: Oliver Wyman, Winning at All Costs — Cost Management As Key Success Driver

Advisors have traditionally been paid on commission, although the industry is transitioning to "fee-based" (asset-based) pricing. Fee-based revenue now contributes 69% of overall gross production, up from 49% in 2015. The industry is also challenged by declining margins: annual fees for new accounts (for households with $1 million to $1.5 million invested) averaged 1.01% in 2019, down from 1.07% in 2015[2]. Dwindling margins are another reason why the wealth management industry must adapt to increase its operational efficiency and client engagement.

Another trend is the proliferation of Registered Investment Advisors (RIAs). These are independent, fee-only firms regulated by the Investment Advisers Act of 1940 and required to act as fiduciaries. This contrasts with the traditional "wirehouse" full-service brokerage firms such as Morgan Stanley, Bank of America's Merrill Lynch, UBS, and Wells Fargo, which are incentivized to sell their own products and services. As consumers become savvier and recognize the conflicts of interest of these non-fiduciary investment advisors, they migrate to RIAs; the number of RIA clients has soared 85% over the past eight years to 43 million[3].

Source: McKinsey, The state of North American retail wealth management

Given the demographic profile of the clientele (high net worth, ~65 years old) as well as of the advisors themselves (~50 years old), wealth management is a highly relationship-driven business. Most clients are acquired through referrals from the families already served. Other channels include conferences, networking events, and centers of influence (accountants and attorneys). Advising is typically conducted through in-person conversations, dinners, or golf outings. However, this high-touch, relationship-driven business model is fundamentally unfit for today's digitally native millennials. Millennials are defined by DIY culture, prefer to compare features before making decisions, and have little patience for inefficient service. While some LeadGen startups have emerged to serve millennials, much more can be done in this space. Moreover, millennials view their banks as transactional rather than relational. On the other hand, financial advisors have traditionally preferred serving clients holistically (rather than serving clients who spread their assets across different advice providers). The average number of accounts per household served climbed from 2.7 in 2015 to 3.1 in 2019[4], showing a deepening of client relationships. Therefore, we believe there are opportunities for lighter-touch wealth-advising business models to serve the millennial segment.
https://medium.com/chargevc/large-fragmented-and-stratified-venture-opportunities-in-wealthtech-ef44cc6f0f07
[]
2020-12-04 14:56:15.199000+00:00
['Investing', 'Startup', 'Wealth', 'Venture Capital', 'Finance']
I Wrote About My Toxic Family, Then My Toxic Family Found Out
Then my dad found my Medium and a hell-storm rained down. I received pages upon pages of accusations and attacks on my character. False assumption after false assumption. Narcissistic rage carefully crafted into polished emails and text messages. I received thinly veiled threats, with promises that many members of my family shared his beliefs. He vilified anyone who took me at my word: either I had manipulated them into supporting me or they were feeding me appalling advice. Apparently, all of my articles were elaborate lies to gain fame, notoriety, pity, or money. I was compared to the Rachel Dolezals of the world. So I stopped writing anything meaningful. I kept up with the publication I owned and penned a few pieces about movies or films, but I avoided writing anything personal. I couldn't find it in myself to write for myself. I couldn't even convince myself to write for people who looked for articles like mine. I couldn't cope with more emails or letters than I was already receiving, and I knew writing would just add fuel to the fire. I was scared; on some level, I'm still scared. My family is a parasitic force, and the negativity is astounding. After a year of actively healing, I found myself sucked back in. Even though I kept away and stayed true to my therapists' instructions to stay grounded, my thoughts were consumed by my father's cruel words and the promise that other members of my family felt the same way. I spent hours doubting my truth, doubting events I 100% knew happened. And then I spent hours feeling guilty and ashamed of backtracking on the hard-earned progress I had made. I grieved, thoughts constantly swirling around my tired brain. How could someone who's supposed to love and care for me treat me like this…how could anyone treat anyone like this? I have loved my family with conviction and compassion, and this is my payoff? I regretted that I hadn't written more about my family; after all, I had written about milder events from my youth and young adulthood. If I was going to be attacked, I might as well be attacked for more than 15 of my 160 articles. I was bitter and sad. After he read my pain and triumphs alike, the only response worth sharing was one of selfish spite? I was lost. Medium and writing online were no longer a place of comfort, healing, and community. Just the thought of publishing something made me nauseous; I didn't even try. My therapy sessions were consumed with ways to find my footing, to find myself again.
https://medium.com/fearless-she-wrote/i-wrote-about-my-toxic-family-then-my-toxic-family-found-out-a73295b6fe8e
['Faith Ann']
2020-12-19 19:22:36.924000+00:00
['Family', 'Self', 'PTSD', 'Relationships', 'Storytelling']
A Better Way to Skill Up
The Problem with Education as a Service Today

Today's education-as-a-service offerings primarily focus on up-skilling. They try to teach you the in-demand skills of the moment so that you move in the direction of the gold arrow below: from unemployed and inexperienced hermit to knowledgeable and employable working fellow.

Education as a service tries to move you in this direction

The problem with moving in this direction is that it's really hard. It's not enough to learn the stuff; you must also:

Obtain a credential that signals to employers that you are knowledgeable, and it must be a credential that they have heard of and respect.

Demonstrate the ability (preferably through actual paid experience like contracting or an internship) to practically apply your knowledge towards solving business problems. This becomes especially important if your credential is not top tier.

Do you see the problem? Everything is about earning credibility, and there are currently only two primary ways to earn it: universities and companies. Since I write often about data science, let's use it as an example. An aspiring data scientist trying to break into the field can earn some street cred in one of two ways:

Attend a respectable university and obtain a degree in data science (preferably an advanced degree).

Convince a respectable company to pay you to work in a data-related role.

MOOCs, bootcamps, and other alternative offerings, on the other hand, are unable to earn you the necessary credibility. They may teach you the necessary skills (in my experience, they do a pretty decent job at this). But it's kind of like that old philosophy cliché: "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Even if I developed a deep understanding of machine learning and statistics through reading books, blogs, a MOOC or two, and a bootcamp, I would still probably get rejected from the job I want. Employers just don't put much stock in a certificate of completion. And it is a gamble from the employer's perspective: given that you have no data science work history and no one to vouch that you truly know your stuff, they can only take you at your word. And unfortunately, not every employer is willing to do that. People, including hiring managers, are generally risk averse and willing to pay a premium for the safe, "proven" candidate (hiring the wrong person is painful from both a productivity and a financial perspective). Yes, I know. If you are full of drive and initiative, you can place in Kaggle competitions, meet industry insiders at conferences and meetups, build up an awesome portfolio of data science projects on your own, and work pro bono for an early-stage startup (all while interviewing). But that is not for everyone. Bootcamps and their education-as-a-service competitors, while cheaper than a traditional college degree, still cost a fair bit in terms of time and money ($15,000 to $20,000 in tuition and months of your time, at a minimum). They also emphasize in their marketing how employable you will be upon graduation. But without connections or a prestigious enough certification (which even the best bootcamps lack), your completion of the program is just like a tree falling in a deserted forest with no employers around to see it or care about it.
https://towardsdatascience.com/a-better-way-to-skill-up-b2e5ee87dd0a
['Tony Yiu']
2019-08-29 20:26:25.910000+00:00
['Careers', 'Startup', 'Data Science', 'Technology', 'Education']
Getting Started with Core Data
Photo by Samuel Zeller on Unsplash

Overview

Core Data is one of the frameworks provided for iOS to manage the mapping of your objects to a persistent store. The first step in working with Core Data is to create a data model file. Here you define the structure of your application's objects, including their object types, properties, and relationships. You can add a Core Data model file to your Xcode project when you create the project, or you can add it to an existing project. So, let's get started!

Core Data Creation

If you are creating a new project, do it this way: in the dialog for creating a new project, do not forget to select the Use Core Data checkbox.

New Project Option Dialog

For an existing project, choose File > New > File and this dialog will be shown.

New File Option Dialog

After naming your Core Data model, it will appear like this in your project:

Core Data Model

Data Model

In the data model editor, there are some controls that help you create your own database. You can see them in the picture below.

JSdb.xcdatamodeld Panel

Add Entity creates something like a table in your database. Add Attribute adds attributes to your entities. Editor Style switches the UI that shows what your database looks like. If the grid style is selected, you can see every attribute type, such as String, Int, or Bool. For example, I will create JSdb. There are entities such as Jadwal & Display Jadwal.

Jadwal Entity

Entities Editor

How to Store Data

Set Up Your AppDelegate.swift

First, go to your AppDelegate.swift, then add variables and functions like the example below.

The Directory that the Application Uses to Store the Core Data

// MARK: - Core Data stack
lazy var applicationDocumentsDirectory: URL = {
    let urls = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    return urls[urls.count-1]
}()

The Managed Object Model for the Application

This property is mandatory. It is a fatal error for the application if it cannot find and load its model.

lazy var managedObjectModel: NSManagedObjectModel = {
    let modelURL = Bundle.main.url(forResource: "JSdb", withExtension: "momd")!
    return NSManagedObjectModel(contentsOf: modelURL)!
}()

The Persistent Store Coordinator for the Application

This implementation creates and returns a coordinator, having added the store for the application to it. This property is optional since there are legitimate error conditions that could cause the creation of the store to fail.

@available(iOS 10.0, *)
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentContainer(name: "JSdb")
    container.loadPersistentStores(completionHandler: { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    })
    return container
}()

lazy var persistentStoreCoordinator: NSPersistentStoreCoordinator? = {
    // Create the coordinator and store
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: managedObjectModel)
    let url = applicationDocumentsDirectory.appendingPathComponent("JSdb.sqlite")
    var failureReason = "There was an error creating or loading the application's saved data."
    let options = [NSMigratePersistentStoresAutomaticallyOption: NSNumber(value: true as Bool),
                   NSInferMappingModelAutomaticallyOption: NSNumber(value: true as Bool)]
    do {
        try coordinator.addPersistentStore(ofType: NSSQLiteStoreType, configurationName: nil, at: url, options: options)
    } catch {
        // Report any error we got.
        var dict = [String: AnyObject]()
        dict[NSLocalizedDescriptionKey] = "Failed to initialize the application's saved data" as AnyObject
        dict[NSLocalizedFailureReasonErrorKey] = failureReason as AnyObject
        dict[NSUnderlyingErrorKey] = error as NSError
        NSLog("\(dict)")
        abort()
    }
    return coordinator
}()

The Managed Object Context for the Application

This property is optional since there are legitimate error conditions that could cause the creation of the context to fail. It returns the managed object context for the application (which is already bound to the persistent store coordinator for the application).

lazy var managedObjectContext: NSManagedObjectContext? = {
    let coordinator = persistentStoreCoordinator
    var managedObjectContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    managedObjectContext.persistentStoreCoordinator = coordinator
    return managedObjectContext
}()

Set Up Your ViewController.swift

Next, we need to set up the view controller file. You will need to implement this file to use Core Data. For example, ViewController.swift:

var context: NSManagedObjectContext?
var appDelegate: AppDelegate?

override func viewDidLoad() {
    super.viewDidLoad()
    appDelegate = UIApplication.shared.delegate as? AppDelegate
    //…
}

Here is the function to set the value of context.

func getContext() {
    if #available(iOS 10.0, *) {
        self.context = appDelegate?.persistentContainer.viewContext
    } else {
        self.context = appDelegate?.managedObjectContext
    }
}

Now, you can call the getContext function every time you need to create, update, or delete something in your Core Data store.

Entity Creation

First, we need to initialize a model variable as a Jadwal entity.

self.getContext()
let model = NSEntityDescription.insertNewObject(forEntityName: "Jadwal", into: self.context!) as! Jadwal

To set the values of the model's attributes, you can simply do this:

model.id = Int16(1)
model.subuh = "04:30"
model.dzuhur = "11:49"
model.ashar = "15:30"
model.dhuha = "06:55"
model.imsak = "04:20"
model.maghrib = "17:59"
model.place_id = "aXjdsakkjd12"
model.isya = "19:00"
model.terbit = "05:00"
model.tanggal = "07-05-2019"

The model automatically has the attributes id, subuh, dzuhur, etc. from the attributes already created in the Core Data model. The next step is to store it in the database:

do {
    try self.context?.save()
    print("Model Saved : \(model)")
} catch let error as NSError {
    print("Could not save. \(error), \(error.userInfo)")
}

That is all for the basics of Core Data. Good luck and have fun with Core Data! :)
https://medium.com/gits-apps-insight/getting-started-with-core-data-18e840f670ac
['Tri Rejeki']
2019-05-15 07:05:35.705000+00:00
['Core Data', 'Xcode', 'iOS', 'Mobile App Development', 'Developer Stories']
Not All “Successful” Medium Articles Go Viral
Many writers on Medium tend to aim for viral success within Medium's platform. They focus on how to get curators to select their articles, how to get published in major publications, and how to optimize their article keywords. There is nothing wrong with this strategy, and I have even written about my experience with articles gaining massive Medium views. But for many writers, it can be discouraging to see your Medium article, An Intro to Stochastic Calculus, receive just 25 views while the article featured on Medium's home page, Five Unexpected Truths I Discovered From Bedazzling My Vajayjay with Swarovski Crystals, receives 25,000 views.

Sample Clickbait/Buzzfeed-like Article

However, many of these attention-grabbing, click-bait articles only thrive for a relatively short time, and most of their traffic is generated through the Medium platform itself. Even if the Medium curators or internal search algorithm "fails" your article, it is still possible to obtain a large number of views and have your content shared with a large audience. To illustrate this point, I'm going to show you the analytics behind two Medium articles I published: an article that essentially went viral and had a very brief but massive surge in views, Uber's Latest Scandal: Vomit Fraud, and an article that grew in momentum months after being published, How to Bypass Virtually Every News Paywall. The viral article achieved massive traffic over a few days, and I imagine this is the trajectory of many articles that are selected by curators on Medium. This makes sense because Medium's algorithm places an extremely high emphasis on the recency of articles. The Medium analytics for this article underscore this trajectory:
https://medium.com/blogging-guide/not-all-successful-medium-articles-go-viral-50adc25695d
['Casey Botticello']
2020-04-04 01:04:00.346000+00:00
['Viral Marketing', 'Google Search', 'Writing', 'Content Marketing', 'SEO']
Simplify Calculus for Machine Learning with SymPy
Photo by Kevin Ku on Unsplash

Simplify Calculus for Machine Learning with SymPy

A quick look at calculus for machine learning and how to add it to your code with SymPy

Machine learning requires some calculus. Many online machine learning courses don't cover the basics of calculus, assuming the user already has a foundation. If you are anything like me, you might need a bit of a refresher. Let's take a look at a few basic calculus concepts and how to write them in your code using SymPy. Most of the time, we need calculus to find the derivatives in optimization problems. This helps us decide whether to increase or decrease weights. Our end goal is to find the extreme point(s) that will be the local minimum or maximum point(s) of a function. Let's go through the process of finding an extreme point by taking the following steps:

Installing and learning the basics of SymPy.

Finding the slope of a linear function.

Discovering tangent and secant lines.

Using our slope and tangent line knowledge to find limits.

Understanding what the derivative of a function is.

Using the derivative to find the extreme point.

Deciding whether the extreme point is a local minimum or a maximum point.

Getting Started With SymPy

SymPy is a Python library that lets you use symbols to compute various mathematical equations. It includes functions for calculus, as well as many other functions for higher-level mathematics. Installing SymPy is simple; you can find full installation instructions here. If you are already using Anaconda, SymPy is included. With Anaconda you can make sure your SymPy is up to date with a simple:

conda update sympy

If you aren't using Anaconda, pip is a great way to install new Python libraries.

pip install sympy

SymPy depends on the mpmath library, so you'll need that installed too.

conda install mpmath
# or
pip install mpmath

With SymPy we can create variables like we would in a math equation. We need to set these variables as symbols so SymPy knows to treat them differently than regular Python variables. This is simple and accomplished using the symbols() function.

import sympy
x2, y = sympy.symbols('x2 y')

Now that we have SymPy installed, let's take a step back and look at the foundations of calculus.

Linear Equations and the Slope

As mentioned above, one of the main reasons we need calculus is to find the extreme point(s). To illustrate this, let's pretend every year you enter the Annual Potato Cannon Contest. Every year you lose to the terrible Danny MacDougal. This year you hire a coach to help you beat Danny. To beat Danny, the coach needs you to give him three things. 1. When Danny's potato is at its highest point. 2. How long it takes the potato to get to the highest point. 3. The slope of the potato at its highest point. Let's talk first about the slope. If Danny built a magic cannon where the potato ascended forever, finding the slope would be easy, but there wouldn't be a maximum height. This type of potato flight path would be a linear equation like y = 3x + 2 (in slope-intercept form). We can visualize this linear-function potato flight using NumPy and Matplotlib. If you don't have NumPy and Matplotlib installed, the process is like the SymPy installation above. See here and here for more details.
import matplotlib.pyplot as plt
import numpy as np

# create 100 values for x ranging from 0 to 6
x = np.linspace(0,6,100)

# our linear function
y = 3*x + 2

# add some aesthetics to our plot
plt.grid(color='b', linestyle='--', linewidth=.5)
plt.plot(x,y, label="potato flight")
plt.xlim(0, 6)
plt.ylim(0,20)
plt.legend(loc='best')
plt.xlabel("Seconds")
plt.ylabel("Feet(x10)")

# show the plot we created
plt.show()

We want to examine the slope, so we will add marks at two random points: (1,5) and (4,14).

plt.plot(1, 5, 'x', color='red')
plt.plot(4, 14, 'x', color='red')

potato flying in a linear function

When we have a linear function our slope is constant, and we can calculate it by looking at any two points and calculating the change in y divided by the change in x. Let's look at the two points we marked earlier: slope = (14 - 5) / (4 - 1) = 9 / 3 = 3.

equation to find the slope

In slope-intercept form (y = mx + b), m is always the slope. This checks out against our previous function of y = 3x + 2.

Potatoes Must Come Down — Nonlinear Functions

The slope of a linear function is easy, but the potato must come down. We need a way to calculate a slope that changes with each point. Let's start by visualizing a more realistic potato flight path. Our coach is amazing, and he knows the function that represents the flight of the potato is f(x) = -(x²) + 4x. Once again let's visualize this potato flight path with Matplotlib.

x = np.linspace(0,5,100)
y = -(x**2) + 4*x

plt.xlim(0, 4.5)
plt.ylim(0,4.5)
plt.xlabel("Seconds")
plt.ylabel("Height in Feet(x10)")
plt.grid(color='b', linestyle='--', linewidth=.5)
plt.plot(x, y, label="potato")
plt.legend(loc='best')
plt.show()

non-linear flight of a potato

From the visualization of the function, we see that at around 2 seconds the potato is at its maximum height of around 40 feet. The graph is helpful, but we still need the slope at the maximum height. Plus we need some hard evidence to bring back to the coach. Let's move forward proving this height and time with calculus.

Secant Lines

A secant of a curve is a line that intersects the curve at a minimum of two distinct points. When we have nonlinear functions we can still find the slope between two points, or a secant line. Since (2,4) looks like the top of our potato path, let's look at two points on our new nonlinear potato path: (1,3) and (2,4).

# adding this code to our above plot
x2 = np.linspace(1,2,100)
y2 = x2 + 2
plt.plot(x2,y2, color='green')

secant line x + 2

We can see that the slope from (1,3) to (2,4) is 1. Let's take this a step further and see what happens when we try to find the slope of just the point (2,4). To do so, we need a line that can represent going through one point.

Tangent Lines

Let's see what happens to the slope of our lines the closer we get to (2,4). To do so, we'll draw a few more secant lines. We'll keep the endpoint at (2,4), but one line will start at (1.5, 3.75) and the other one at (1.95, 3.9975). This gives us two more secant lines, y3 = .5x + 3 and y4 = .05x + 3.9.

two more secant lines

When we finally reach (2,4) as our starting point, the line that forms is y = 0x + 4. This is the tangent line for (2,4). The tangent line is calculated by solving the limit and plugging it into the slope-intercept linear equation. More on the limit in a little bit. Every point in a function has a tangent line, which is how we can calculate the slope for every point in a function.

The tangent line for the point (2,4)

Remember, our main goal is to find an extreme point, the maximum height of the potato.
Extreme points will be where the slope of the tangent line is zero, as this signifies where a function changes direction. For instance, let's look at the slope of the tangent line going through (1,3) and the one going through (3,3).

3 tangent lines

Let's look at what happens to the slope of the tangent line for these three points:

point (1,3), the tangent line is y = 2x + 1, slope is 2

point (2,4), the tangent line is y = 0x + 4, slope is 0

point (3,3), the tangent line is y = -2x + 9, slope is -2

After 0, the slope changes direction. Perfect, we need to find the point or points that have a tangent line with a slope of zero. These points will tell us the maximum or minimum points, as the direction of the tangent line slopes will always change after 0. Great! So we need to find a point on a function where the slope is zero. If only there was a way to create a function that would give us the slope of any point in our original function. There is, and that is what the derivative is! Before we jump into derivatives, let's look at limits.

Limits

Limits allow us to find the slope of a function as it approaches a certain x value. If we are solving the limit as a secant line, it's easy. We plug in our two x values and two y values and we can solve the slope equation. This sort of limit is defined. But we want to find an undefined limit, because we want the slope of the tangent line at the point (2,4). As we approach (2,4) we need the slope. We can't get the slope of the secant line from (2,4) to (2,4) because if we plug those numbers into our slope equation we get 0/0. This won't work because, as a math teacher once said: "There are two things in life you can't do, nail jello to a wall and divide by zero". Since we can't divide by zero, we need to find the undefined limit. Limits are solved by plugging our function into the slope equation and factoring it out. We can use our slope formula from before and substitute our point (2,4) for the x1/y1 values and then substitute f(x) and x for the x2/y2 values. Our limit value is the x value we are approaching, which in this case is 2. Our new slope equation for the limit at 2 of the function -(x²) + 4x is:

slope = (-(x²) + 4x - 4) / (x - 2)

Our goal is to get rid of the x in the denominator, so let's expand the -(x²) and cancel out the x in the denominator:

(-(x²) + 4x - 4) / (x - 2) = -(x² - 4x + 4) / (x - 2) = -(x - 2)² / (x - 2) = -(x - 2)

And now that we have removed the x from the denominator, we can plug 2 back in for x: -(2 - 2) = 0. Now that we know how a limit is solved, let's fire up SymPy so we can solve the limit in our code. SymPy has a function named limit() which has 3 parameters:

the function we're finding the limit for

the input variable

the number the input variable is approaching

Since our limit is undefined, we need to substitute our x and y values as we did above.

import sympy
x2, y = sympy.symbols('x2 y')

# store our substituted function as a y variable
y = (-(x2**2) + 4 * x2 - 4) / (x2 - 2)
limit = sympy.limit(y, x2, 2)
# output: 0

Our limit is 0!

Derivatives

The derivative is a function that will give us the slope of the tangent for any point in our function. Now that we understand what the limit and tangent lines are, we can head toward our end goal of using derivatives to find the extreme point(s) in the function. The process of finding a function's derivative is differentiation. Solving the derivative uses some algebra and our slope formula from above. Because we aren't solving for a specific point, we won't substitute any values. For this example, let's also replace x1 and x2 with the more common form, which is to use x and x + h.
This will give us the following formula to solve for the derivative, with f′(x) meaning the derivative of the function of x:

f′(x) = lim(h→0) [f(x + h) - f(x)] / h

If we plug our function into this we get:

f′(x) = lim(h→0) [-(x + h)² + 4(x + h) - (-(x²) + 4x)] / h

We can then solve this similarly to how we solved for the limit: the numerator simplifies to -2xh - h² + 4h, dividing by h leaves -2x - h + 4, and letting h go to 0 gives -2x + 4. We can also use the power rule to easily find our derivative, using the following equation:

d/dx(xⁿ) = n·xⁿ⁻¹

Using that rule we can see the derivative of our function -(x²) + 4x is -2x + 4. Instead of going through the steps to find the derivative by hand, let's find the derivative using SymPy. SymPy gives us a function called diff() which will perform the differentiation process and return the derivative. The diff function takes two parameters:

the function we're finding the derivative for

the input variable

Let's give this function a try using our original function, -(x²) + 4x.

# set x as the variable
x = sympy.symbols('x')

# help make the output easier to read
sympy.init_printing(use_unicode=True)

# enter our arguments to the diff function
d = sympy.diff(-(x**2) + 4*x, x)

# print our derivative
print(d)
# output: -2*x + 4

Our extreme points in a function will be where our derivative is equal to zero. This is because when the derivative is equal to 0, the direction of the function has changed, as we explored above. To find the x value, we set our derivative equal to 0 and solve for x: -2x + 4 = 0. This is solved with SymPy by using the function solveset(). Solveset takes two parameters:

the Eq function, which itself takes two parameters: the equation and the value the equation needs to equal

the variable we are trying to solve for

Solveset will return a set of all numbers that solve the equation. Using solveset to find the x value where the derivative is equal to 0 will look like this:

answer = sympy.solveset(sympy.Eq(d, 0), x)
print(answer)
# output: {2}

Perfect! Our x value is 2, and if we plug that into the original function we get 4 as our y value. Now we are certain that at 2 seconds MacDougal's potato is exactly 40 feet in the air and the slope at that point is 0! We can take that to the coach and we are sure to win the next potato cannon contest!

Is our Extreme Point a Min or a Max?

We know our potato's extreme point must be a max point, because these potato cannons aren't designed to shoot the potato down. But if we didn't have a graph or know the direction of the function, how would we check whether the extreme point(s) is a local minimum or a local maximum? We already know that setting the x value of the derivative to 2 results in a slope of 0. What happens if we plug two more numbers into our derivative function, one larger than 2 and the other less than 2? We will try this with 1 and 3.

test1 = -2*1 + 4
test2 = -2*3 + 4
print(test1)
# output: 2
print(test2)
# output: -2

We see that before our extreme point the slope is positive, and after the extreme point the slope is negative. A change in slope from positive to negative tells us the extreme point is a maximum value. If the slope had gone from negative to positive, we would know the extreme point is a minimum point. If there are multiple extreme points, we would want to choose a value between each of the points. For instance, if we had extreme points 1 and 5, we would repeat this process three times:

choose a random number less than 1

choose a random number greater than 1 but less than 5

choose a random number greater than 5

Most likely, nonlinear functions won't have only one extreme point. In this case, all of the steps are the same, but when we solve for the derivative being equal to zero we get multiple solutions.
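To make that last point concrete, here is a small sketch of the multiple-solution case, using a made-up cubic (an assumption for illustration, not the potato function), showing how the same SymPy calls and sign check classify each extreme point:

import sympy

x = sympy.symbols('x')

# a made-up function with two extreme points (not the potato function)
f = x**3 - 6*x**2 + 9*x
d = sympy.diff(f, x)
print(d)
# output: 3*x**2 - 12*x + 9

# solving d = 0 now returns two solutions instead of one
extremes = sympy.solveset(sympy.Eq(d, 0), x)
print(extremes)
# output: {1, 3}

# check the slope on either side of (and between) the extreme points
print(d.subs(x, 0))   # output: 9  -> positive before x = 1
print(d.subs(x, 2))   # output: -3 -> negative between 1 and 3, so x = 1 is a local max
print(d.subs(x, 4))   # output: 9  -> positive after x = 3, so x = 3 is a local min

The slope flips from positive to negative around x = 1 and from negative to positive around x = 3, which is exactly the check described above, just applied once per extreme point.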
Conclusion

SymPy has a vast library and is a great way to find the derivative and the local extreme point(s) of a function. SymPy is easy to use and simple to read, adding simplicity and readability to any machine learning project that requires calculus. We've only touched on a few of the many functions available in this library. I encourage you to explore it further and see what else can be used to incorporate mathematics into your data science projects.
https://towardsdatascience.com/simplify-calculus-for-machine-learning-with-sympy-8a84e57b30bb
['Jeremiah Lutes']
2020-12-10 23:25:07.518000+00:00
['Python', 'Data Science', 'Calculus', 'Code', 'Machine Learning']
The Time I Came In Last
Pexel Image JD, who is fifty, is our special needs volunteer. He arrives faithfully at the charity thrift store every day, dropped off by a van that ushers people like him from group homes to volunteer jobs or day care facilities. It’s his responsibility to greet donors and sign donation receipts. He’s one of the most gregarious people I’ve ever met. He knows so many people that I tell him he should run for mayor. Names and faces don’t slip through the cracks of his memory like they do mine. He recognizes everyone who has ever crossed his path. “I know this man,” JD says, introducing me to a fellow church member or volunteer with his softball league. “He’s my friend.” “Is there anyone you don’t know?” I always reply, which never fails to elicit a broad grin. JD participates in every activity available to him, so it wasn’t surprising that he signed up for our 5K Rise and Run, a fundraiser put on by the charity. Since his van doesn’t transport people on Saturday mornings at 6 AM, he asked me if I would give him a ride to the race. I was certain somebody at the event would be ready and willing to escort him around the course once we got there. Knowing JD, he probably had a posse of friends scheduled to meet him at the gate. What I discovered instead was a field of lean, fit runners in spandex stretching, jogging in place and looking very serious. These people meant business. No one came over and said, “JD, you’re running with us.” JD and I went from group to group, hoping to spot a familiar face. I had looked forward to running my first 5K and had practiced to increase my endurance. Slim and fit, I thought I had a good chance of doing well in my age group. Driving halfway across town to pick up JD at 6 AM had not been in the plan, but I felt good about myself for making the sacrifice. I hadn’t planned on throwing away the whole race to babysit him around the course, though. But nobody volunteered to walk with him. Casting one last futile look at the crowd of runners, I made my way to the registration table, JD shuffling beside me. “We get a free pancake breakfast when we finish the race,” JD said. He had already mentioned this aspect of the race several times and we both viewed it as a nice way to wrap up the morning. The restaurant providing free pancakes was setting up shop under an awning that covered several large, steel griddles and a delicious aroma had already begun to waft over the field. We pinned our numbers to our shirts and barely made it in time to join the throng of runners poised and ready for takeoff. When the whistle blew, several hundred people surged forward. The sun had risen, streaking the cool September sky with red, and a high, bright moon was still visible, sailing serenely above a distant line of trees. As the crowd of runners pounded around the first bend, it became apparent that no one was going to come forward to walk with JD. Energized and ready to run, I sped up a little, but slowed down when I realized he wasn’t keeping up. The other runners thundered further along the trail, kicking up a small swirl of dust. JD strutted proudly, arms swinging at his sides, puffing a little as we lurched up a small incline. I wondered if I might be able to persuade him to stop at the one-mile mark where a few families with kids called it quits and headed for a nearby park. Maybe I could park him with the families and manage to catch up with some of the others, but JD, determined to continue, shambled along with his awkward, jerky gait. 
The front runners had long since disappeared. We finally fell way behind everyone except two women who plodded indifferently along, absorbed in a conversation about yard sales. Soon even keeping up with them was difficult. I tried urging JD to go a little faster, but when we stopped for a water break the two yard sale women trudged beyond us out of sight. Luckily, volunteers were posted along the route to keep us from getting lost. The path stretched emptily ahead except for the occasional volunteer. Pixabay image “I can’t wait to get my pancakes,” JD said. We stopped for three more water breaks, two rest stops, and twice because JD’s shoelaces had come untied and I needed to tie them. Even the volunteers were beginning to drift from their stations. One hour and 47 minutes later JD and I finally crossed the finish line. Somebody called out, “Bebe, what did you do, go to sleep along the way? We started to send out a search party.” “We made it!” JD cried excitedly. “Let’s get pancakes!” But they had run out of pancakes. “Sorry,” the caterer said as he folded his tent and packed up his truck. “You should have gotten here earlier.” JD took it well, especially when the Executive Director of our charity hung a medallion around his neck and took his picture. “I won!” He exclaimed proudly, fingering the medallion. When I got home and told my husband about the race, he insisted on making me pancakes.
https://bknicholson.medium.com/the-time-i-came-in-last-54338c5b857e
['Bebe Nicholson']
2018-12-03 13:13:13.731000+00:00
['Volunteering', 'Life Lessons', 'Personal Development', 'Running', 'Storytelling']
Three Useful Computer Keyboard Tricks You Can Use While Typing a Story
Three Useful Computer Keyboard Tricks You Can Use While Typing a Story What happens when you touch this one? Photo by Aryan Dhiman on Unsplash Maybe you know this already, but it was news to me. Hitting various keys while typing a Medium story will magically produce useful options you can use to spice up your story. Who knew? Keep in mind we’re not just talking hit the “a” key and an “a” will appear. Even I know that, and I’m a techno-moron. @ Typing the @ symbol followed by typing a name allows you to insert Medium member’s names. Most of you know this. Sarah Paris or Aimée Gramblin are two people who won’t mind if I “tag” them for illustrative purposes. Right, you two? BTW — don’t abuse this privilege. Tagging a million people just so they’ll read your stuff isn’t recommended unless of course you’re the next Shakespeare and you’re just getting started. But if perchance thou art the next immortal Bard of Avon, taggeth not one’s fellow scriveners. They shall likely knoweth of your acclaim in a freakin’ trice. : and a letter : followed immediately by a letter brings up a vast emoji menu. For example, I inserted this one 😫, the tired-face emoji, by hitting “:” then a lowercase “f.” Many other options were presented to me by doing that simple action. I chose the tired-face from the drop-down menu. Cool, and easy. 😁 This one came my way similarly. I hit “:” then followed it immediately with a lowercase “g.” Again a drop-down menu presented itself and I chose “grin” over “grimace” and a host of others I could have selected. Please note that I work on a PC. If you’re a Mac user you may have to play with this one a bit. Actually here’s the Mac answer courtesy of a writer friend. Control + Command + Space bar. Holy level up, Batman! 🤩 The mysterious little “chain” thingy This one’s not a computer keyboard trick but a Medium formatting trick. Highlight some text in a story you’re writing and a short menu of options will appear. From left to right you’ll be given the option to bold or italicize or “chain” your highlighted text. If you select the “chain” thingy (I don’t know what else to call it and feel like an idiot doing so, but I’m still dazed from my discoveries, so please forgive me.) your selected text will be underlined. Then, once your story is published, readers can click on the underlined text and be automatically taken to a vast array of web links about the underlined text. (Maybe “link” is the correct term, but I still like “chain thingy” so I’m sticking with it.) You can also edit stories you’ve published to include the “chain” thingy by going to those stories, using the edit mode, highlighting the desired text and selecting the “chain” thingy icon.
https://medium.com/illumination-curated/what-happens-when-you-touch-this-one-f5086f18f059
['Michael Burg']
2020-12-30 00:48:28.538000+00:00
['Humor', 'Writing', 'Médium', 'Funny', 'Writing Tips']
The Curious Rationalist Magic of Essential Oils
My wife Buffy loved essential oils. They were a big part of her life before she passed on, and I never really understood how big until she was gone. I think that was my fault. But in parsing her things after she passed, I learned something about my wife I didn’t know, and my skepticism of the efficacy of essential oils changed as a result. Soccer I played adult soccer for many years. It’s a violent sport, with a lot of collisions and physical play, especially in the league I chose. As soccer players grow old, their coordination goes down and their weight goes up, but their instincts are still to try the things they could do when they were eighteen. Additionally, adult contact sports lend themselves to the players venting their “adulting” frustrations, personally or professionally, on their opponents. Which leads to injuries. I got my first sports concussion and blew my ACL out in the same game, versus a team of Uruguayans. Uruguayans are tough. Even prior to that, the injuries were weekly, and persistent. I tell people when they turn forty that their warranties on their bodies expire. I noticed right around forty that a week simply wasn’t enough time to heal the dings, strains, and bruises from the prior week’s game. And they would add up. Buffy made me a little glass container with an essential oil blend in it for my pain, with a rolling applicator. She called a “roller.” I didn’t use it. I told her very politely that that wasn’t the sort of thing I was going to use. In my mind at the time, I thought they were just “perfume placebos,” although I never told her that directly. I told her I was skeptical that they did anything, but they didn’t seem to be hurting anything, so I told her to go for it. She went for it. Hobbies By the time we were forty, she’d amassed quite an apothecary station, and was always mixing things for herself and for our kids. Our rule was anything that went on a kid was voluntary — if they didn’t want it they didn’t have to have it. Sometimes they wanted it. Rollers and dabbers and other thingies, I didn’t pay too much attention. I had my hobbies then too, soccer, music, nerdier stuff. I wasn’t writing again yet. When she was diagnosed with stage four colon cancer I bailed on almost all my hobbies, to take care of her and the family, but she kept doing the essential oil thing. For her it was a creative outlet. As best as I understand it, the essential oil hobbyist world has certain levels of involvement. Some people just have an oil or two they like, or that they read does something positive. Some people buy blends of oils. Some people mix their own blends of oils based on recipes. And some people make their own recipes, with one part research, one part experimentation, and one part intuition. Buffy was this last level, and she was meticulous. I didn’t realize to what extent until after she died. As I was cleaning the house, organizing, boxing, and distributing her belongings, I discovered over a hundred vials, each labeled. Oils. Blends. Salts. And several of her closest friends, whom I didn’t know were into the same hobby, asked me for her recipes. Her notes. Eventually I found them. I don’t know whether these recipes are copied from another source, or they were her creations. I suspect many are the latter. I do know that some of her friends coveted them. I discovered from talking to her social circle, she would make special recipes for her essential oil pals to help them with their troubles at the time, like the one she made for me. 
Rollers, bath salts, and other such creations. They were an expression of her love, and part of her outreach. Did they “work?” These creations certainly enriched their lives. Some part of me worries that my skepticism kept me from sharing this with her, or prevented her from sharing this with me. It feels like an opportunity lost. Placebos Much has been said in the pop media about the power of positive thinking. Much has been said about the power of prayer. Mind over matter. Deep mental connections between our attitudes and our physiology. There are raging arguments about the efficacy of such things, and I think the discussions around essential oils land in the same space. All of these things in my opinion are connected by belief, and by hope. But what drives me crazy in the discussion, is the medical community already knows the answer to the efficacy of hope. They have studies. It’s called the placebo effect, and everyone’s heard of it, they just don’t think about it in the way that they should. The highly vague and fictionalized version goes like this. In the early days of medical research, scientists would give a trial drug, see a benefit, and presume it worked. Then let’s say one day they give the wrong drug, or a bum set of pills that has no medicine in it, and they still see a benefit. The benefit, the story goes, came from the patient’s belief that there was a drug, and the hope that the drug would work. Belief and hope. And now, all the best drug trials are placebo trials, where permissible. We give one group the real drug, and the other group a fake, and the fake becomes the benchmark against which we measure how good the real drug actually is. The actual measure of drug efficacy isn’t how good the results are, it’s how much better they are than belief and hope alone. How much of an effect is this? Quite a bit, actually, and in the United States it’s growing over time, for antidepressants, antipsychotics, and now even pain killers. In 1996, approved pain killers relieved pain 27% more than placebos, but in 2013, the gap shrunk to 9%. Not because the drugs got worse, but because somehow the placebos are getting better. Are we more hopeful the drugs will work? Do we have more faith in them? I wrote before about our cancer journey, and how the project in part revolved around giving Buffy hope. And only after she passed on did the parallels between that and essential oils come to me. Essential oils may have some medical benefit and they may not, but even if they don’t, they’re still the world’s most perfect placebo.
https://medium.com/handwaving-freakoutery/the-curious-rationalist-magic-of-essential-oils-8b78f35595c5
['Bj Campbell']
2019-08-16 16:13:21.007000+00:00
['Essential Oils', 'Medicine', 'Random', 'Science', 'Alternative Medicine']
Design In Sweden
In this podcast episode we talk with Johan Berndtsson about design and business in Sweden. Johan invites listeners to submit suggestions for speakers for his next "From Business To Buttons" conference and also invites people to come do design work in Sweden.

Throughout the podcast we refer to videos, give web addresses, and so on. Here is the list of links and recommendations from Johan:

Check out Europe's greatest Business, Service, and UX-design conference, From Business to Buttons, at https://frombusinesstobuttons.com/, and the videos from past conferences at https://frombusinesstobuttons.com/archive. All of them are excellent, but be sure to watch: Jared Spool, Mike Monteiro (both talks), Kim Goodwin (both talks), Eric Meyer, Golden Krishna, Patricia Moore, and of course Susan Weinschenk.

Also, if you want to learn more about inUse, check us out at http://www.inuseexperience.com, and e-mail [email protected] if you have questions or if you're interested in joining. =)

Further reading:

The story behind the conference: http://www.inuseexperience.com/blog/story-behind-business-buttons/

Thoughts behind UX and Service Design moving out into the physical world: http://www.inuseexperience.com/blog/experiences-services-and-space/

A template for the Impact Map, perhaps our best tool to connect business goals to user behavior and design: http://www.inuseexperience.com/blog/template-impact-maps-here/

The history behind the Impact Map (http://www.inuseexperience.com/blog/evolution-impact-mapping/) and how it has evolved over the years (http://www.inuseexperience.com/blog/evolution-impact-mapping/).

And… The invitation for designers to come to Sweden: http://www.inuseexperience.com/blog/dear-us-designers-welcome-sweden/

Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains. You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.
https://medium.com/theteamw/design-in-sweden-b7d29780e808
['The Team W']
2018-06-04 16:38:18.510000+00:00
['Design', 'In Use', 'Sweden']
The Weekly Authority #51
Top 5 Content Marketing Mistakes & How to Avoid Them Content marketing can offer a powerful way to connect with an audience. It can also empower brands to build authority and turn more readers into new clients. So how do you do it for your brand? How do you create “interesting” content that people will want to read (and engage with)? And how do you start developing content that will develop authority, a following and (ideally) more leads for your business? This week, I’ll start answering these questions by pointing out some of the most common content marketing mistakes brands make. Knowing how to avoid these mistakes (or fix them) can help you position your content for more traffic, engagement and conversions. Common Content Marketing Mistakes 1. Sloppily written content — Typos, poor grammar, sentence fragments and other mistakes can kill your content. Sloppy content can be a turnoff to an audience (because it’s a red flag that a source is NOT an authority on a topic). It can also free fall in search (because search engines don’t usually rank sloppy content well). Fix: Make sure to edit your content at least once before publishing it. Ideally, perform this edit with a “fresh” set of eyes so any and all mistakes are caught and fixed before the content goes live. 2. Regurgitated content — This refers to content that just repeats information that’s already widely available online. If content offers no new insights, it’s simply not valuable to an audience. Fix: Make sure that your content provides value, educates your audience and/or shares something new. If you’re finding that you can’t share anything new or insightful about a given topic, choose another topic that you can say something new about. 3. Poorly formatted content — Blocks of text on a page, no subheadings, no bullets, etc. can make a piece of content visually unappealing. In fact, without nice formatting that draws a reader in, content (even really interesting and well written content) is far more likely to get overlooked. Fix: Keep your paragraphs short (like 2 to 3 sentences). Make sure there’s a new heading (or subheading) after every few paragraphs. Put lists in bullets. These formatting changes can make good content great! 4. Dated content — Content that focuses on out-of-date topics provides little to no value to an audience. And that means that the content will fall flat in search and see minimal (if any) engagement. Fix: Do some research before you dive into writing. While it’s great to focus your content on current topics, evergreen topics can be even better (because they can give your content a longer shelf life). 5. Content that’s not optimized — Content that doesn’t target a relevant keyword (or keyword phrase) won’t see good rankings. And that can mean that no one finds or sees the piece. Fix: In the research phase (before the writing starts), choose the best keyword phrase to target in a piece of content. Then, strategically use that phrase in the piece (and the associated meta data) to properly optimize it. And don’t forget about the images! While images can be visually appealing and engaging, they can also provide SEO benefits. In other words, images can satisfy and appeal to both readers and search engines. Common Content Marketing Mistakes: A Final Word Every now and again, mistakes can trip up even the best laid plans. We are human after all. So, try to keep a positive perspective about isolated mistakes. Consider them learning opportunities to refine your content and/or content marketing processes. 
What have your experiences with content marketing been? Have you made — or successfully avoided — any mistakes? Tell me about your content marketing experiences, challenges and successes on Facebook and LinkedIn. And don’t hesitate to get a hold of me on social media to ask any digital marketing question or just to say ‘hi.’ I look forward to hearing from you!
https://medium.com/digitalauthority/the-weekly-authority-51-73c06e163635
['Digital Authority Co']
2017-03-24 12:02:01.237000+00:00
['Content Marketing', 'Digital Marketing', 'Marketing', 'Content']
Which Should You Use: Asynchronous Programming or Multi-Threading?
Python Example

Let's look at how the three examples above (single-threaded synchronous, single-threaded asynchronous, and multi-threaded synchronous) would work in Python. Let's look at a few different ways to get stock data from the Alpha Vantage API, using the Python wrapper:

pip install alpha_vantage

Synchronous

We want to get the current price of four tickers, 'AAPL', 'GOOG', 'TSLA', 'MSFT', and print them out only when we have all four. The simplest way to do this is with a for loop. This is the most brute-force way. Once we get a value from our API call (done in `ts.get_quote_endpoint(symbol)`), we print it out and then start the next symbol. But after learning about async and multi-threading, we know we can start another API call while we wait for a value to be returned.

Asynchronous

In Python, we have the keywords await and async that give us the new power of asynchronous programming. These are new as of Python 3.5, so you'll need to update if you're still on Python 2. Python 2 is deprecated anyway, so please update. It may be a little confusing as to what is happening here, so let's break it down. The loop is where the processor will keep switching between waiting tasks and doing other tasks. This is to keep checking whether a task (an API call, in our case) is done. The tasks variable is a list of method calls. We put those tasks in a gathered list for asyncio, called group1, and then run them in loop.run_until_complete. This is much faster than our original synchronous version since we can make multiple API calls without waiting for each one to finish. NOTE: Asyncio is odd in Python notebooks; see here for more information.

Multi-threading

I have already written a little more intensely about multithreading, so if you'd like to learn more and see some examples in Python, check out this link!

A quick note about Python…

For beginners, the article is over! But for more advanced users there is a bit of an asterisk on Python. Python actually is a little different than other languages when it comes to multithreading. It technically runs threads concurrently and not in parallel because of the Global Interpreter Lock (GIL). This makes it great for I/O work too, and not as good for tasks that need heavy computation. You can dive a little deeper into these concepts with this post.
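Since the original snippets for the three approaches are not reproduced above, here is a minimal, self-contained sketch of all three patterns. Two assumptions to note: a fake get_quote() that simply sleeps for a second stands in for the article's ts.get_quote_endpoint(symbol) call (so it runs without an Alpha Vantage API key), and asyncio.run() is used in place of the loop.run_until_complete(group1) pattern the article describes, which it is equivalent to for this purpose.

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

SYMBOLS = ['AAPL', 'GOOG', 'TSLA', 'MSFT']

# stand-in for the slow, I/O-bound API call (ts.get_quote_endpoint in the article)
def get_quote(symbol):
    time.sleep(1)
    return f'{symbol}: 123.45'

# 1. Single-threaded synchronous: roughly 4 seconds, one call after another
def run_sync():
    return [get_quote(s) for s in SYMBOLS]

# 2. Single-threaded asynchronous: roughly 1 second, the waits overlap
async def get_quote_async(symbol):
    await asyncio.sleep(1)  # an awaitable stand-in for the API call
    return f'{symbol}: 123.45'

def run_async():
    async def main():
        tasks = [get_quote_async(s) for s in SYMBOLS]
        group1 = asyncio.gather(*tasks)  # gather the tasks, as described above
        return await group1
    return asyncio.run(main())

# 3. Multi-threaded synchronous: roughly 1 second, one thread per blocking call
def run_threaded():
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(get_quote, SYMBOLS))

if __name__ == '__main__':
    for runner in (run_sync, run_async, run_threaded):
        start = time.time()
        print(runner(), f'({time.time() - start:.1f}s)')

On a typical machine the synchronous version takes about four seconds while the other two take about one, which is the article's argument in miniature: for I/O-bound work like API calls, either async or threads beats waiting on each call in turn.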
https://medium.com/better-programming/which-should-you-use-asynchronous-programming-or-multi-threading-7435ec9adc8e
['Patrick Collins']
2020-05-04 23:07:16.500000+00:00
['Python', 'Programming', 'Data Science', 'Concurrency', 'Asynchronous']
Why a Trauma Therapist Recommends Chessy Prout’s Story
Why a Trauma Therapist Recommends Chessy Prout’s Story A High School survivor teaches how to heal Survivor, author, and advocate Chessy Prout with her book, I Have the Right To: A High School Survivor’s Story of Sexual Assault, Justice and Hope. Photo: The Japan Times | Kyodo As a psychotherapist who works with trauma survivors, I was deeply moved by Chessy Prout’s 2018 memoir, I Have the Right To: A High School Survivor’s Story of Sexual Assault, Justice and Hope. Written with investigative reporter Jenn Abelson, I Have the Right To chronicles Chessy’s journey from being sexually assaulted as a 15-year-old freshman at St. Paul’s boarding school in Concord, New Hampshire, through her recovery and present-day advocacy. Throughout the book, Chessy’s relentless honesty provides the rest of us with a gift: intimate insight into the world of trauma and the arc of trauma recovery. Because her recovery is ultimately so successful, her narrative is a rich source for clients to learn about many elements of trauma recovery, as well as a source of inspiration for clients and therapists, alike. Chessy’s relentless honesty provides the rest of us with a gift: intimate insight into the world of trauma and the arc of trauma recovery. Here I will summarize just a few of the features of trauma recovery that Chessy’s honesty allows us to witness, including: secure attachment, counseling and psychotherapy, supports, shame, dissociation, emotional regulation, and post-traumatic growth. Secure Attachment A critical element of healthy human development is a safe and reliable relationship between care-giver and child. This results in what is called secure attachment. Secure attachment provides two things. First, as the name implies, it gives the child a sense of safety in the world by the parent acting as a refuge to which the child can always return. In this way, the parent becomes a trusted “base of operations” from which to explore the world. Second, the parent models healthy emotional regulation, thereby teaching the child to develop an independent capacity to regulate their emotions. Secure childhood attachment is a key factor in how resilient we become later in life. It is clear that one of Chessy’s many assets in her journey is secure attachment with her parents, Susan and Alex Prout. Two days after her assault, Chessy had not yet told her parents. As she sat alone on the floor of a dorm room at midnight debating whether to call her mother and tell her, the thought that gets Chessy to actually make the call is “Mom always made things better.” Months later, just before entering the courtroom on the first day of her assailant’s trial, Alex says to Chessy, “Listen, Chessy, anytime you need me, I am right here. I am sitting ten feet away from you. You can keep your eyes on me.” Chessy then writes “Dad was my hero.” Such secure attachment at home later enables Chessy to form additional supportive attachments with others along the way, including therapists. Counseling and Psychotherapy A key feature of Chessy’s successful recovery was that she and her family had a pre-existing relationship with the school counselor. As a result, Susan knew who to call after Chessy told her about the rape, and Chessy knew who to go to for support. Her counselor provided a crucial non-judgmental oasis within an institution that was about to become abusive, itself. Chessy also gained specific tools for self-soothing and psycho-education about trauma. 
After leaving school for the summer, Chessy then engaged with a psychotherapist near her home on a longer-term basis. This engagement eventually provided a forum for the family to discuss difficult on-going choices, such as the gut-wrenching decision of whether Chessy should return to the school in the fall. By advocating for Chessy’s autonomy in this choice, this therapist planted seeds for Chessy’s on-going sense of empowerment. Supports Time and time again, Chessy emphasizes how the on-going support of her family saved her life during her recovery. The first family member she told about her assault was her older sister, Lucy, who was graduating from St. Paul’s that same week. After Chessy told her, Lucy’s first words were, “It’s not your fault.” Throughout her ordeal, Chessy reports how critical hearing those words were to her, both in that moment as well as over the next two years. Similarly, when Chessy tells her mother about the assault on the phone the next day, Susan’s first words are, “It’s going to be O.K. Are you safe right now?” Tragically, such supportive initial responses from loved-ones are atypical. Victims often delay telling loved-ones out of shame — if they tell them at all. It is far more common for family members to respond with “Why didn’t you tell me?!?” or victim-blaming, such as, “Why did you go out with him?!” Sadly, these only result in heightening the victim’s shame. Over time Chessy’s support network gradually extends well beyond her family, connecting her with other survivors and advocacy organizations, some of whom she is working with as an advocate today. No one survives alone. Chessy with her parents, Alex and Susan Prout, at her high school graduation. Photo: The Prout Family Shame Shame is central to the sexual abuse survivor experience. Meanwhile, shame is so powerful that most of us are ashamed … to even talk about shame! As a result, the shame persists. I define shame, and it’s close-cousin guilt, for my clients in the following way. Guilt is the emotion we feel when we know we’ve done something wrong. Shame is the emotion we feel when other people know we’ve done something wrong. It includes the fear of being found out and cast out by others. It is the opposite of a sense of belonging, acceptance and dignity. Lucy and Susan’s initial responses to Chessy laid the foundation for the sense of belonging and acceptance that her family provided throughout. This served as a potent antidote to the subsequent deliberate shaming by the school and her classmates. One could frame Chessy’s journey over the next two years as successfully shedding that shame. Dissociation A little-known aspect of trauma is the capacity for our brains to make us “check-out” during moments of trauma. This hard-wired response allows us to survive the pain. If being fully aware of our five senses and what is happening to us is being fully associated with reality, the opposite is being dissociated. Sadly, dissociation is frequently over-looked and misunderstood by even members of the mental health profession. Chessy does an enormous service by offering an intimate description of her experience of dissociation in terms that anyone can understand. During the assault, she writes that she repeatedly tried to resist verbally, “but the pipeline that delivered words to my mouth was gone…. I felt paralyzed…. I…felt myself float above my body…. I was lifeless…. 
I couldn’t feel my body anymore.” Such muteness, freezing and numbness during trauma has nothing to do with lack of courage, will, or vocabulary. It is a result of the brain switching into survival mode. This includes the area of the prefrontal cortex that is responsible for speech processing going off-line. Such basic brain science can help survivors feel less guilt and shame over their response during an assault Emotional Regulation It’s normal to have a complicated relationship with our feelings. But in the end, feelings are nothing more than feedback, messages in a bottle that offer guidance about our needs and choices — if we’re willing to listen to them. Meanwhile feelings are a lot like the weather: sometimes they’re sunny, sometimes they’re cloudy, but they never last forever. Healthy families foster healthy emotional regulation by modeling the appropriate expression and processing of feelings. This is facilitated by what is called emotional attunement between family members, that is, the experience of “feeling felt” by others on an on-going basis. Before reading Chessy’s book, I had various clinical ways of describing a healthy family. After reading the book, here is how I would describe a healthy family. Each phrase is derived from a scene in the book. A healthy family is a family where: when someone is afraid, they say so, and someone is there to hold their hand; when someone is sad, they cry, and someone is there to hug them; when someone is angry, they vent, and someone is there to hear them; when someone needs solitude, they say so, and others give them space; and when someone is joyful, they celebrate, and someone is there to cheer them on. Post-Traumatic Growth Post-traumatic growth describes the personal growth that occurs — sometimes with startling speed — once someone has recovered from trauma. At first this term may sound like an oxymoron. How can trauma cause growth? Such growth occurs because healing from trauma releases the survivor from the shackles of shame and the avoidance of the people, places, and things that used to be triggering. Once liberated from these, the world becomes their oyster once again. I can’t think of a more compelling example of post-traumatic growth than Chessy Prout. Her inspiring journey from victim to survivor to thriver and impassioned advocate contains all the elements of successful trauma recovery. In Vulnerability, Strength These are only a fraction of the lessons I Have the Right To offers. In addition to Chessy’s narrative, the book ends with a heart-felt open letter by Susan and Alex to other parents, including an ample list of resources. Trauma pioneer Judith Herman characterized the essence of trauma as the experience of disempowerment and disconnection. Trauma recovery, therefore, is a process of cultivating empowerment through a sense of connection. This is the essence of Chessy’s journey. Nothing dismantles stigma like a first-person narrative of success. At an author talk at Simmons College, Chessy ended the Q&A with this: “I’m no longer afraid to speak my mind. I don’t care if I’m called a bitch or bossy. When men speak their minds, they’re called ‘confident’, they’re called ‘leaders’. And I’m not going away.” On behalf of my clients and colleagues, Chessy, I’m so very glad. Epilogue Chessy Prout is now a thriving sophomore in college and active advocate for the non-profit PAVE (Promoting Awareness, Victim Empowerment). Jenn Abelson is now an investigative reporter for the Washington Post. 
Susan and Alex Prout are co-founders of I Have the Right To, a victims’ rights and advocacy organization. Alex is also founding chair of the Solidarity Council of Vital Voices, a global leadership development organization for women, and a recipient of their Voices of Solidarity Award for 2018. St. Paul’s School has acknowledged a decades-long history of sexual misconduct and is now under the oversight of the New Hampshire State Attorney General’s Office in lieu of criminal prosecution for child endangerment.
https://medium.com/fourth-wave/why-a-trauma-therapist-recommends-chessy-prouts-story-e087ba4d8106
['Peter Pruyn']
2020-05-22 14:35:57.346000+00:00
['Women', 'Recovery', 'Trauma', 'Sexual Assault', 'Books']
Four Useful Questions About Starting a Career in Data Science, and Their Answers
Photo by Emily Morter on Unsplash There are countless discussions about getting started in a career in data science, and many young professionals and students are setting up their career path in this field. In fact the search term “Data Science Advice” gives about 1.2 billion results in Google! So this article, instead of giving direct advice, focuses on answering a few specific questions that I have repeatedly answered over the past few years being a data science enthusiast. So, without much ado, let’s start! Photo by Jon Tyson on Unsplash Question 1 — What are the requirements/skills/pre-requisites to be a Data Scientist? Ans: The term data scientist is used quite broadly these days. There are data scientists in business, medical sciences, genetic engineering, computer science etc. As such, one major requirement will be based on what goal you are targeting. Based on which field you want to apply data science to, the first thing you will need is Domain Knowledge. This is an advice I share based on my experience in this field. We have many tools in Data Science for understanding how different variables impact results and which variables are important for a model, but time and again, in many use cases (actually, almost all), I have found it is the in-depth understanding of the domain that sets apart a really good model and analysis! The next thing is a good conceptual idea on Statistics and Differential Calculus. Initially, when you are going to do exploratory data analysis, patterns, dimensionality reduction, imputing missing data, and pre-processing data for machine learning models, and when carrying out statistical analysis such as regression, you will need a good idea on Statistics. When you advance further, Calculus will help as well — for optimization problems, ensemble models, gradient boosting and many other advanced methods. Finally, in terms of software skills, you can start with PowerBI or Tableau, for data intelligence and insights, and quick dashboards. But then in order to actually venture into data science, you will need to learn to code. For that you will need either R or Python. There is a big debate on which one to start with, and there are pros and cons to both the languages, but frankly, I think understanding one will help you understand the other, and I end up working interchangeably between the two, based on the availability of the right package! Question 2: Does Data Science have any international certification course? How much does it cost? Ans: There are many. There are online courses, diplomas, and masters and undergraduate degrees. And the cost varies a lot. But here, I will again share my humble opinion — in the end the degree will not determine your outcome to be a data scientist. Data Science is a very applied field, and as such, for you to be employed as a data scientist or make a mark in the market, you need to have projects. It doesn’t only mean work projects. There are loads of data available in Kaggle, Google Database Search, UCI Machine Learning repository, and just take a data set, and work on it, trying to find patterns and trying to make predictions, measuring accuracy of prediction models etc. In fact, I am a huge proponent on trial and error learning — keep on trying different projects, and you will fail do initially do what you wanted to do, and then learn how to do it, and that learning will stick! Photo by Max Chen on Unsplash Since Data Science is an ever changing field, the contents and the tools are also ever changing. 
So to be very honest, no university curriculum or online course will be able to cover everything, especially topics that are new or on the fringe. We try to keep ourselves updated through blogs, newly published methods, academic research, and these data projects. You can create these projects and store them on GitHub. When I am checking the ability of a data scientist applicant, I will actually prefer to go through her GitHub repository or Kaggle discussions! Question 3: How much does it take to become a Data Scientist? Ans: Again, no clear answer. I can’t even suggest a range for the monetary costs. It completely depends on which path you choose. If you go for a master’s, the costs will be higher, but you will have an accredited certification. If you rely on online courses and find materials yourself, the cost will be much less, but the value of the certification will be less as well. But remember my answer to the second question — what matters more is the projects you have. When looking at cost from the perspective of time, it’s quite high. In my case, I was a full-time university lecturer in a related field, and it took me almost 1 to 1.5 years to slowly learn and dive deeply into both the concepts and the coding. It depends on how much time you can give. I used to spend at least 4–5 hours per week. The time commitment also varies based on whether you already know how to code (in which case it will be less than the time span I mentioned) and whether you have some basic background knowledge in statistics and algebra (if you don’t, the time span will be a little bit higher!) Photo by Antoine Dautry on Unsplash But a very important point here is that the learning never ends — I end up learning new stuff almost every other day, and have a certain time in the week dedicated to learning! Question 4: Where can I get proper guidelines regarding Data Science? Ans: Similar to my answer to the second question, unfortunately there is no single institution, group, or platform that holistically covers everything you need to know about Data Science. And that, in my opinion, is the fun of this field! You end up wandering around, and at times, you find something better! Regardless, there are many groups on social media and in places like GitHub, Kaggle, and Medium where you can always interact and keep abreast of new information. I think the best advice here is to learn and collect information from many sources. Medium has some very good articles and publications. The DataCamp blog is quite good. Kaggle’s blog and following Kaggle competitions also help a lot! Finally, GitHub has some really good code, libraries, and data projects.
https://medium.com/intelligentmachines/four-useful-questions-about-starting-a-career-in-data-science-and-their-answers-37fd35bf142e
['Khan Muhammad Saqiful Alam']
2020-06-28 07:40:23.433000+00:00
['Big Data', 'Analytics', 'Skills', 'Career Paths', 'Data Science']
Development and Testing of AWS without AWS: Localstack
Development and Testing of AWS without AWS: Localstack Localstack provides an AWS-like environment that can be used for development and testing without needing AWS available. AWS provides a lot of useful tools and services for developers: Serverless, Lambda functions, SNS topics, DynamoDB, S3 storage. When we start developing something connected to such services, we can obviously use one of the test environments provided by Amazon (LAB). Nonetheless, there is also a “local way” to go: Localstack. Reduce costs and save time! Localstack provides an AWS-like environment that can be used for development and/or testing purposes. Will your service use an SNS topic? Then, you can start the development with LocalStack. Will your app need information from a Lambda? Then, you can develop it using LocalStack. Will your frontend need S3 resources? Then, you can use LocalStack S3. How does it work? LocalStack starts inside a Docker container and implements a large set of the AWS cloud APIs. Installing The easiest way to install LocalStack is via pip: pip install localstack Running in Docker You can run LocalStack with the command localstack start or with a docker-compose yml file and docker-compose up An example compose yml file is version: '2.1' services: ... localstack: image: localstack/localstack ports: - "4567-4599:4567-4599" - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}" environment: - SERVICES=${SERVICES- } - DEBUG=${DEBUG- } - DATA_DIR=${DATA_DIR- } - PORT_WEB_UI=${PORT_WEB_UI- } - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- } - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- } - DOCKER_HOST=unix:///var/run/docker.sock volumes: - "${TMPDIR:-/tmp/localstack}:/tmp/localstack" How did we use it? We needed to integrate a new service into an already existing system that automatically pushes messages to a Slack channel using: SNS topic, Lambda function, DynamoDB, Node.js app. SNS topic: a notification topic that receives all the triggers; Lambda: a serverless function provided by AWS; DynamoDB: a database that stores the needed configurations; Node.js app: an app with all the required logic. To test and verify the behaviour of the whole system, we set up Localstack to act as the needed AWS services, so we added a YML file that spun up what we wanted: version: '2.1' services: localstack: container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}" image: localstack/localstack ports: - "4567-4597:4567-4597" - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}" environment: - SERVICES=lambda,dynamodb - DATA_DIR=${DATA_DIR- } - DEBUG=1 - DEFAULT_REGION=us-west-2 - PORT_WEB_UI=${PORT_WEB_UI- } - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- } - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- } - DOCKER_HOST=unix:///var/run/docker.sock volumes: - "${TMPDIR:-/tmp/localstack}:/tmp/localstack" - "/var/run/docker.sock:/var/run/docker.sock" As you can see, we set up the Lambda and DynamoDB services (we didn’t add the SNS topic because it was of less interest) - SERVICES=lambda,dynamodb After the setup we needed to import and configure data and the Lambda for our needs. To do this, we inserted a bash script in the repo that: 1. Creates the lambda package gulp zip 2.
Creates a dynamodb table and inserts data in it awslocal dynamodb create-table --table-name MyTableName --attribute-definitions AttributeName=slackChannel,AttributeType=S --key-schema AttributeName=slackChannel,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 awslocal dynamodb put-item --table-name MyTableName --item '{"somekey":{"somesubkey":{"key":{"S":"value"}}},"somethingelse":{"S":"anothervalue"}}' 3. Deploys the lambda function awslocal lambda create-function --function-name myLambda --handler index.handler --environment '{"Variables":{"var1":"val1","var2":"val2"}}' --runtime nodejs10.x --role whatever --zip-file fileb://build/myLambda.zip N.B. awslocal is a command that is basically an alias for aws --endpoint-url=http://localhost:4568; it acts like the aws command, but locally. As you can see, you can also specify the runtime for the Lambda, in this case nodejs10.x. All this setup lives in a separate bash script file that, once localstack is up and running, populates it with the required function and data. This project took me less time than I expected — and I admit to being very cautious when it comes to complexity estimation. Localstack is a very powerful tool, and when you need to test and develop something that requires AWS resources, it can definitely save you time (and some money as well, to be honest!). During this journey we learned about some of the AWS capabilities, LocalStack, Lambda, DynamoDB and testing. Ref. Localstack repo: https://github.com/localstack/localstack Thanks for reading Miro Barsocchi: Software tester and also electronic engineer, radio speaker, actor, surfer, barman, but only two of these are serious. You can find me on Twitter or Github or elsewhere.
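As a quick way to verify a setup like this from code, the same local endpoints can be exercised with boto3. The following is only a minimal sketch under stated assumptions: the endpoint URL comes from the awslocal note above (newer LocalStack releases use the single edge port http://localhost:4566 instead, older ones use per-service ports), the table and function names come from the commands in the script, and the credentials are dummy values since LocalStack does not validate them.

```python
import json
import boto3

ENDPOINT = "http://localhost:4568"  # adjust to your LocalStack version's endpoint
REGION = "us-west-2"                # matches DEFAULT_REGION in the compose file
CREDS = dict(aws_access_key_id="test", aws_secret_access_key="test")  # dummy values

dynamodb = boto3.client("dynamodb", endpoint_url=ENDPOINT, region_name=REGION, **CREDS)
lambda_client = boto3.client("lambda", endpoint_url=ENDPOINT, region_name=REGION, **CREDS)

# Confirm the table created by the bash script exists and holds the config item.
print(dynamodb.list_tables()["TableNames"])
print(dynamodb.scan(TableName="MyTableName")["Items"])

# Invoke the deployed Lambda the same way the real service would.
response = lambda_client.invoke(
    FunctionName="myLambda",
    Payload=json.dumps({"test": True}).encode("utf-8"),
)
print(response["StatusCode"], response["Payload"].read())
```

The same checks can of course be done with awslocal on the command line; the boto3 version is handy when you want to fold them into automated tests.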
https://medium.com/expedia-group-tech/development-and-testing-of-aws-without-aws-localstack-ab02f9425c40
['Miro Barsocchi']
2020-06-30 13:01:01.432000+00:00
['Software Development', 'Localstack', 'Testing', 'Programming', 'AWS']
Machine Learning (ML) vs. Artificial Intelligence (AI) — Crucial Differences
Machine Learning (ML) vs. Artificial Intelligence (AI) — Crucial Differences Unfortunately, some tech organizations are deceiving customers by proclaiming to use machine learning (ML) and artificial intelligence (AI) in their technologies while not being clear about their products’ limits October 15, 2018, by Roberto Iriondo — Last updated: November 12, 2020 Recently, a report was released regarding companies misusing claims of artificial intelligence [29] [30] in their products and services. According to The Verge [29], 40% of European startups that claim to use AI don’t actually use the technology. Last year, TechTalks also stumbled upon such misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and examine thousands of users’ data to enhance user experience in their products and services [2] [33]. Unfortunately, there’s still much confusion within the public and the media regarding what genuinely is artificial intelligence [44] and what exactly is machine learning [18]. Often the terms are used as synonyms. In other cases, they are treated as discrete, parallel advancements, while others take advantage of the trend to create hype and excitement in order to increase sales and revenue [2] [31] [32] [45]. 📚 Check out our editorial recommendations on the best machine learning books. 📚 Below we go through some of the main differences between AI and machine learning. What is machine learning? What is Machine Learning | Tom M. Mitchell, Machine Learning, McGraw Hill, 1997 [18] Quoting Interim Dean at the School of Computer Science at CMU, Professor and Former Chair of the Machine Learning Department at Carnegie Mellon University, Tom M. Mitchell: A scientific field is best defined by the central question it studies. The field of Machine Learning seeks to answer the question: “How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes? [1]” Machine learning (ML) is a branch of artificial intelligence, and as defined by Computer Scientist and machine learning pioneer [19] Tom M. Mitchell: “Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.” [18] — ML is one of the ways we expect to achieve AI. Machine learning relies on working with small to large datasets, examining and comparing the data to find common patterns and explore nuances. For instance, if you provide a machine learning model with many songs that you enjoy, along with their corresponding audio statistics (danceability, instrumentality, tempo, or genre), it ought to be able (depending on the supervised machine learning model used) to generate a recommender system [43] that suggests music you are likely to enjoy in the future, similar to what Netflix, Spotify, and other companies do [20] [21] [22]. In a simpler example, if you load a machine learning program with a considerably large dataset of x-ray pictures along with their descriptions (symptoms, items to consider, and others), it ought to have the capacity to assist with (or perhaps automate) the data analysis of x-ray pictures later on. The machine learning model looks at each picture in the diverse dataset and finds common patterns among pictures whose labels carry comparable indications.
Furthermore, (assuming that we use an acceptable ML algorithm for images) when you load the model with new pictures, it compares its parameters with the examples it has gathered before to reveal how likely it is that the pictures contain any of the indications it has analyzed previously. Supervised Learning (Classification/Regression) | Unsupervised Learning (Clustering) | Credits: Western Digital [13] The type of machine learning in our previous example is called “supervised learning,” where supervised learning algorithms try to model relationships and dependencies between the target prediction output and the input features, such that we can predict the output values for new data based on the relationships learned from the previously fed datasets [15]. Unsupervised learning, another type of machine learning, is the family of machine learning algorithms mainly used for pattern detection and descriptive modeling. These algorithms do not have output categories or labels on the data (the model trains with unlabeled data); a minimal sketch contrasting supervised and unsupervised learning follows at the end of this passage. Reinforcement Learning | Credits: Types of ML Algorithms you Should Know by David Fumo [3] Reinforcement learning, the third popular type of machine learning, aims to use observations gathered from interaction with the environment to take actions that maximize the reward or minimize the risk. In this case, the reinforcement learning algorithm (called the agent) continuously learns from its environment through iteration. A great example of reinforcement learning is computers reaching a super-human level and beating humans at computer games [3]. Machine learning can be dazzling, particularly its advanced sub-branches, i.e., deep learning and the various types of neural networks. In any case, it is not “magic” (it is computational learning theory) [16], regardless of whether the public, at times, has trouble observing its inner workings. While some tend to compare deep learning and neural networks to the way the human brain works, there are essential differences between the two [2] [4] [46].
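To make the supervised/unsupervised distinction above concrete, here is a toy sketch using scikit-learn’s bundled iris dataset (an assumption of this illustration, not an example from the article): the supervised classifier learns from labeled examples, while the clustering algorithm must find structure in the same measurements without ever seeing the labels.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: the model sees both the features and the target labels,
# and learns a mapping it can apply to new, unseen examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: the model sees only the features and groups similar
# examples together; no labels are provided during training.
clusters = KMeans(n_clusters=3, random_state=0).fit_predict(X)
print("cluster assignments for the first 10 samples:", clusters[:10])
```

The data is the same in both cases; what changes is whether the target labels are available to the algorithm, which is exactly the distinction drawn above.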
For instance, Deep Blue, the AI that defeated the world’s chess champion in 1997, used a method called tree search [8] to evaluate millions of moves at every turn [2] [37] [52] [53]. As we know it today, AI is symbolized by human-AI interaction gadgets such as Google Home, Siri, and Alexa, and by the machine-learning-powered recommendation systems that power Netflix, Amazon, and YouTube. These technological advancements are progressively becoming essential in our daily lives. They are intelligent assistants that enhance our abilities as humans and professionals — making us more productive. In contrast to machine learning, AI is a moving target [51], and its definition changes as the related technological advancements become further developed [7]. Possibly, within a few decades, today’s innovative AI advancements will be considered as dull as flip phones are to us right now. Why do tech companies tend to use AI and ML interchangeably? “… what we want is a machine that can learn from experience.” ~ Alan Turing The term “artificial intelligence” was coined in 1956 by a group of researchers, including Allen Newell and Herbert A. Simon [9]. Since then, the AI industry has gone through many fluctuations. In the early decades, there was much hype surrounding the industry, and many scientists concurred that human-level AI was just around the corner. However, undelivered promises caused a general disenchantment with the industry among the public and led to the AI winter, a period when funding and interest in the field subsided considerably [2] [38] [39] [48]. Afterward, organizations attempted to distance themselves from the term AI, which had become synonymous with unsubstantiated hype, and used different names to refer to their work. For instance, IBM described Deep Blue as a supercomputer and explicitly stated that it did not use artificial intelligence [10], while it did [23]. During this period, various other terms, such as big data, predictive analytics, and machine learning, started gaining traction and popularity [40]. In 2012, machine learning, deep learning, and neural networks made great strides and found use in a growing number of fields. Organizations suddenly started to use the terms “machine learning” and “deep learning” to advertise their products [41]. Deep learning began to perform tasks that were impossible to do with classic rule-based programming. Fields such as speech and face recognition, image classification, and natural language processing, which were at early stages, suddenly took great leaps [2] [24] [49], and in March 2019, three of the most recognized deep learning pioneers won a Turing Award thanks to contributions and breakthroughs that have made deep neural networks a critical component of modern computing [42]. Hence the momentum: we see a gearshift back to AI. For those who were used to the limits of old-fashioned software, the effects of deep learning almost seemed like “magic” [16], especially since a fraction of the fields that neural networks and deep learning are entering were considered off-limits for computers. Nowadays, machine learning and deep learning engineers earn high salaries, even when they work at non-profit organizations, which speaks to how hot the field is [50] [11]. Sadly, this is something that media companies often report without deep examination, frequently accompanying AI articles with pictures of crystal balls and other supernatural portrayals.
Such deception helps those companies generate hype around their offerings [27]. Yet, down the road, as they fail to meet expectations, these organizations are forced to hire humans to make up for their so-called AI [12]. In the end, they might cause mistrust in the field and trigger another AI winter for the sake of short-term gains [2] [28]. I am always open to feedback; please share in the comments if you see something that may need to be revisited. Thank you for reading! Acknowledgments: The author would like to extensively thank Ben Dickson, Software Engineer and Tech Blogger, for kindly allowing me to rely on his expertise and storytelling, along with several members of the AI community for their immense support and constructive criticism in the preparation of this article. DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University or other companies (directly or indirectly) associated with the author(s). These writings are not intended to be final products, but rather a reflection of current thinking, as well as a catalyst for discussion and improvement.
https://medium.com/towards-artificial-intelligence/differences-between-ai-and-machine-learning-and-why-it-matters-1255b182fc6
['Roberto Iriondo']
2020-12-09 23:40:22.383000+00:00
['Artificial Intelligence', 'Machine Learning', 'Data Science', 'Technology', 'Deep Learning']
Build File Upload/Download Functionality with Image Preview
Build File Upload/Download Functionality with Image Preview And also learn how to add a drag-and-drop feature to upload any type of file File Upload Introduction In this article, we will create file upload and download functionality with a preview of the image using the MERN stack. By creating this app, you will learn: How to upload a file using drag and drop How to upload and download any type of file How to restrict the type of the file while uploading How to restrict the size of the file while uploading How to show a preview of the image after selecting it How to use MongoDB to store and get the details of the file and much more. Instead of storing the file in the MongoDB database as base64-encoded data, we will store the file on the server and only store the path of the file inside the database, so as to keep the database size smaller and to easily access and move the files as needed. We’re using the very popular react-dropzone npm library for implementing the drag-and-drop functionality. For the actual file upload, we’re using the multer npm library, which is also very popular for uploading files. We will be using React Hooks for building this application, so if you’re not familiar with them, check out my previous article here for an introduction to Hooks. We will be using the MongoDB database, so make sure you install it locally by following my previous article here. Initial Setup Create a new project using create-react-app create-react-app react-upload-download-files Once the project is created, delete all files from the src folder and create index.js and styles.scss files inside the src folder. Also, create components , router and utils folders inside the src folder. Install the necessary dependencies: yarn add [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] Open styles.scss and add the contents from here inside it.
Creating Initial Pages Create a new file with the name Header.js inside the components folder with the following content: import React from 'react'; import { NavLink } from 'react-router-dom'; const Header = () => { return ( <div className="header"> <h1>File Upload And Download</h1> <nav> <NavLink activeClassName="active" to="/" exact={true}> Home </NavLink> <NavLink activeClassName="active" to="/list"> Files List </NavLink> </nav> </div> ); }; export default Header; Create a new file with the name App.js inside the components folder with the following content: import React, { useState, useRef } from 'react'; import { Form, Row, Col, Button } from 'react-bootstrap'; const App = (props) => { const [file, setFile] = useState(null); // state for storing actual image const [previewSrc, setPreviewSrc] = useState(''); // state for storing previewImage const [state, setState] = useState({ title: '', description: '' }); const [errorMsg, setErrorMsg] = useState(''); const [isPreviewAvailable, setIsPreviewAvailable] = useState(false); // state to show preview only for images const dropRef = useRef(); // React ref for managing the hover state of droppable area const handleInputChange = (event) => { setState({ ...state, [event.target.name]: event.target.value }); }; const handleOnSubmit = async (event) => { event.preventDefault(); }; return ( <React.Fragment> <Form className="search-form" onSubmit={handleOnSubmit}> {errorMsg && <p className="errorMsg">{errorMsg}</p>} <Row> <Col> <Form.Group controlId="title"> <Form.Control type="text" name="title" value={state.title || ''} placeholder="Enter title" onChange={handleInputChange} /> </Form.Group> </Col> </Row> <Row> <Col> <Form.Group controlId="description"> <Form.Control type="text" name="description" value={state.description || ''} placeholder="Enter description" onChange={handleInputChange} /> </Form.Group> </Col> </Row> <Button variant="primary" type="submit"> Submit </Button> </Form> </React.Fragment> ); }; export default App; In this file, we’re rendering a form to add the title and description for now. We will add the option to add the file later in this article. For each input field, we have added an handleInputChange handler that updates the state of each input field. We have added a name attribute to each input field which matches exactly with the name of the state variables so we're able to use ES6 shorthand syntax for updating the state. const handleInputChange = (event) => { setState({ ...state, [event.target.name]: event.target.value }); }; In the case of Hooks, the state is not merged automatically, so we’re first spreading all the properties of the state and then updating the respective input field. 
Create a new file with name AppRouter.js inside the router folder with the following content: import React from 'react'; import { BrowserRouter, Switch, Route } from 'react-router-dom'; import App from '../components/App'; import Header from '../components/Header'; const AppRouter = () => ( <BrowserRouter> <div className="container"> <Header /> <div className="main-content"> <Switch> <Route component={App} path="/" exact={true} /> </Switch> </div> </div> </BrowserRouter> ); export default AppRouter; Now, open src/index.js file and add the following contents inside it: import React from 'react'; import ReactDOM from 'react-dom'; import AppRouter from './router/AppRouter'; import 'bootstrap/dist/css/bootstrap.min.css'; import './styles.scss'; ReactDOM.render(<AppRouter />, document.getElementById('root')); Now, start the application by executing the yarn start command from the terminal. You will see the following screen: Adding File Upload Functionality Now, let’s add the option to upload the file from the UI. Open src/App.js file and before the submit button and after the ending Row tag, add the following code <div className="upload-section"> <Dropzone onDrop={onDrop}> {({ getRootProps, getInputProps }) => ( <div {...getRootProps({ className: 'drop-zone' })} ref={dropRef}> <input {...getInputProps()} /> <p>Drag and drop a file OR click here to select a file</p> {file && ( <div> <strong>Selected file:</strong> {file.name} </div> )} </div> )} </Dropzone> {previewSrc ? ( isPreviewAvailable ? ( <div className="image-preview"> <img className="preview-image" src={previewSrc} alt="Preview" /> </div> ) : ( <div className="preview-message"> <p>No preview available for this file</p> </div> ) ) : ( <div className="preview-message"> <p>Image preview will be shown here after selection</p> </div> )} </div> Here, we’re using the DropZone component with React render props pattern where the text we need to display in the drop area is added after the input field inside the DropZone component. Add the import for DropZone and axios at the top of the App.js file. import Dropzone from 'react-dropzone'; import axios from 'axios'; Add the OnDrop function after the handleInputChange handler. const onDrop = (files) => { const [uploadedFile] = files; setFile(uploadedFile); const fileReader = new FileReader(); fileReader.onload = () => { setPreviewSrc(fileReader.result); }; fileReader.readAsDataURL(uploadedFile); setIsPreviewAvailable(uploadedFile.name.match(/\.(jpeg|jpg|png)$/)); }; Here, the onDrop function receives a files array with the dropped or selected files. We’re uploading only one file at a time so the uploaded file will be available files[0] so we’re using array destructuring syntax to get that file value. const [uploadedFile] = files; To display the preview of the image, we’re using JavaScript FileReader API. To convert the file to dataURL we call the fileReader.readAsDataURL method. Once the file is successfully read as dataURL , the onload function of fileReader will be called. fileReader.onload = () => { setPreviewSrc(fileReader.result); }; The result of the read operation will be available in the result property of the fileReader which we're assigning to the previewSrc state variable. We’re showing preview only for images so we’re checking if the uploaded file is of correct format (only jpg, jpeg and png image) and updating the state of previewAvailable variable. 
setIsPreviewAvailable(uploadedFile.name.match(/\.(jpeg|jpg|png)$/)); Now, restart the application by running the yarn start command and verify the functionality. Here, we’ve added a file by browsing it. You can even add a file by drag and drop as shown below: If you select a file, other than an image, we’ll not show the preview indicated by the message No preview available for this file . Add drop indication If you saw the drop functionality, we’re not showing any indication that the file is being dropped into the drop area so let’s add that. We’ve already added a ref to the div with class drop-zone inside the App.js file. <div {...getRootProps({ className: 'drop-zone' })} ref={dropRef}> and also created the dropRef variable at the top using useRef hook. Add the onDragEnter and onDragLeave props to the Dropzone component. <Dropzone onDrop={onDrop} onDragEnter={() => updateBorder('over')} onDragLeave={() => updateBorder('leave')} > The onDragEnter function will be triggered when the file is over the drop area and onDragLeave function will be triggered when the file is removed from the drop area. Create a new updateBorder function inside the App component before the handleOnSubmit handler. const updateBorder = (dragState) => { if (dragState === 'over') { dropRef.current.style.border = '2px solid #000'; } else if (dragState === 'leave') { dropRef.current.style.border = '2px dashed #e9ebeb'; } }; As we’ve added the dropRef ref to the div with class drop-zone , it will point to that div and we can use its current property to update the border of the drop area using dropRef.current.style.border . Also, inside the onDrop function, add the following line at the end of the function. dropRef.current.style.border = '2px dashed #e9ebeb'; so when we drop the file over the drop area, the border will return to its normal state. Now, If you check the application, you will see the dropping effect with the changing border. Calling API For File Upload Create a new file with the name constants.js inside the src/utils folder with the following content export const API_URL = 'http://localhost:3030'; We will be starting our Express server on port 3030 soon so we have mentioned that here. Now, let’s write the code inside the handleOnSubmit handler of App.js to call the backend API. Replace the handleOnSubmit handler with the following code const handleOnSubmit = async (event) => { event.preventDefault(); try { const { title, description } = state; if (title.trim() !== '' && description.trim() !== '') { if (file) { const formData = new FormData(); formData.append('file', file); formData.append('title', title); formData.append('description', description); setErrorMsg(''); await axios.post(`${API_URL}/upload`, formData, { headers: { 'Content-Type': 'multipart/form-data' } }); } else { setErrorMsg('Please select a file to add.'); } } else { setErrorMsg('Please enter all the field values.'); } } catch (error) { error.response && setErrorMsg(error.response.data); } }; Also, import the API_URL at the top of the file. import { API_URL } from '../utils/constants'; Inside the handleOnSubmit handler, we're first checking if the user has entered all the field values and selected the file and we're making an API call to /upload API which we will be writing in the next section. await axios.post(`${API_URL}/upload`, formData, { headers: { 'Content-Type': 'multipart/form-data' } }); We’re making a POST request with the formData object and sending title , description and the actual file to the API. 
Note that, mentioning the content type of multipart/form-data is very important otherwise the file will not be sent to the server. Adding server-side code for file upload Now, let’s add the server-side functionality to upload the file. Create a folder with name server inside the react-upload-download-files folder and execute the following command from the server folder yarn init -y This will create a package.json file inside the server folder. Install the required dependencies by executing the following command from the terminal from inside the server folder yarn add [email protected] [email protected] [email protected] [email protected] [email protected] Create a new file with the name .gitignore inside the server folder and add the following line inside it so node_modules folder will not be added in your Git repository. node_modules Now create db , files , model , routes folder inside the server folder. Also, create index.js inside the server folder. Inside the server/db folder, create a new file db.js with the following content const mongoose = require('mongoose'); mongoose.connect('mongodb://127.0.0.1:27017/file_upload', { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true }); Provide your MongoDB database connection details here. file_upload is the name of the database we will use. Create a new file with name file.js inside the model folder with the following content const mongoose = require('mongoose'); const fileSchema = mongoose.Schema( { title: { type: String, required: true, trim: true }, description: { type: String, required: true, trim: true }, file_path: { type: String, required: true }, file_mimetype: { type: String, required: true } }, { timestamps: true } ); const File = mongoose.model('File', fileSchema); module.exports = File; Here, we have defined the schema for the collection as we’re using a very popular mongoose library to work with MongoDB. We will be storing the title , description , file_path and file_mimetype in the collection so we have described the type of each in this file. Note that, even though we have defined the model name as File , MongoDB creates a plural version of the collection. So the collection name will be files . Now, create a new file with name file.js inside the routes folder with the following content const path = require('path'); const express = require('express'); const multer = require('multer'); const File = require('../model/file'); const Router = express.Router(); const upload = multer({ storage: multer.diskStorage({ destination(req, file, cb) { cb(null, './files'); }, filename(req, file, cb) { cb(null, `${new Date().getTime()}_${file.originalname}`); } }), limits: { fileSize: 1000000 // max file size 1MB = 1000000 bytes }, fileFilter(req, file, cb) { if (!file.originalname.match(/\.(jpeg|jpg|png|pdf|doc|docx|xlsx|xls)$/)) { return cb( new Error( 'only upload files with jpg, jpeg, png, pdf, doc, docx, xslx, xls format.' ) ); } cb(undefined, true); // continue with upload } }); Router.post( '/upload', upload.single('file'), async (req, res) => { try { const { title, description } = req.body; const { path, mimetype } = req.file; const file = new File({ title, description, file_path: path, file_mimetype: mimetype }); await file.save(); res.send('file uploaded successfully.'); } catch (error) { res.status(400).send('Error while uploading file. 
Try again later.'); } }, (error, req, res, next) => { if (error) { res.status(500).send(error.message); } } ); Router.get('/getAllFiles', async (req, res) => { try { const files = await File.find({}); const sortedByCreationDate = files.sort( (a, b) => b.createdAt - a.createdAt ); res.send(sortedByCreationDate); } catch (error) { res.status(400).send('Error while getting list of files. Try again later.'); } }); Router.get('/download/:id', async (req, res) => { try { const file = await File.findById(req.params.id); res.set({ 'Content-Type': file.file_mimetype }); res.sendFile(path.join(__dirname, '..', file.file_path)); } catch (error) { res.status(400).send('Error while downloading file. Try again later.'); } }); module.exports = Router; In this file, as we’re using multer library for handling file upload. We're creating a multer configuration that we're storing in the variable with the name upload . const upload = multer({ storage: multer.diskStorage({ destination(req, file, cb) { cb(null, './files'); }, filename(req, file, cb) { cb(null, `${new Date().getTime()}_${file.originalname}`); } }), limits: { fileSize: 1000000 // max file size 1MB = 1000000 bytes }, fileFilter(req, file, cb) { if (!file.originalname.match(/\.(jpeg|jpg|png|pdf|doc|docx|xlsx|xls)$/)) { return cb( new Error( 'only upload files with jpg, jpeg, png, pdf, doc, docx, xslx, xls format.' ) ); } cb(undefined, true); // continue with upload } }); The multer function takes an object as a parameter with many properties some of which are storage and limits and fileFilter function. The multer.diskStorage function takes an object with destination and filename functions. Here we’re using ES6 function shorthand syntax so destination(req, file, cb) { is same as destination: function(req, file, cb) { The destination and filename function receives three input parameters namely req(request) , file(actual uploaded file object) and cb(callback function) . For the callback function(cb) arguments, If there is an error, it will be passed as the first argument If there is no error, then the first argument will be null or undefined and the second argument will contain the data be passed to the callback function. In the destination function, we pass the path of the folder where we will be storing the uploaded files. In our case, it will be a files folder inside the server folder. In the filename function, we provide the name we want to give for each uploaded file. In our case, it will be current_timestamp_name_of_the_file . For the limits property we specify the maximum file size allowed for the uploaded file. In our case we have provided 1MB as the max file limit. Then inside the fileFilter function, we can decide to either accepts the file to be uploaded or reject it. If the file extension matches with either jpeg|jpg|png|pdf|doc|docx|xlsx|xls then we allow the file to upload by calling the callback function cb(undefined, true) otherwise we will throw an error. If we call cb(undefined, false) inside the fileFilter function, then the file will always be rejected and will not be uploaded. Now, let’s look at the /upload route Router.post( '/upload', upload.single('file'), async (req, res) => { try { const { title, description } = req.body; const { path, mimetype } = req.file; const file = new File({ title, description, file_path: path, file_mimetype: mimetype }); await file.save(); res.send('file uploaded successfully.'); } catch (error) { res.status(400).send('Error while uploading file. 
Try again later.'); } }, (error, req, res, next) => { if (error) { res.status(500).send(error.message); } } ); Here, we’re passing the upload.single function as the second parameter to the /upload route so it will act as a middleware and will be executed first before executing the function body. Note that, the file parameter to the upload.single has to match with the name used while uploading the file in the front-end. Remember the code we used previously for making the API call from the App.js file. const formData = new FormData(); formData.append('file', file); we were adding the file to formData inside the property with the name file . This has to match with the upload.single parameter name otherwise the file upload will not work. Inside the function, we will get the title and description inside the req.body and actual file inside the req.file object just because we've used the multer library. Then we’re passing those values to the object of the File model we created. const file = new File({ title, description, file_path: path, file_mimetype: mimetype }); and calling the save method on the object will actually save the data in the MongoDB database. If the file type does not match with jpeg|jpg|png|pdf|doc|docx|xlsx|xls or the file size is larger than we mentioned (1MB) then the below code will be executed (error, req, res, next) => { if (error) { res.status(500).send(error.message); } }; and we send back the error message to the client(our React Application). Now, open server/index.js file and add the following contents inside it. const express = require('express'); const cors = require('cors'); const fileRoute = require('./routes/file'); require('./db/db'); const app = express(); app.use(cors()); app.use(fileRoute); app.listen(3030, () => { console.log('server started on port 3030'); }); In this file, we’re using Express server to start our Node.js application on port 3030 . We’re also using the cors npm package as a middleware, so we will not get a CORS error when we make an API call from React application running on port 3000 to the Node.js application running on port 3030 . Now, let’s run the application, to check the upload functionality. Open server/package.json file and add the start script inside the scripts property. "scripts": { "start": "nodemon index.js" } Now, open another terminal keeping the React terminal running and execute the following command from inside the server folder yarn start This will start our Node.js express server so we can make API calls to it. Also start the MongoDB database server by running the following command from the terminal(If you have followed this article mentioned previously) ./mongod --dbpath=<path_to_mongodb-data_folder> So now you will have three terminals open: one for React application, one for Node.js server, and another for MongoDB server. Let’s verify the upload functionality now. As you can see, when we upload a file, its added to the files folder, and entry is also in the MongoDB database. So file upload is successful. But we’re not showing any indication on the UI that the file is successfully uploaded. Let’s do that now. 
Create a new file FilesList.js inside the components folder with the following content import React, { useState, useEffect } from 'react'; import download from 'downloadjs'; import axios from 'axios'; import { API_URL } from '../utils/constants'; const FilesList = () => { const [filesList, setFilesList] = useState([]); const [errorMsg, setErrorMsg] = useState(''); useEffect(() => { const getFilesList = async () => { try { const { data } = await axios.get(`${API_URL}/getAllFiles`); setErrorMsg(''); setFilesList(data); } catch (error) { error.response && setErrorMsg(error.response.data); } }; getFilesList(); }, []); const downloadFile = async (id, path, mimetype) => { try { const result = await axios.get(`${API_URL}/download/${id}`, { responseType: 'blob' }); const split = path.split('/'); const filename = split[split.length - 1]; setErrorMsg(''); return download(result.data, filename, mimetype); } catch (error) { if (error.response && error.response.status === 400) { setErrorMsg('Error while downloading file. Try again later'); } } }; return ( <div className="files-container"> {errorMsg && <p className="errorMsg">{errorMsg}</p>} <table className="files-table"> <thead> <tr> <th>Title</th> <th>Description</th> <th>Download File</th> </tr> </thead> <tbody> {filesList.length > 0 ? ( filesList.map( ({ _id, title, description, file_path, file_mimetype }) => ( <tr key={_id}> <td className="file-title">{title}</td> <td className="file-description">{description}</td> <td> <a href="#/" onClick={() => downloadFile(_id, file_path, file_mimetype) } > Download </a> </td> </tr> ) ) ) : ( <tr> <td colSpan={3} style={{ fontWeight: '300' }}> No files found. Please add some. </td> </tr> )} </tbody> </table> </div> ); }; export default FilesList; In this file, initially inside the useEffect hook, we're making an API call to the /getAllFiles API. The /getAllFiles API from routes/file.js looks like this: Router.get('/getAllFiles', async (req, res) => { try { const files = await File.find({}); const sortedByCreationDate = files.sort( (a, b) => b.createdAt - a.createdAt ); res.send(sortedByCreationDate); } catch (error) { res.status(400).send('Error while getting list of files. Try again later.'); } }); Here, we’re calling the .find method of mongoose library on the File model to get the list of all files added in the database and then we're sorting them by the createdAt date in the descending order so we will get the recently added file first in the list. Then we’re assigning the result from the API to the filesList array in the state const { data } = await axios.get(`${API_URL}/getAllFiles`); setErrorMsg(''); setFilesList(data); Then we’re using the Array map method to loop through the array and display them on the UI in a table format. We have also added a download link inside the table. We’re calling the downloadFile function when we click on the download link const downloadFile = async (id, path, mimetype) => { try { const result = await axios.get(`${API_URL}/download/${id}`, { responseType: 'blob' }); const split = path.split('/'); const filename = split[split.length - 1]; setErrorMsg(''); return download(result.data, filename, mimetype); } catch (error) { if (error.response && error.response.status === 400) { setErrorMsg('Error while downloading file. Try again later'); } } }; Inside the downloadFile function, we're making call to the /download/:id API. Note that, we're setting the responseType to blob . This is very important otherwise you will not get the file in the correct format. 
The /download API from routes/file.js file looks like this: Router.get('/download/:id', async (req, res) => { try { const file = await File.findById(req.params.id); res.set({ 'Content-Type': file.file_mimetype }); res.sendFile(path.join(__dirname, '..', file.file_path)); } catch (error) { res.status(400).send('Error while downloading file. Try again later.'); } }); Here, first, we’re checking if any such file exists with the provided id . If it exists then we're sending back the file stored in the files folder by setting the content-type of the file first. Setting the content-type is very important to get the file in the correct format as we're not just uploading images but also doc, xls and pdf files. So to correctly send back the file content, the content-type is required. Once we got the response from the /download API inside the downloadFile function, we're calling the download function provided by the downloadjs npm library. downloadjs is a very popular library for downloading any type of file. You just have to provide the file content, its content type and name of the file you want the file to have while downloading and it will trigger the download functionality of the browser. Now, open router/AppRouter.js file and add a route for the FilesList component. Your AppRouter.js file will look like this now: import React from 'react'; import { BrowserRouter, Switch, Route } from 'react-router-dom'; import App from '../components/App'; import Header from '../components/Header'; import FilesList from '../components/FilesList'; const AppRouter = () => ( <BrowserRouter> <div className="container"> <Header /> <div className="main-content"> <Switch> <Route component={App} path="/" exact={true} /> <Route component={FilesList} path="/list" /> </Switch> </div> </div> </BrowserRouter> ); export default AppRouter; Now, open src/App.js and inside the handleOnSubmit handler after calling the /upload API, add a statement to redirect the user to the FilesList component await axios.post(`${API_URL}/upload`, formData, { headers: { 'Content-Type': 'multipart/form-data' } }); props.history.push('/list'); // add this line So now, once the file is uploaded, we will be redirected to the FilesList component where we will see the list of files uploaded. If there is some error while uploading the file, you will see the error message on the UI and you will not be redirected to the list page. Assuming that, you have executed the yarn start command in two separate terminals for starting React and Node.js application and another terminal for running the MongoDB server. Now, let's verify the application functionality. Uploading Image File Demo Uploading PDF File Demo Uploading Excel File Demo Uploading Doc file Demo Uploading un-supported File Demo As you can see, we’re able to successfully upload and download any type of file which is in our supported format list. Removing the Need of CORS As previously mentioned, to stop getting CORS error while calling API from React App to Node.js App, we’re using cors library at the server-side like this: app.use(cors()); Try removing this line from the file and you will see that the API calls from React to Node.js fail. To prevent this error we’re using the cors middleware. But because of this, anyone in the world can access our APIs directly from their app which is not good for security reasons. So to remove the need of cors, we will run the Node.js and React application on the same port which will also remove the need for running two separate commands. 
So First, remove the use of cors from server/index.js file and also remove the require statement of cors . Then add the following code before the app.use(fileRoute) statement. app.use(express.static(path.join(__dirname, '..', 'build'))); Here, we’re telling express to serve the contents of the build folder statically. The build folder will be created when we run yarn build command for our React App. To learn in details about how this actually work, check out my previous article here and import the path Node.js package at the top of the file. const path = require('path'); Your server/index.js file will look like this now: const express = require('express'); const path = require('path'); const fileRoute = require('./routes/file'); require('./db/db'); const app = express(); app.use(express.static(path.join(__dirname, '..', 'build'))); app.use(fileRoute); app.listen(3030, () => { console.log('server started on port 3030'); }); Now, open the main package.json file add start-app script in the scripts section. "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject", "start-app": "yarn build && (cd server && yarn start)" }, Now, Assuming you have already started the MongoDB server, you just need to run yarn run start-app command from the terminal. This command will create a build folder which will contain all of our React application and then starts our Node.js server on port 3030 . So now, we can access our React and Node.js application on the same 3030 port. So there is no need of running two separate commands and you can access the application at http://localhost:3030/ But there is one issue, If you refresh the /list page, you will get a 404 error. This is because we're starting the App using Express server so when we hit the /list route, it will go to the server to check for that route. But the server does not contain such a route but our React App has that route so to fix this we need to add some code. Open server/index.js file and before the app.listen call, add the following code. app.get('*', (req, res) => { res.sendFile(path.join(__dirname, '..', 'build', 'index.html')); }); The above code will send the build/index.html file back to our React application when we hit any route which is not present on the server-side. So as the /list route is not present on the server-side, out React app will handle that routing as we're redirecting the user to the index.html file. So make sure the above line of code is added after all your server-side routes because the * in app.get will match any route. Your final server/index.js file will look like this now: const express = require('express'); const path = require('path'); const fileRoute = require('./routes/file'); require('./db/db'); const app = express(); app.use(express.static(path.join(__dirname, '..', 'build'))); app.use(fileRoute); app.get('*', (req, res) => { res.sendFile(path.join(__dirname, '..', 'build', 'index.html')); }); app.listen(3030, () => { console.log('server started on port 3030'); }); Now, restart your application by running yarn run start-app command and now refreshing the /list route will not give you a 404 error. Conclusion We have now finished creating the complete file upload and download functionality using MERN stack. You can find the complete source code for this application in this repository. Don’t forget to subscribe to get my weekly newsletter with amazing tips, tricks and articles directly in your inbox here.
https://medium.com/javascript-in-plain-english/implement-file-upload-and-download-functionality-using-mern-stack-with-image-preview-685bb989f4e8
['Yogesh Chavan']
2020-11-12 10:20:43.536000+00:00
['JavaScript', 'Web Development', 'React', 'Nodejs', 'Programming']
Introducing, Fearless She Wrote
In case you missed it, Maggie Lupin, Gillian Sisley, and I have started a new publication. We are thrilled to introduce everyone to Fearless She Wrote! The three of us have been a source of support and inspiration for one another for some time now, and we were happy to lean on each other after each of us received continuous harassment from misogynistic trolls. It felt comforting to have other women writers to converse with and share our pain after being criticized and bullied for being victims of sexual assault. We decided to open up our conversation to every person who has ever been made to feel like they should stay quiet. That’s how Fearless She Wrote was born. Our goal is to shed some light on the harassment that women everywhere receive when they speak up and tell their stories. This is a space to say that we will not be silenced, we will continue to write and share our stories, and to anyone who has ever been criticized for speaking their truth, you are not alone. We hope that Fearless She Wrote will be a place to empower differences, tell stories, and share our lives with one another.
https://medium.com/fearless-she-wrote/introducing-fearless-she-wrote-b1e614269a7
['Jessica Lovejoy']
2020-09-29 16:51:03.385000+00:00
['Feminism', 'Equality', 'Women', 'Writing', 'Sexuality']
Deconstructing Dieter Rams’ ten principles for good design — part 1
Pascal Barry is a designer of digital products, brands, websites and type, and co-founder at akord.com, a tech startup based in Paris. Visit the source at www.blogofpascal.com
https://pascaljb.medium.com/deconstructing-dieter-rams-ten-principles-for-good-design-part-1-1fc5bc2d1e51
['Pascal Barry']
2020-11-04 11:46:39.198000+00:00
['Design Process', 'Product Design', 'Design', 'Dieter Rams', 'UX']
Writing
Haiku is a form of poetry usually inspired by nature, which embraces simplicity. We invite all poetry lovers to have a go at composing Haiku. Be warned. You could become addicted.
https://medium.com/house-of-haiku/writing-3a237e130258
['Liane White']
2020-12-14 19:01:12.410000+00:00
['Tanka', 'Comfort', 'Words', 'Writing', 'Poetry']
Thank You Pandemic for Teaching Me How to Take Care of Myself
Appreciating the positives “Gratitude will shift you to a higher frequency, and you will attract much better things.” — Rhonda Byrne Showing appreciation and gratitude for the things I have and love is one of the most powerful communication tools I have come across. When you show appreciation regularly, it will open doors to a better conversation, a more positive exchange, and ultimately a better relationship. And, the more I practice this, the sooner I feel good. This initially seemed absurd to me but with time, it transformed my life. 2020 was and is the year to have the courage to be enthusiastic. I express appreciation, give compliments, and call out triumphs — no matter how small — openly. If I see something good, I speak up. And this attitude keeps me positive all through the day. We still have a long path to get through this pandemic, but I’m doing my best to manage the toll it takes on my mental and emotional health. It is making it easier to ride out the coming ups and downs. I feel hopeful, ready and happy about the future that is yet to be unfurled, and that is a great feeling. Choosing health “To ensure good health: eat lightly, breathe deeply, live moderately, cultivate cheerfulness, and maintain an interest in life.” -William Londen At the beginning of the year, I often found myself stressing over the disaster called pandemic that was staring me in the eye. This led me to habits like binge-eating and choosing the wrong sorts of foods. I was literally nothing but a couch potato with a huge bag of chips a cola next to me. Within weeks, I could see it’s negative impact manifest in the form of weak health and immunity. It also made major and devasting impacts on my mental health. That’s when I said STOP! At the end and the beginning of the day, it was up to ME to decide to put my wellness first. I started investing in self-care and gradually I could see visible differences. It’s always easier to prevent disease than to manage or cure it after. When I gave up junk and greasy food and dedicated time on physical activities, I became less stressed and unhappy. It’s important to nurture your mind, body, and soul for a balanced approach to your health and wellness. But the best part was the quality of sleep I started having after I chose health. So, yes, love yourself enough that you know what your body and mind need. Tech-NO-logy “The digital innovation that set out to connect people, has slowly started to tear those people apart both from within and without.”― Abhijit Naskar A few months ago, as a millennial, I was obsessed with social media. Though I was very much aware that it was a time-suck, it caused major FOMO whenever I shut it down, and it used to just put me in a really crappy mood overall. Even after repeatedly shunning myself for the obsession, my hands would still reach down and log in to the apps. It was only after my moods started fluctuating that I gradually switched off from it. My life has changed for the better since deleting social media. I now enjoy catching up with myself, my family and friends. It made me realise who my real friends are and how social media takes the joy out of sharing news with people. I also feel less anxious and less depressed now. My life is a private diary and only the true and close people are a part of it. And I guess, I’m going to let it be that way. Of course, this is no way means that I am away from technology on a whole — I do spend some time watching a few of my favourite TV shows and films. 
I spend some parts of my day enjoying a cat video, a podcast, a lecture and a little bit of glamour here and there. The trick is to have self-control though. Pious and proud This might not be for everybody but this is something very near and dear to me. And it transformed my life. What I’m about to share with you is the best thing that happened to me in 2020! Till the year hit the broken paths, I was a little here and there when it came to the concept of God. In short, I was an agnostic Hindu. But the time and troubles of 2020 led me to a path to question the purpose of my life, this world and the definition of the hereafter. I spent days and nights searching the depths of the internet to find an answer to my queries and confusion. No matter where you are in your life, deep down we desperately desire to connect with our Creator. And He wants to connect with us too. I somehow felt that the free time God had gifted me with was the time I had to use for gaining the utmost and superior knowledge ever known to mankind — I wanted to know who God is, His creations and His plans. I spent months studying religions, and this is the best thing I have studied in my life. The peace and happiness I gained after knowing that there is the Supreme power who is taking care of my affairs can never be compared to any happiness I have ever gained. “Do not, then, either lose heart or grieve: for you shall surely gain the upper hand if you are true men of faith. “ — Qur’an 3:139 Every day, I praise God, thank Him, ask Him for help, ask Him to take care of my family and friends, ask Him to forgive me and ask Him to guide me. And every time I do this, I feel refuelled and loved. What a great feeling that is!
https://medium.com/blueinsight/thank-you-pandemic-for-teaching-me-how-to-take-care-of-myself-69ae0fa65b10
['Neha Ravindra']
2020-12-22 16:48:30.108000+00:00
['Blue Insights', 'Life Lessons', 'Self Improvement', 'Coronavirus', 'Pandemic']
60 Things I Learned from the book — You Are a Badass
Recently I completed a book named You Are a Badass: How to Stop Doubting Your Greatness and Start Living an Awesome Life. It is a self-enrichment book. During reading the book, I highlighted some important quotes. Here I am sharing the notes. 1. You can start out with nothing, and out of nothing, and out of no way, a way will be made. 2. You need to go from wanting to change your life to deciding to change your life. If there is a will there is a way. There is a subtle distinction between wanting something and doing something. To change a life, we must apply and keep trying rather than thinking. 3. You’re gonna have to push past your fears, fail over and over again and make a habit of doing things you’re not so comfy doing. 4. If you want to live a life you’ve never lived, you have to do things you’ve never done. The truth of life is, progress made outside of the comfort zone. If we do something that doesn’t feel doing then there is no chance for progress. If a person does the same workout in the gym regularly after a while it will not make any effect on his body. To make a change, he has to change workouts with different loads. 5. Most people are living in an illusion based on someone else’s beliefs. 6. The conscious mind is like a relentless overachiever, incessantly spinning around from thought to thought, stopping only when we sleep. 7. Our subconscious mind, on the other hand, is the non — analytical part of our brain that’s fully developed the moment we arrive here on earth. 8. Our subconscious mind contains the blueprint for our lives. 9. Our conscious mind thinks it’s in control, but it isn’t . Our subconscious mind doesn’t think about anything, but is in control. What we think and what we do, there is a direct relationship between our conscious and subconscious mind. 10. “Coincidence is God’s way of remaining anonymous.” — Albert Einstein This quote reminds me of another saying: “Fortune favors the brave”. We can change our life if we are serious and work hard. And luck will soon or later favor with us. 11. Unless your energy is lined up properly with that which you desire, really desire, any action you take is going to require way more effort to get you where you want to go. 12. ”If you are depressed, you are living in the past. If you are anxious, you are living in the future. If you are at peace, you are living in the present. — Lao Tzu 13. The more time you spend in the moment, the richer your life will be. When we work we should fully concentrate on the moment. It's much better to do one task perfectly than more incomplete tasks. People nowadays are multitasking. Social media hampers our moment. People are too busy to share moments with their virtual friends in social networks rather than enjoying the moment where they are. 14. If we really love ourselves, everything in our life works. — Louise Hay ; author, publisher, the Godmother of Self — Help 15. Self — love, the simplest yet most powerful thing ever, flies right out the window when we start taking in outside information . 16. If you want to turn the ship around, you need to rewire your brain and train it to think differently. 17. Our thoughts become our words, our words become our beliefs, our beliefs become our actions, our actions become our habits, and our habits become our realities. 18. Avoid comparison like the plague. 19. You are responsible for what you say and do. You are not responsible for whether or not people freak out about it . 20. 
What other people think about you has nothing to do with you and everything to do with them. Comparison is not bad if we use it to improve yourself. But sometimes comparison made us sad. Sometimes we are afraid of what people say about us or our actions. Some people also criticize us so badly that it lets us down. We have to be strong to fight this. If we are right about what we do, it does not matter what other people think or say about us. 21. There is such a thing as constructive criticism, and constructive complimenting. 22. If people constantly tell you you’re a good listener, ask yourself, Is this compliment true for me? 23. Instead of wasting hours and days and years trying to figure out your perfect next move, just DO something already. 24. Most answers reveal themselves through doing, not thinking. 25. DO YOUR BEST WHEREVER YOU ARE AT. We must keep rolling to be better ourself. We should fully focus on what we are doing. We must know how to test ourself. 26. Everything you do along your journey contributes to where you’re going. 27. It is better to be hated for what you are than to be loved for what you are not. 28. If you want something badly, even if you don’t have any evidence that it’s possible for you to attain, believe it is anyway. 29. Our minds think in images: If someone says, a horse wearing red lipstick, you instantly create a picture in your mind of a horse wearing red lipstick. 30. Stay away from people with tiny minds and tiny thoughts and start hanging out with people who see limitless possibility as the reality. 31. How you do one thing is how you do everything. 32. We must focus on the positive instead of the list of negatives we’ve collected over time, and keep that focus regardless of what flies in our faces. 33. The more you give, the more you receive. 34. Give however much time or money you can, but do it consistently so it becomes a habit, so it becomes part of who you are. Even five dollars a month counts . 35. You are practically powerless without gratitude. 36. Faith is having the audacity to believe in the not — yet seen. 37. Life is an illusion created by your perception, and it can be changed the moment you choose to change it. 38. You’re the author of your own life — not your parents, not society, not your partner, not your friends. 39. Nothing in this world is permanent, including our stories. Yet we try to hold on to them for false security, which ultimately leads to sorrow and loss. Be willing to let go. 40. When we say we’re unqualified for something, what we’re really saying is that we’re too scared to try it, not that we can’t do it. 41. If you’re serious about changing your life, you’ll find a way. If you’re not, you’ll find an excuse. 42. REMEMBER THAT DONE IS BETTER THAN PERFECT 43. The majority of the pain and suffering in our lives is caused by the unnecessary drama that we create. 44. Once you know what your favorite distractions are, you can build up a good defense against them. 45. Do what you can do in joy, instead of trying to do it all in misery. 46. You absolutely cannot grow a business, get promoted or be a cool parent, and you absolutely will go gray before your time, if you try and do every single little thing by yourself. 47. Put your priorities first — don’t check e — mails or voice messages or Facebook until you’ve gotten into your day and accomplished some of the tasks you want to do. 48. Stress is a leading cause of cancer, heart attacks, liver failure, stupid accidents. 49. 
DON’T THINK OF ANYTHING UPSETTING IN BED AT NIGHT 50. Michael Jordan was cut from his high school basketball team for lack of skill. 51. Steven Spielberg , a high school dropout , was rejected from film school three times. 52. Whenever I asked them what the secret to their success was, the overwhelming majority answered: Tenacity. Be the last person standing. 53. It’s all about contributing to the world by making life easier, happier, safer, healthier, better, tastier, more beautiful, more fun, more interesting, more thoughtful, more loving — whatever you do, bring something good to the party. 54. One of the best things you can do to improve the world is to improve yourself. 55. When you arrive at a level you’ve never been at before, you’re faced with challenges you’ve never experienced before. 56. How do you form a habit ? Decide to. Make it a part of your regular, everyday activities. 57. Start developing successful habits if you want to become a successful person. 58. Don’t decide you’re going to run ten miles a day when you still consider walking to the pizza parlor around the corner a day’s worth of exercise. Start with running half a mile a day and add more as you get stronger. 59. Your mind will follow where your body leads. If you’re in a bad mood and remember to stand up nice and tall and straight, your mood will automatically lift. 60. Nothing is impossible, the word itself says “I’m possible” — Audrey Hepburn; actress, icon, fabulist. Check it out: You Are a Badass: How to Stop Doubting Your Greatness and Start Living an Awesome Life. Conclusion During my reading, I take notes on the following lines from the book. The book link is affiliated.
https://medium.com/level-up-programming/60-things-i-learned-from-a-self-enrichment-book-you-are-a-badass-e2f001dc778a
['Mahmud Ahsan']
2020-10-28 07:40:12.123000+00:00
['Self-awareness', 'Life Lessons', 'Book Review', 'Lifehacks', 'Self Improvement']
Two Powerful Reasons Racism Is Worse than Classism: Racial Essentialism and Segregation
Two Powerful Reasons Racism Is Worse than Classism: Racial Essentialism and Segregation Both the widespread belief that race is biological and hidden aspects of modern segregation guarantee that racism is harsher than classism Rural poverty (L) by Christopher Windus (L). Urban poverty (R) by Gor Davtyan, both courtesy of Unsplash Poverty Sucks. I grew up in rural Northwest Georgia, living on about two dollars per day. So, I understand very well the sting of poverty. I’ve spent lots of time visiting loved ones in prison, and I have several loved ones who’ve been shot. I’ve also seen the bloody, swollen face, and the broken collarbone, of a teenage brother who was badly beaten by the police. A year later, I saw it again. Thankfully, this beating did not break any bones. But it left facial scars. I’ve never been beaten by the police. But I still get nervous any time a police car pulls up beside me. And my blood pressure still shoots up when I go to the doctor for a checkup. This spike in blood pressure happens because, when I was a kid, going to the doctor meant my parents thought I could be dying. Otherwise, we did not go. But I don’t get nervous when I go to the dentist. This is because I never once saw a dentist as a child. My first trip to the dentist happened when I was 20. Dr. Matthews quickly relieved my excruciating toothache by pulling a badly decayed molar. The last time my dad had a toothache, he pulled the bad tooth himself, with a pair of plyers. If there is an upside to all this, it is that I admire and adore dentists. Although many people in my extended family still live in poverty, I do not. Instead, I’m a middle-class psychology professor, and I’ll soon be 60. I’m also White. Further, if not for actor Chris Hemsworth, I’d confidently say I’m wholly and completely straight. Well, there’s also Idris Elba, but you see my point. My wrinkles and bulging midriff notwithstanding, I’m practically the poster boy of White male privilege. But my highly unusual position — having once lived in extreme poverty but now enjoying tremendous privilege — reveals an important lesson about the difference between classism and racism. Racism is worse. As bad as poverty is, racism is worse. This is true for many reasons, but I will focus on just two. Racial essentialism is one reason. Centuries of socialization have convinced most Americans that race is an immutable and deeply meaningful property of the person. Most Americans believe that race has deep biological roots. But ask almost any anthropologist or geneticist, and they’ll tell you that race is largely fictional. Of course, people whose ancestors lived where it was sunny have darker skin and tighter hair than people whose ancestors lived where it is less sunny. But moving beyond a few genes for surface features, people who are considered members of the same race have very little in common genetically. The myth of race persists nonetheless because it justifies a wide range of laws, norms, and social institutions that place people of color below White Americans. So, most people buy into the myth that race has a deep biological basis. By the way, kids are not born being racial essentialists. They learn it in childhood. Further, kids who believe more deeply than average in racial essentialism have greater difficulty remembering racially ambiguous faces. Photos of multi-racial people by Kat Love (L) and Mark Decile (R), courtesy of Unsplash In contrast to this, almost everyone knows that poverty is not immutable. 
A poor White person can put on some nice clothes and learn where the salad fork goes. Voila! Classism largely evaporates. (Learning a little French probably helps, too.) But people of color cannot dress their way out of how others perceive them. Perhaps even more important, almost everyone in America recognizes that poverty is a major social problem. But a growing number of White Americans believe that it is an advantage (that’s right, an advantage) to be Black. In a 2011 national survey of 400 Americans, White respondents reported the view that Whites today face more racial discrimination than Blacks do. In 2017, a much larger poll of more than 3,400 Americans conducted by Harvard, NPR, and the Robert Wood Johnson Foundation yielded very similar results. That’s right; many White Americans today believe that it has become an advantage to be Black. How many Americans of any ethnic background believe it’s an advantage to be poor? So, whereas poverty is almost universally recognized as a major social problem, racism is not. And whereas poverty can sometimes disappear, a person’s perceived race cannot. In a very real sense, this makes racism more enduring than classism. But the myth of racial essentialism is only one reason why racism is worse than classism. A second reason has to do with the largely invisible details of segregation. Almost everyone knows what segregation is. But very few people know how powerful it remains today — and how it means very different things for poor people of color (especially poor Blacks) versus poor Whites. Segregation and education. So how, exactly, does being poor and Black mean something different than being poor and White? On average, poor Black people are much more likely than equally poor White people to be surrounded by poor neighbors. In other words, much more so than White poverty, Black poverty is geographically concentrated. This has been guaranteed by practices such covenants and redlining that have concentrated Black Americans into the tightly packed red zones of cities across the nation. One of the most dramatic examples of this can be found in Chicago, where only about 4% of Whites but about 30% of Blacks live in neighborhoods where many of their neighbors are also poor. One of the many negative consequences of this fact has to do with school funding. Poor White kids are much more likely than equally poor Black kids to attend schools where the property taxes of their much richer White neighbors give them a decent education. So, because of segregation, rich Whites often help pull poor Whites out of poverty. But in Black neighborhoods, the concentration of poverty often makes this mathematically impossible. Of course, if Americans did not fund their schools based heavily on local property values, this problem might slowly correct itself. But we do fund schools this way, and this aspect of systemic racism virtually guarantees that Black and White poverty are lived differently. I should add that a 2016 paper published in the Proceedings of the National Academy of Sciences showed that between 1980 and 2010, both racial segregation and the racial wage gap in the United States decreased slightly. But even the authors of this mostly hopeful report concluded that “the end of black segregation is not at hand.” They added that “As of 2010, half of the metropolitan black population … [lived in] … neighborhoods that were home to only 3.6% of the nonblack population.” That’s right. 
Half of all the Black Americans who live in cities live in places where almost no one is White. Segregation is alive and well, and it is harming people of color, especially Black Americans. Segregation and overincarceration. The geographically concentrated poverty of Blacks and the geographically dispersed poverty of Whites also contributes in hidden ways to Black overincarceration. Large-scale national surveys consistently show that Whites and Blacks use illicit drugs at virtually identical rates. So why are Blacks arrested and convicted for drug crimes much more often than Whites? There are many answers. But consider the implications of segregation — the geophysical concentration of Black poverty. Police officers get paid for making arrests, and their police chiefs and state legislators tell them why people should be arrested. If you were a police officer from another planet — who saw only in infrared and had no idea what Black and White skin looks like — would you patrol for signs of drug crimes in what we earthlings know to be U.S. Black neighborhoods — or would you select White ones? If you wanted to make plenty of arrests, you’d go to the physically smaller areas where more of the crime is. This would all happen not because Black people are more likely than Whites to use drugs but because Black drug users have been corralled by decades of housing discrimination into much tinier geographic areas. If you were an alien police officer who was merely trying to do your job, you’d develop the intuition (the stereotype) that Black people use drugs much more often than Whites do. You’d carry that intuition around with you even when you were in desegregated places — such as the streets between Black and White neighborhoods. As a result, you’d come to feel like racial profiling is just common sense. The sheer physical concentration of Black poverty, combined with your interest in doing your job, would have converted you into a practicing racist. Why would you go driving all over the place to find and arrest a few White people when you could stay in one small place and arrest lots of Black people? New York CIty is NOT a hotbed of illicit drug users. It is a hotbed of people. This situation is even worse than this because Black Americans are much more likely than White Americans to live in cities. A 2018 study by the Pew Research Center’s Kim Parker and colleagues showed that between 2012 and 2016, only 44% of Americans living in urban counties were non-Hispanic White. In the suburbs (68% White) and rural areas (79% White), Whites were a very clear majority. The nation’s most densely populated city, New York City, has more than 26,000 people per square mile. Only about one third of New Yorkers are non-Hispanic Whites. Contrast that with the population of highly rural and White Wyoming. Even if we include Wyoming’s cities, Wyoming has six (yes, 6) people per square mile, and 84% of them are non-Hispanic Whites. About one sixth of both Black Americans and White Americans admit to using illegal drugs. The real figure for both groups is likely to be higher than this. But the key points are (a) that a sizable minority of Americans are illicit drug users and (b) that there are no meaningful ethnic differences in illegal drug use. This means that in New York City there are more than 4,000 illegal drug users per square mile. In sparsely populated Wyoming, there is one illegal drug user per square mile. Where do you suppose police officers spend more time on the lookout for drug users? 
Image by the author, based on recent data from Kim Parker and colleagues (see below) Maybe you’re not a police officer but an ICE agent who would like to crack down on undocumented immigrants. Cities are the best places to find them, too. Well, they’re certainly the best place to find immigrants, and by definition all undocumented immigrants are immigrants. Kim Parker and colleagues showed that in 2012–2016, only 4% of people living in U.S. rural counties were immigrants. Contrast that with 11% in suburban counties — and 22% in urban counties. The hidden geography of anti-immigrant biases is much like the hidden geography of racism. This point about the nonrandom geographic distribution of poor people — and how it differs radically for poor people of color (especially Black people) and poor White people — is just one of many well-documented ways in which racism operates under the radar. Until America begins to address the enduring legacy of redlining and school segregation — including the school re-segregation that has occurred over the past couple of decades — we are not going to make a real dent in our biggest social problem. The stories of George Floyd and Ahmaud Arbery will repeat themselves until we (a) tackle poverty head on and (b) recognize that Black poverty and White poverty are often very different things.
https://medium.com/an-injustice/two-powerful-reasons-racism-is-worse-than-classism-racial-essentialism-and-segregation-207ba50781da
['Brett Pelham']
2020-12-22 00:27:05.772000+00:00
['Racism', 'Social Justice', 'Psychology', 'Race', 'Politics']
10 Minutes to Dataframe in Pandas
10 Minutes to Dataframe in Pandas Learn and become a master of one of the most used Python tools for data analysis. Introduction: Pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. The data structures provided by Pandas are of two distinct types: Pandas DataFrames and Pandas Series. We’ll look at Pandas DataFrames in this post. What is a DataFrame? A DataFrame is a two-dimensional data structure, i.e., data is aligned in a tabular fashion in rows and columns. A Pandas DataFrame consists of three principal components: the data, the rows, and the columns. Features of a DataFrame: columns can potentially be of different types; size is mutable; axes (rows and columns) are labeled; arithmetic operations can be performed on rows and columns. Difference between Series and DataFrame: a Series is a one-dimensional labeled array, while a DataFrame is two-dimensional; you can think of a DataFrame as an SQL table or a spreadsheet data representation. How to create a DataFrame? A pandas DataFrame can be created using the following constructor: pandas.DataFrame(data, index, columns, dtype, copy). The parameters of the constructor are as follows. data: takes various forms like ndarray, series, map, lists, dict, constants and also another DataFrame. index: the row labels to be used for the resulting frame; optional, defaults to np.arange(n) if no index is passed. columns: the column labels; optional, defaults to np.arange(n) if no column labels are passed. dtype: the data type of each column. copy: whether to copy the input data; defaults to False. Reading the data and first insights The first thing we should do, once we have downloaded or collected some data, is to read such data into a pandas DataFrame. This is one of the main Pandas objects, along with the Series, and like I mentioned before, it resembles a table with columns and rows. Before anything else, as always, we should import the library. Pandas also has functions for reading from Excel sheets, HTML documents, or SQL databases (although there are other tools that are better for reading from databases). We can check out the first n rows of our dataframe using the head method. There is also a tail method to look at the last n. By default, if no n is given to these methods, they return the first 5 or last 5 instances. Using the head method without a parameter returns the following block: Return of the head command on the Life Expectancy Dataset After successfully reading our data and creating our dataframe, we can start getting some information out of it with two simple methods: 1. info: the info method returns the number of rows in the dataframe, the number of columns, the name of each column along with the number of non-null values of that column, and the data type of each column. Results of using the info method on both dataframes
2. describe: the describe method returns some useful statistics about the numeric data in the dataframe, like the mean, standard deviation, maximum and minimum values, and some percentiles. Information returned by the describe method of the Life Expectancy DataFrame The next step after getting this global view of our data is learning how to access specific records of our dataframe. Like a Python list, pandas dataframes can be sliced, using exactly the same notation as for lists. So if we want to select the first 10 rows of our dataframe, we could do something like: First 10 rows of data Indexing and selecting data With loc and iloc you can do practically any data selection operation on DataFrames you can think of. 1. loc is label-based, which means that you have to specify rows and columns based on their row and column labels. 2. iloc is integer-index based, so you have to specify rows and columns by their integer index. After understanding the theory behind loc and iloc, let’s implement it: 1. Average life expectancy over 15 years in Afghanistan. 2. Highest life expectancy in a developed country over 15 years. 3. Maximum polio in the countries over 15 years. 4. Maximum percentage expenditure of a developing country over 15 years. 5. Lowest adult mortality in a particular country over 15 years. Any groupby operation involves one of the following operations on the original object: splitting the object, applying a function, and combining the results. 6. Showing, for every country, the average total expenditure, the sum of polio, and the standard deviation of life expectancy with the groupby method. A minimal sketch of these operations follows below. We have seen what Pandas is, and some of its most basic uses. In the following posts we will see more complex functionality and dig deeper into the workings of this fantastic library! That is all, I hope you liked the post. Feel free to follow me on Medium
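As a rough illustration of the operations listed above, here is a minimal, self-contained sketch using a tiny made-up table instead of the Life Expectancy dataset; the column names and values are assumptions chosen only to mirror the examples, not the article's actual data.

```python
import pandas as pd

# Build a small DataFrame from a dict; rows and columns are labeled automatically
df = pd.DataFrame({
    'Country': ['Afghanistan', 'Albania', 'Afghanistan', 'Albania'],
    'Year': [2014, 2014, 2015, 2015],
    'Status': ['Developing', 'Developing', 'Developing', 'Developing'],
    'LifeExpectancy': [59.9, 77.5, 65.0, 77.8],
    'TotalExpenditure': [8.18, 5.88, 8.16, 6.00],
    'Polio': [58, 98, 58, 98],
})

print(df.head())        # first rows (5 by default)
df.info()               # row/column counts, non-null values, dtypes
print(df.describe())    # mean, std, min/max and percentiles of numeric columns

print(df[:2])           # slice the first rows, just like a Python list

# loc is label-based: rows where Country == 'Afghanistan', only LifeExpectancy
avg_afg = df.loc[df['Country'] == 'Afghanistan', 'LifeExpectancy'].mean()
print(avg_afg)

# iloc is integer-position based: first 2 rows, first 3 columns
print(df.iloc[:2, :3])

# groupby: split by country, apply aggregations, combine the results
summary = df.groupby('Country').agg(
    avg_total_expenditure=('TotalExpenditure', 'mean'),
    polio_sum=('Polio', 'sum'),
    life_expectancy_std=('LifeExpectancy', 'std'),
)
print(summary)
```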
https://medium.com/swlh/10-minutes-to-dataframe-in-pandas-40adc93f974d
['Karan Shah']
2020-12-20 16:35:37.241000+00:00
['Data Analysis', 'Pandas', 'Python', 'Data Science', 'Data Visualization']
The three-course meal from ACL — my experience as a co-op student
Hi, It’s Azmarie, a Business/Computing Science student at Simon Fraser University. I just finished my incredible 4-month co-op with ACL as a Software Engineer in the team Titanium. I had a great time there learning and growing beyond my expectations. With all the interesting challenges, team collaboration, and career development opportunities that I have been given, I am confident to say, I left as a more skilled, humbled, and well-rounded developer. Being a foodie and a self-claimed good cook myself, I love a three-course meal where the appetizer gets you excited, the entree is steamy and filling, and the dessert is just the most incredible piece of sweet that one can imagine. I also love making memories with my 5 senses on full blast every meal and every day. Good working experience is just like fine dining — exciting, fulfilling and it leaves you a sweet after taste. Photo by Jay Wennington on Unsplash As I had such a great time working at ACL, I would love to present my experience at ACL as the form of a three-course meal and share it with you. Bon Appétit! Appetizer My journey at ACL was destined to be an interesting one, and it all starts from the interview. People always say “never be late to a job interview”, well luckily I wasn’t. in fact, I arrived one day early. Because of a competing offer and the 24-hour policy set by my university’s co-op office, I had to make a decision between accepting the first offer and cancel all upcoming interviews, or decline the offer and live with the fact that I may not get another offer for the term. Impossible as it seemed for me, I tried my luck at the third option — call ACL. I was honest about my situation and asked if they could arrange an interview for me on the same day. And they said yes! As a joint major student of Business and Computing Science, I had the fortune of a few previous co-op terms in various industry, yet still, my interview with ACL was like no other. I had an interview with soon-to-be my manager Leo Ping, a great leader and mentor, and Alon Sabi, the champion of people and culture in R&D. They started the interview light and breezy by introducing themselves, their roles, and what they enjoyed about working at ACL. Both of them were real and authentic, and they made me feel welcome and included and respected almost immediately. I can tell from the short 45-minute interview that they are as interested in my personalities and interests as they were in my work experience and technical skills. From the first moment of the interview, the organization shows how much they are committed to creating an environment where everyone feels involved and valued and building a community with the people who connect with these values as well. As the interview went by, I was more confident that ACL would be a good place for me to learn and grow as a Software Engineer. Entrée So it starts. Titanium Team Lunch I joined team Titanium as a full-time Software Engineer. First time working on an enterprise codebase could be intimidating, and it makes me appreciate more of the support I received in my onboarding week. The ACL Academy online training courses and The ACL Way new hires orientation gave me the knowledge of the industry we are in and the business value our software provides for customers. It was a lot of information in the first week, but I did find it useful later on when I started to develop the cloud solutions because I understood where my work can fit in the grand scheme of the company’s mission/vision. 
Participating in ACL’s Hackfest in my first week (Hackfest is a 2-day hackathon where everyone is encouraged to work on whatever you like, be it work-related or not) Since I had more Javascript experience from personal projects prior to this co-op, I gravitated towards the frontend development as I started taking on tickets. One of the first tickets that I independently worked on turned out to be complete. rabbit. hole. It was one of those technical enhancement tickets, replacing a <ExternalLink> component with the internal UI library implementation and remove the old code. After making this change, the console threw a TypeError complaining about this.context.t is not a function for an aria-label. Even if the whole App was supposed to be wrapped around a component that provides the context, the context is clearly missing here. Why couldn’t the React component understand the joke? Because it didn’t get the context. Get it? Okay back to the topic, so I came up a few ideas for a few quick fixes: Fix 1 Me: Wrap the problematic component with the context provider and 💥 this.context appears, I call it magic! Code Reviewer: 😕 The whole app is wrapped around that, what happened to context? Me: 🤷🏻‍♀️ Fix 2 Me: add a passive check for this.context.t in the shared UI component, being a little defensive ain’t gonna hurt nobody 🤫 Code Reviewer: It won’t help the codebase if we just put on a defensive bandage for the ugly, being a little defensive just hurt somebody. 😕 Me: 🤦🏻‍♀️ Okay, the code reviewer was right, I had to do it the hard way and find out what’s the root cause. With the help of many engineers across teams, soon I realized… PureComponent is suspicious. Interesting article on this. is suspicious. Interesting article on this. ConnectedComponent is suspicious is suspicious Passing in as a prop is suspicious And I went down the Rabbit Hole. Photo by Victor Larracuente on Unsplash As we peel back the layers of composition, we end up with the base level component RawButton . And this component is using a helper function getTextFromNode to extract a meaningful aria-label from its children jsx node. insert drumroll here… But the function relies on rendering the children first with ReactDOMServer.renderToStaticMarkup(node) and that will NOT have access to context This was a great learning experience for (the first week) me. Because it taught me some of the core spirits behind ACL’s engineers — relentless. They do not simply put on a bandage to a problem, instead, they ask why and they look for the root cause. Continuous improvement and codebase health really matters here. This also leads me to my first sprint demo, where I presented Lessons from a Rabbit Hole to talk about my experience of figuring out my way (with a lot of help) through this problem. I am really glad to have worked for a company like ACL in the beginning of my career, where a junior developer is being held to an equally high standard of delivering great quality, bandage-free code, with the great team resources available anytime to help. After my unforgettable lessons from the rabbit hole, it all becomes easier (and still exciting) from there. 4 months into the job, I have contributed and spearheaded into some important epics that my team focused on, for example, the company-wide rebranding efforts and the exciting React 16.8 upgrade. I was trusted to own and deliver my first 8-point ticket (the lucky 8 is the highest score on our story point scale) in a feature epic. 
And I also picked up a bit Ruby on Rails doing full-stack tickets! Dessert I enjoyed my time working at ACL, not only because of the interesting problems that I got to solve, the knowledge I gained from practicing my skills and guidance from mentors, but also the people, the culture, and the community here. That’s the cherry on top. Titanium and Alchemist Team Event — Snowshoeing at the Beautiful Mt. Seymour Toastmaster If you don’t know what toastmaster is, it’s a c̶u̶l̶t̶ club for the purpose of promoting communication and public speaking skills, and it’s awesome. ACL has its own toastmaster club where sessions are hosted bi-weekly and I became an avid attendee since my first week. Honestly, I am never a fan of public speaking (don’t think I will ever be) but I do enjoy the opportunities of sharing experiences and perspectives with a group of like-minded people. The chair and the speech evaluators (and everyone honestly) were always really generous, encouraging, and also gives great constructive feedback. After 3 months of being an audience, I signed up for a prepared 6-minute speech and shared my thoughts on Fear — Spring into Action with my fellow toasties. Toastmaster club at ACL is a safe space where people can get vulnerable, but also a place to be connected, engaged, and inspired. Mingle Over Coffee Craving coffee at 2 PM? Craving bubble tea at 3 PM? Craving beer at 4 PM? (Did I mention the free beer taps??) Join #mingle-over-coffee channel for a partner in crime! Mingle-over-coffee is a channel where two ACLers got paired up by a slackbot to meet over a drink of your choice. The point is for ACLers to learn more from the diverse range of people we have here and get to know someone whom we don’t normally work with. All three of my mingling experiences have been great. I learn about what it’s like being an R&D manager, Data Analyst Consultant, and Operations Business Partner, the challenges and opportunities at their roles, and even exchanges hiking recommendations! Meetup at ACL ACL Vancouver office is the heart of many meetups hosted in Vancouver — React, AWS, QA meetups and etc. As I am part of a non-profit Design Lab Vancouver, my manager was really supportive when I asked if I could use the meetup venue for the panel. Thank you, ACL! Conclusion My work at ACL was interesting and fulfilling, as it was a great mix of growing technical skills, understanding the business value that my work delivers, and collaborating with my super fun and superbly talented team. 4 months go by in a blink of an eye. I feel lucky and privileged to have worked with so many amazing and inspiring individuals. Voila, I hope you enjoyed it.
https://medium.com/galvanize/the-three-course-meal-from-acl-my-experience-as-a-co-op-student-aea729f6e71
['Azmarie Wang']
2019-04-27 05:34:33.436000+00:00
['Internships', 'Women In Tech', 'Software Engineering', 'Co Op', 'Tech']
With No Choice, I Said Goodbye to My Baby Boy.
It’s August 8th, 1964. Joe is twenty-five. I still have fairytale notions of finding a prince who’ll treat me like a princess. If I had been a whole girl, one with boundaries, solid with self-esteem, I would have refused his ride. But I wasn’t, and I didn’t. I’d romantically envisioned Joe showing up for our first date in some kind of chariot; he shows up drunk, in an old car with a hole in the floor and one door tied shut with a wire. I willingly get in, rain peppering me through the window that won’t close. As the evening progresses, Joe gets drunker. At the drive-in, he pushes me down. “NO!” I shout. He doesn’t seem to hear me. I have never had sex before and am not sure what happened. I stumble to the ladies room and yes, I am broken. My cycle had always reliably been 28 to 30 days. When it gets to be 35 days and nothing is happening, I start to panic. When Joe calls to arrange a date, I tell him I’m late. He says we’ll talk about it at 7:30 that night, when we are to meet. He never shows up. My girlfriends say to buy Humphrey 11 pills. They’re supposed to bring on your period. Then I hear quinine pills will do the trick. I walk alone to the drug store to buy them. I’m embarrassed and ashamed. They don’t work. I talk with my girlfriends about abortion, but none of us know how to get one. We hear you can go to Puerto Rico, but you have to pay $600. I keep playing hooky from school because I can’t concentrate. When I finally tell my parents, my father says, “Don’t you know that a woman can run faster with her skirt up than a man can with his pants down?” I quit school. I visit the priest, who tells me, “Adoption is the only option.” I arrange to go to St. Martha’s Residence in Newark. At home, we pretend that my belly isn’t growing bigger. This is not really happening. I can’t understand how my mother won’t talk to me about this when she went through the same thing! In 1938, when she was twenty-two and unmarried, she got pregnant and had to run away from home. She told no one. All by herself, she had my sister, Lois, and gave her up for adoption. Twenty-five years later, she watches me go through the same nightmare, but says nothing. How is that possible? At St. Martha’s, I am with other women who share the same shame and find strength in each other’s company. Each night we sit around sharing stories of our pasts and our hopes for the future. We don’t discuss the pain that is to come, or what to expect with labor and delivery. We’ve heard horror stories from the girls who went before us about the mean nurses at the hospital. No joy will be found in this birth. When the baby is born, I spend two days with him, counting ten baby toes and ten baby fingers on his perfect baby body. I tell him how much I love him. I keep his first baby picture. I feed him and name him Paul Joseph. On the third day, I say goodbye to the sweet baby boy. Mom and Dad finally come to take me to sign the papers. On the way home, we stop for drinks.
https://medium.com/tmi-project/i-committed-the-most-heinous-crime-i-gave-up-my-child-for-adoption-5145b14d6c41
['Tmi Project']
2020-12-21 18:38:39.616000+00:00
['Reproductive Rights', 'Abortion', 'Storytelling', 'Roe V Wade']
Write your first Generative Adversarial Network Model on PyTorch
Generative adversarial networks (abbreviated GAN) are neural networks that can generate images, music, speech, and texts similar to those that humans do. GANs have become an active research topic in recent years. Facebook AI Lab Director Yang Lekun called adversarial learning “the most exciting machine learning idea in the last 10 years.” Below we will explore how GANs work and create two models using the PyTorch deep learning framework. What is a Generative Adversarial Network? A generative adversarial network (GAN) is a machine learning model that can simulate a given data distribution. The model was first proposed in a 2014 NeurIPS paper by deep learning expert Ian Goodfellow and colleagues. GAN learning process GANs consist of two neural networks, one of which is trained to generate data, and the other is trained to distinguish simulated data from real data (hence the “adversarial” nature of the model). Generative adversarial networks show impressive results in terms of image and video generation: transfer of styles (CycleGAN) — the transformation of one image in accordance with the style of other images (for example, paintings by a famous artist); Human Face Generation (StyleGAN), realistic examples are available at This Person Does Not Exist. GANs and other data-generating structures are called generative models as opposed to more widely studied discriminative models. Before diving into GANs, let’s look at the differences between the two types of models. Comparison of Discriminative and Generative Machine Learning models Discriminative models are used for most supervised learning problems for classification or regression. As an example of a classification problem, suppose you want to train a handwritten digit image recognition model. To do this, we can use a labeled dataset containing photographs of handwritten numbers to which the numbers themselves are correlated. Training is reduced to setting the parameters of the model using a special algorithm that minimizes the loss function. The loss function is a criterion for the discrepancy between the true value of the estimated parameter and its expectation. After the learning phase, we can use the model to classify a new (previously not considered) handwritten digit image by matching the most likely digit to the input image. Discriminative model training scheme The discriminative model uses training data to find the boundaries between classes. The found boundaries are used to distinguish new inputs and predict their class. Mathematically, discriminative models study the conditional probability P (y | x) of an observation y for a given input x. Discriminative models are not only neural networks but also logistic regression and support vector machine (SVM). While discriminative models are used for supervised learning, generative models typically use a RAW data set, that is, may be seen as a form of unsupervised learning. So, using a dataset of handwritten numbers, you can train a generative model to generate new images. In contrast to discriminative models, generative models study the properties of the probability function P (x) of the input data 𝑥 . As a result, they do not generate a prediction, but a new object with properties akin to the training dataset. Besides GAN, there are other generative architectures: GANs have gained a lot of attention recently for their impressive results in visual content generation. Let’s dwell on the device of generative adversarial networks in more detail. 
The architecture of generative adversarial neural networks A generative adversarial network, as we have already seen, is not one network but two: a generator and a discriminator. The role of the generator is to produce samples that resemble the data in a real dataset, while the discriminator is trained to estimate the probability that a sample came from the real data rather than from the generator. The two neural networks play cat and mouse: the generator tries to fool the discriminator, and the discriminator tries to get better at identifying the generated samples. To understand how GAN training works, consider a toy example with a dataset consisting of two-dimensional samples (x1, x2), where x1 ranges from 0 to 2π and x2 = sin(x1). [Figure: Dependence of x2 on x1] The general structure of the GAN for generating pairs (x̃1, x̃2) that resemble points from this dataset is shown in the following figure. [Figure: General GAN structure] The generator G receives pairs of random numbers (z1, z2) as input and transforms them so that they resemble examples from the real sample. The structure of the neural network G can be anything, for example a multilayer perceptron or a convolutional neural network. The discriminator D alternately receives samples from the training dataset and simulated samples provided by the generator G. The role of the discriminator is to estimate the probability that its input belongs to the real dataset. That is, D is trained so that it outputs 1 when it receives a real sample and 0 when it receives a generated one. As with the generator, you can choose any structure for the neural network D, taking into account the sizes of the input and output data. In this example, the input is two-dimensional and the output is a scalar ranging from 0 to 1. Mathematically, the GAN learning process is a two-player minimax game in which D is trained to minimize its error in telling real samples from generated ones, while G is trained to maximize the probability that D makes a mistake. At each stage of training, the parameters of the models D and G are updated. To train D, at each iteration we label a batch of real samples with ones and a batch of samples created by G with zeros. Thus, a normal supervised learning approach can be used to update the parameters of D, as shown in the diagram. [Figure: Discriminator training process] For each batch of training data containing labeled real and generated samples, we update the parameters of D by minimizing the loss function. After the parameters of D are updated, we train G to generate better samples; the parameters of D are "frozen" while the generator is trained. [Figure: Generator learning process] When G starts to generate samples so well that D is fooled, the output probability of D tends to one: the discriminator considers all samples to belong to the original dataset. Now that we know how a GAN works, we are ready to implement our own neural network using PyTorch. Your first generative adversarial network As a first experiment with generative adversarial networks, we will implement the example above with a harmonic (sine) function. To work through the example, we will use the popular PyTorch library, which can be installed by following the official instructions. If you are seriously interested in data science, you may already use the Anaconda distribution and the conda package and environment management system; note that a dedicated environment makes the installation process easier.
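For reference, the minimax game described above is usually written as the value function from Goodfellow et al. (2014). The formula below is standard notation added here for clarity; it is not quoted from the original article:

$$\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

Here D(x) is the discriminator's estimate that x is real and G(z) is a sample the generator produces from random noise z; the discriminator tries to maximize this expression while the generator tries to minimize it.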
To install PyTorch with conda, first create an environment and activate it:

$ conda create --name gan
$ conda activate gan

This creates a conda environment named gan. Inside the created environment, you can install the necessary packages:

$ conda install -c pytorch pytorch=1.4.0
$ conda install matplotlib jupyter

Since PyTorch is an actively developed library, the API may change in new versions; the code examples here have been verified for version 1.4.0. We will use matplotlib to work with plots. When using Jupyter Notebook, you need to register the gan environment so that you can create notebooks with it as a kernel. To do this, run the following command in the activated environment:

$ python -m ipykernel install --user --name gan

Let's start by importing the required libraries (a consolidated sketch of this setup code appears at the end of this section). Here we import the PyTorch (torch) library and import the nn component separately for more compact handling. The built-in math library is only needed for the value of the constant pi, and matplotlib, mentioned above, is used for plotting. It is good practice to fix the random number generator so that the experiment can be replicated on another machine. To do this in PyTorch, call torch.manual_seed() with a fixed value; we use the number 111 to initialize the random number generator. The generator is needed to set the initial weights of the neural network, and despite the random nature of the experiment, its course will then be reproducible. Preparing data for GAN training The training set consists of pairs of numbers (x1, x2) such that x2 corresponds to the sine of x1, for x1 in the range from 0 to 2π. The training data is assembled as follows. We compile a training dataset of 1024 pairs (x1, x2): first we initialize train_data with zeros, a matrix of 1024 rows and 2 columns. The first column of train_data is filled with random values in the range from 0 to 2π, and the values of the second column are calculated as the sine of the first. We then formally need an array of labels, train_labels, which we pass to the PyTorch data loader; since GAN training is unsupervised, the labels can be anything. Finally, we create train_set as a list of tuples built from train_data and train_labels. Let's display the training data by plotting each point (x1, x2). [Figure: plot of the training data] Next, we create a data loader named train_loader that will shuffle the data from train_set and return batches of 32 samples (batch_size) used to train the neural network. The data is ready; now we need to create the discriminator and generator neural networks. GAN Discriminator Implementation In PyTorch, neural network models are represented by classes that inherit from nn.Module. If you are new to OOP, the article "An Introduction to Object-Oriented Programming (OOP) in Python" will suffice to understand what is happening. The discriminator is a model with a two-dimensional input and a one-dimensional output. It takes a sample either from the real data or from the generator and outputs the probability that the sample comes from the real training data. The neural network model is built in the standard __init__() method of the class. Inside this method, we first call super().__init__() to run the __init__() method of the inherited nn.Module class. A multilayer perceptron is used as the architecture of the neural network; its structure is specified layer by layer using nn.Sequential().
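The original code listings for this setup did not survive the export of the article. The sketch below is a minimal reconstruction of the steps described above (imports, seeding, and building train_data, train_labels, train_set, and train_loader); it follows the prose but is not the author's exact listing, and the helper variable train_data_length is introduced here only for readability.

```python
import math

import matplotlib.pyplot as plt
import torch
from torch import nn

# Fix the random number generator so the experiment is reproducible
torch.manual_seed(111)

# Training set: 1024 pairs (x1, x2) with x2 = sin(x1)
train_data_length = 1024
train_data = torch.zeros((train_data_length, 2))
train_data[:, 0] = 2 * math.pi * torch.rand(train_data_length)  # x1 in [0, 2*pi)
train_data[:, 1] = torch.sin(train_data[:, 0])                  # x2 = sin(x1)

# Labels are required by the DataLoader but are not used for GAN training
train_labels = torch.zeros(train_data_length)
train_set = [(train_data[i], train_labels[i]) for i in range(train_data_length)]

# Visualize the training data
plt.plot(train_data[:, 0], train_data[:, 1], ".")
plt.show()

# Data loader that shuffles train_set and returns batches of 32 samples
batch_size = 32
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=batch_size, shuffle=True
)
```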
The model has the following characteristics: a two-dimensional input; a first hidden layer of 256 neurons with a ReLU activation function; subsequent layers in which the number of neurons decreases to 128 and 64; an output with a sigmoid activation function (Sigmoid), which is characteristic of representing a probability; and, to avoid overfitting, Dropout layers after the first, second, and third hidden layers that drop a fraction of the neurons. For convenient inference, a forward() method is also defined in the class. Here x corresponds to the input of the model; in this implementation, the output is obtained by feeding x into the model we have defined, without any preprocessing. After declaring the discriminator class, create an instance of it: discriminator = Discriminator() GAN generator implementation In generative adversarial networks, the generator is a model that takes a sample from a latent (hidden-variable) space as input and produces data resembling the training set. In our case, it is a model with a 2D input that receives random points (z1, z2) and a 2D output that produces points (x̃1, x̃2) looking like points from the training data. The implementation is similar to what we wrote for the discriminator: first create a Generator class that inherits from nn.Module, then define the architecture of the neural network, and finally create an instance of the Generator object (a sketch of both classes and the training setup follows this section). The generator includes two hidden layers with 16 and 32 neurons with the ReLU activation function, and at the output a layer with two neurons and a linear activation function. Thus, the output consists of two elements ranging from −∞ to +∞, which represent (x̃1, x̃2). That is, initially we impose no restrictions on the generator; it must "learn everything by itself." Now that we have defined the models for the discriminator and generator, we are ready to start training. Train GAN Models Before training the models, you need to configure the parameters that will be used in the training process. What's going on here: we set the learning rate lr, which we will use to adapt the network weights; we set the number of epochs num_epochs, which determines how many passes over the entire dataset the training process will perform; and we assign the binary cross-entropy loss BCELoss() to the loss_function variable. This is the loss function we will use to train the models. It is suitable both for training the discriminator (whose task reduces to binary classification) and for the generator, since the generator feeds its output to the input of the discriminator. The rules for updating the weights (that is, for training the models) are implemented in PyTorch in the torch.optim module. We will use the Adam algorithm, a variant of stochastic gradient descent, to train the discriminator and generator models, creating one optimizer for each. Finally, it is necessary to implement a training loop in which samples from the training set are fed to the model input and the weights are updated to minimize the loss function. Here, at each training iteration, we update the discriminator and generator parameters. As is usually done for neural networks, the training process consists of two nested loops: the outer one over the training epochs, and the inner one over the batches within each epoch.
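Again, the original listings are missing, so here is a minimal sketch of the Discriminator and Generator classes and the training setup described above. The layer sizes come from the prose; the dropout probability, learning rate, and number of epochs are illustrative assumptions, since this section does not state their values.

```python
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(2, 256),
            nn.ReLU(),
            nn.Dropout(0.3),  # dropout rate is an assumption
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # output is a probability in [0, 1]
        )

    def forward(self, x):
        return self.model(x)


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(2, 16),
            nn.ReLU(),
            nn.Linear(16, 32),
            nn.ReLU(),
            nn.Linear(32, 2),  # linear output, unbounded values
        )

    def forward(self, x):
        return self.model(x)


discriminator = Discriminator()
generator = Generator()

# Training parameters (the concrete values are assumptions)
lr = 0.001
num_epochs = 300
loss_function = nn.BCELoss()

optimizer_discriminator = torch.optim.Adam(discriminator.parameters(), lr=lr)
optimizer_generator = torch.optim.Adam(generator.parameters(), lr=lr)
```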
In the inner loop, everything starts with preparing the data for training the discriminator. We take the real samples of the current batch from the data loader and assign them to real_samples. Note that the first dimension of this tensor has a number of elements equal to batch_size; this is the standard way of organizing data in PyTorch, where each row of the tensor represents one sample from the batch. We use torch.ones() to create labels with a value of 1 for the real samples and assign them to real_samples_labels. We create generated samples by storing random data in latent_space_samples, which we then pass to the generator to obtain generated_samples. We use torch.zeros() for the labels of the generated samples and store them in generated_samples_labels. It remains to concatenate the real and generated samples and labels and save them in all_samples and all_samples_labels respectively. In the next block, we train the discriminator. In PyTorch it is important to clear the gradient values at every training step, which we do with the zero_grad() method. We compute the output of the discriminator on the training data all_samples, calculate the value of the loss function from output_discriminator and the labels all_samples_labels, compute the gradients for updating the weights with loss_discriminator.backward(), and update the discriminator weights by calling optimizer_discriminator.step(). Then we prepare the data for training the generator: we store random data in latent_space_samples with two columns, to match the two-dimensional data expected at the generator input, and batch_size rows. Next we train the generator. We clear the gradients with the zero_grad() method, pass latent_space_samples to the generator and save its output in generated_samples, then pass the generator output to the discriminator and save its output in output_discriminator_generated, which is used as the output of the whole model. We calculate the loss function from output_discriminator_generated and labels equal to 1 (real_samples_labels), then compute the gradients and update the generator weights. Remember that while we train the generator, the discriminator weights are kept frozen. Finally, in the last lines of the loop, the values of the discriminator and generator loss functions are printed at the end of every tenth epoch (the full loop is sketched below). Checking samples generated by the GAN Generative adversarial networks are designed to generate data, so after the training process is complete we can call the generator to obtain new data. Let's plot the generated data and check how similar it is to the training data. Before plotting the generated samples, you need to call the detach() method to pull the data out of the PyTorch computational graph. [Figure: plot of the generated dataset] The distribution of the generated data is very similar to the real data, the original sine curve. An animation of the evolution of training can be viewed here. At the beginning of the training process the distribution of the generated data is very different from the real data, but as training proceeds the generator learns the real data distribution, as if adjusting to it. Now that we have implemented the first generative adversarial network model, we can move on to a more practical example of generating images. Handwritten Digit Generator with GAN In the following example, we will use a GAN to generate images of handwritten digits. To do this, we will train the models on the MNIST dataset of handwritten digits.
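Here is one plausible reconstruction of the training loop and the post-training sampling step described above, assuming the variables defined in the previous sketches. It is not the author's exact listing; in particular, the condition used to decide when to print the losses is an assumption.

```python
for epoch in range(num_epochs):
    for n, (real_samples, _) in enumerate(train_loader):
        # Data for training the discriminator
        real_samples_labels = torch.ones((batch_size, 1))
        latent_space_samples = torch.randn((batch_size, 2))
        generated_samples = generator(latent_space_samples)
        generated_samples_labels = torch.zeros((batch_size, 1))
        all_samples = torch.cat((real_samples, generated_samples))
        all_samples_labels = torch.cat((real_samples_labels, generated_samples_labels))

        # Training the discriminator
        discriminator.zero_grad()
        output_discriminator = discriminator(all_samples)
        loss_discriminator = loss_function(output_discriminator, all_samples_labels)
        loss_discriminator.backward()
        optimizer_discriminator.step()

        # Data for training the generator
        latent_space_samples = torch.randn((batch_size, 2))

        # Training the generator (discriminator weights are not updated here)
        generator.zero_grad()
        generated_samples = generator(latent_space_samples)
        output_discriminator_generated = discriminator(generated_samples)
        loss_generator = loss_function(output_discriminator_generated, real_samples_labels)
        loss_generator.backward()
        optimizer_generator.step()

        # Print losses on the last batch of every tenth epoch (illustrative choice)
        if epoch % 10 == 0 and n == batch_size - 1:
            print(f"Epoch: {epoch} Loss D.: {loss_discriminator}")
            print(f"Epoch: {epoch} Loss G.: {loss_generator}")

# After training, sample new points from the generator and plot them
latent_space_samples = torch.randn((100, 2))
generated_samples = generator(latent_space_samples)
generated_samples = generated_samples.detach()
plt.plot(generated_samples[:, 0], generated_samples[:, 1], ".")
plt.show()
```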
This standard dataset is included in the torchvision package. First, in the activated gan environment, you need to install torchvision:

$ conda install -c pytorch torchvision=0.5.0

Again, we pin a specific version here, just as we did with PyTorch, to ensure that the code examples run. We start by importing the required libraries: in addition to the libraries we imported earlier, we will need torchvision and torchvision.transforms to transform the information stored in the image files. Since the training set in this example consists of images, the models will be more complex and training will take significantly longer. When training on a central processing unit (CPU), one epoch takes about two minutes, and it takes about 50 epochs to get an acceptable result, so the total training time on a CPU is about 100 minutes. A graphics processing unit (GPU) can be used to reduce the training time. To make the code work regardless of the characteristics of the computer, we create a device object that points either to the CPU or, if available, to the GPU. The environment is configured; let's prepare the dataset for training. Preparing the MNIST dataset The MNIST dataset consists of images of handwritten digits 0 through 9. The images are grayscale and 28 × 28 pixels in size. To use them with PyTorch, some transformations are needed, so we define a transform function that is applied when the data is loaded. The function has two parts: transforms.ToTensor() converts the data into a PyTorch tensor, and transforms.Normalize() changes the range of the tensor coefficients. The original coefficients produced by transforms.ToTensor() range from 0 to 1, and since the images have a black background, most of the coefficients are equal to 0. transforms.Normalize() changes the range of the coefficients to [−1, 1] by subtracting 0.5 from the original values and dividing the result by 0.5. The transformation centers the input samples around zero, which helps in training the models. We can now load the training data by calling torchvision.datasets.MNIST. The download=True argument ensures that the first time you run the code, the MNIST dataset is downloaded and saved in the current directory, as specified by the root argument. Having created train_set, we create a data loader as we did before. Let's use matplotlib to plot a sample of the data; the cmap=gray_r palette is well suited here, so the digits are displayed in black on a white background (a sketch of this preparation code follows this section). As you can see, the dataset contains digits written in different handwriting. As the GAN learns the distribution of the data, it will also generate digits in different handwriting styles. With the training data prepared, we can implement the discriminator and generator models. Discriminator and generator implementation In this case, the discriminator is a multilayer perceptron neural network that takes a 28 × 28 pixel image and outputs the probability that the image belongs to the real training data. To feed the image coefficients into the perceptron, they are vectorized so that the neural network receives a vector of 784 coefficients (28 × 28 = 784). Vectorization occurs in the first line of the forward() method: the x.view() call transforms the shape of the input tensor x.
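As before, the original listings are missing; this is a minimal sketch of the MNIST preparation steps described above (device selection, the transform, loading the dataset, the data loader, and plotting a few samples). It assumes the same batch size of 32 as in the first example.

```python
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
from torch import nn

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Convert images to tensors and rescale coefficients from [0, 1] to [-1, 1]
transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]
)

# Download MNIST on the first run and store it in the current directory
train_set = torchvision.datasets.MNIST(
    root=".", train=True, download=True, transform=transform
)

batch_size = 32
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=batch_size, shuffle=True
)

# Plot a few samples: black digits on a white background
real_samples, mnist_labels = next(iter(train_loader))
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(real_samples[i].reshape(28, 28), cmap="gray_r")
    plt.xticks([])
    plt.yticks([])
plt.show()
```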
The initial shape of the tensor x is 32 × 1 × 28 × 28, where 32 is the batch size; after the transformation, the shape of x becomes 32 × 784, with each row representing the coefficients of one image from the training set. To run the discriminator model on a GPU, you need to instantiate it and associate it with the device object using the to() method: discriminator = Discriminator().to(device=device) The generator will create more complex data than in the previous example, so it is necessary to increase the size of the latent input used for initialization. Here we use a 100-dimensional input and an output with 784 coefficients, and the result is organized as a 28 × 28 tensor representing the image. The output coefficients must be in the range from −1 to 1, so at the output of the generator we use the hyperbolic tangent activation function Tanh(). In the last line, we instantiate the generator and associate it with the device object. It remains only to train the models. Model training To train the models, you need to define the training parameters and the optimizers. We reduce the learning rate compared to the previous example and, to shorten the training time, set the number of epochs to 50. The training loop is similar to the one we used in the previous example (a condensed sketch of the models and the loop appears below). Checking Generated GAN Samples Let's generate some samples of "handwritten digits". To do this, we pass the generator an initiating set of random numbers. To plot the generated samples, you need to move the data back to the CPU if it was processed on the GPU; to do this, just call the cpu() method. As before, you also need to call the detach() method before plotting the data. The output should be digits that resemble the training data. [Figure: result of generating images] After fifty epochs of training, several of the generated digits look as if they were written by a human hand. The results can be improved with longer training (more epochs). As in the previous example, you can visualize the evolution of training by feeding a fixed input tensor to the generator at the end of each epoch (animation of the evolution of training). At the beginning of the training process the generated images are completely random; as training proceeds, the generator learns the distribution of the real data, and after about twenty epochs some of the generated digit images already resemble real digits. Conclusion Congratulations! You have learned how to implement your own generative adversarial network. We first built a toy example to understand the structure of a GAN, and then looked at a network for generating images from available sample data. Despite the complexity of the GAN topic, machine learning frameworks like PyTorch make the implementation very approachable.
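To round out the MNIST example, here is a condensed sketch of the models, the training loop, and the sampling step under the constraints stated in the text (a 100-dimensional latent input, 784 output coefficients, Tanh() at the generator output, 50 epochs, and a lower learning rate), assuming the MNIST preparation sketch above. The hidden-layer sizes, dropout rate, and exact learning-rate value are assumptions rather than the author's original choices.

```python
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 1024), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        x = x.view(x.size(0), 784)  # flatten batch x 1 x 28 x 28 into batch x 784
        return self.model(x)


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(100, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, 784), nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, x):
        return self.model(x).view(x.size(0), 1, 28, 28)


discriminator = Discriminator().to(device=device)
generator = Generator().to(device=device)

lr = 0.0001      # lower learning rate than in the toy example (value assumed)
num_epochs = 50
loss_function = nn.BCELoss()
optimizer_discriminator = torch.optim.Adam(discriminator.parameters(), lr=lr)
optimizer_generator = torch.optim.Adam(generator.parameters(), lr=lr)

for epoch in range(num_epochs):
    for n, (real_samples, _) in enumerate(train_loader):
        real_samples = real_samples.to(device=device)
        real_samples_labels = torch.ones((batch_size, 1)).to(device=device)
        latent_space_samples = torch.randn((batch_size, 100)).to(device=device)
        generated_samples = generator(latent_space_samples)
        generated_samples_labels = torch.zeros((batch_size, 1)).to(device=device)
        all_samples = torch.cat((real_samples, generated_samples))
        all_samples_labels = torch.cat((real_samples_labels, generated_samples_labels))

        # Train the discriminator
        discriminator.zero_grad()
        output_discriminator = discriminator(all_samples)
        loss_discriminator = loss_function(output_discriminator, all_samples_labels)
        loss_discriminator.backward()
        optimizer_discriminator.step()

        # Train the generator
        latent_space_samples = torch.randn((batch_size, 100)).to(device=device)
        generator.zero_grad()
        generated_samples = generator(latent_space_samples)
        output_discriminator_generated = discriminator(generated_samples)
        loss_generator = loss_function(output_discriminator_generated, real_samples_labels)
        loss_generator.backward()
        optimizer_generator.step()

# Generate and plot samples after training; move data back to the CPU first
latent_space_samples = torch.randn((batch_size, 100)).to(device=device)
generated_samples = generator(latent_space_samples).cpu().detach()
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(generated_samples[i].reshape(28, 28), cmap="gray_r")
    plt.xticks([])
    plt.yticks([])
plt.show()
```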
https://medium.com/dev-genius/write-your-first-generative-adversarial-network-model-on-pytorch-7dc0c7c892c7
['Mikhail Raevskiy']
2020-08-31 09:50:39.477000+00:00
['Data Science', 'Deep Learning', 'Machine Learning', 'Python', 'Programming']
Book Coach Success Spotlight: Marni Seneker
Q: What was your experience in the Advanced Training and Certification course? What did you find most satisfying? What did you find most challenging? Marni: There were times when I was working in an unfamiliar genre (fantasy!) where I found the exercises to be uncomfortable — hello world building! There were also times when my feedback differed from the answer key in one of the exercises and I wished I could discuss it with someone. I loved the practicum work! It was so exciting to work through a fresh project using the tools from the course. The responsibility to the writer is so critical and it forces one to bring their best work to the task.
https://medium.com/no-blank-pages/book-coach-success-spotlight-marni-seneker-b27b95b12d23
['Terri M. Leblanc']
2020-09-01 15:08:54.294000+00:00
['Coaching', 'Freelancing', 'Coaching Skills', 'Writing', 'Success Story']
A Comprehensive Look at My Slightly Unhinged Planning System
My life is a very all-or-nothing proposition. Either I do all the things or none of them. So, I'm constantly on the search for some system or program that will work to keep me on top of staying organized. A magical planner or app or something. I've bought an Erin Condren Life Planner every year for the last three years. I usually do pretty well with it until about February. But it's big and bulky. Too easy not to open up and use. And so I don't. And then I spend the rest of the year feeling guilty about the $60 planner that I'm not using — and dealing with that awful pit-of-your-stomach feeling when you're sure you're forgetting something important. This year I've kind of cobbled together a system that's finally working for me. I designed part of it myself. Part of it I took from other people. I thought it might be useful to you to know what I'm doing, but also how I came up with it. Because what I'm using might not be a perfect fit for you. But how I came up with it might help you figure something out that will work perfectly for you. What I Need to Organize I started by thinking about what I need to keep track of — and what I don't. I think what I don't is just as important as what I do. Because nothing will put me off a plan faster than trying to make myself keep track of things that I don't really need or want to keep track of. What I Need to Keep Track Of Work appointments. Personal appointments. Travel schedules. Work events (launches, etc.) Goals and habits. Writing. A meal plan. Clients and students. A work log. What I Don't Need to Keep Track Of My daughter's sports schedules (her dad does.) Housekeeping schedules. Charts micromanaging things like how much water I drink or how many steps I take. Holiday planning. A personal journal or diary. Your lists will probably look different from mine. Things that aren't important to me right now might be very important to you and vice versa. That's okay! It's just good to know what you want to keep on top of and what you don't need to worry about so much. My Systems It occurred to me that the reason why the systems I'd tried to use in the past didn't work for me is because they were full of stuff I didn't need or want. So this time I really thought about exactly what I needed help with. Keeping track of appointments and deadlines and the thousand things I need to do every week. Staying on top of student/client information. A work log. Accountability. Once you know what you need help with, you can figure out which systems will work best for you. Here's what's been working for me. Simple Monthly Planner I use a plain month-at-a-glance calendar with large squares to keep track of my appointments. I've found that I really like being able to see my whole month all at once. You can see that I'm not going for fancy or Instagram ready here. I usually use a pencil so that I can at least make an attempt at erasing if something changes. FRED FRED is my Folder for Reaching the End of my Draft. You can read all about FRED below. Or get your own here. I use FRED to track my writing goals and to keep a work log. Keeping a log is one of the most effective ways I know of staying accountable. That's incredibly important when you're self-employed and there's no outside force imposing deadlines on you. I use the outside of my FRED to take notes about writing projects.
https://medium.com/the-write-brain/a-comprehensive-look-at-my-slightly-unhinged-planning-system-8a01bdb6a8b4
['Shaunta Grimes']
2019-11-25 19:50:25.808000+00:00
['Life Lessons', 'Planning', 'Organizational Culture', 'Life', 'Productivity']
Peanut Butter Sandwich Caper
Sometimes you start off on a photo-walk, and you end up sharing a peanut butter sandwich with a stranger. In 2012 we used to go on a lot of photo-walks, organized by our friend and fellow camera enthusiast, John Carvalho. The group would meet at some designated Mississauga park or street and off we would go, sometimes as many as 25 people. John Carvalho forges ahead, leading the group. Photo by Louise Peacock During those walks we met and got to know a lot of very nice people. On one such walk, a brisk but lovely morning, I ended up walking with a lady I had not met before. Her name was Missi and she and I and our dog Tessa went forging along, looking for interesting scenes to capture. We had completed almost half the route, when Missi looked at her watch and announced that she would have to leave the walk and go home. She seemed stressed. I asked her if everything was alright, and she responded that she had missed breakfast and was starting to feel a bit weak and dizzy. Peanut Butter sandwich and makings. Computer art by Louise Peacock It seemed a real shame for her to leave part way through, and then I remembered that I had brought a midmorning snack. A peanut-butter and honey sandwich on some light rye bread. It was pretty plain but I told Missi I'd be pleased to share it with her. I got my backpack off my shoulders and hauled out the snack, which, thankfully, I had cut in two. I offered her half. Missi accepted the humble snack gratefully, and we continued on, munching happily, and finished the walk. A simple thing, but it forged a special link between us. Since that time I have lost touch with Missi, but I will always remember that special, shared moment and how it made us friends. You never know where a peanut-butter sandwich might lead you.
https://medium.com/chance-encounters/peanut-butter-sandwich-caper-7f21e4f64850
['Louise Peacock']
2020-11-17 18:42:24.074000+00:00
['Photography', 'Friendship', 'Sharing', 'Chance Encounters', 'Nonfiction']
Best Way To Hire A PHP Developer
Whether you run a web development company or a standalone application, you will always need someone to get your projects done. In a world with many thriving programming languages, choosing one is a tough task, and tougher still is hiring someone who can proficiently deliver web development solutions. Looking at the statistics on programming languages used by websites, around 79.8% of websites run on PHP-based applications. So, of 10 websites that you come across each day, 8 are PHP driven. Call it the flexibility of the language or its ease of use, PHP is a language proven to power strong web applications. However, the language alone cannot be credited for creating dynamic websites. The curious minds who build PHP-based applications are the ones responsible for their fate: efficiently developed applications have a far more promising future than loosely tied-together ones. But how do you hire someone who is capable of developing feature-rich applications? How To Recruit PHP Developers? True, there are plenty of choices when it comes to hiring a PHP developer. You can opt for a PHP development company or look for a freelancer. However, finding the appropriate one can feel like looking for a needle in a haystack. Before you decide whom to choose, you need to be clear on the process of hiring a PHP programmer, or, to be precise, the process of recruiting someone who is capable of delivering function-oriented applications. Clearing the clutter, let's move straight to the process of hiring PHP programmers. There are three different aspects to consider before hiring someone. Categorization This is primarily the type of programmer you prefer to work with. Based on proficiency level, programmers fall into three kinds. Beginners: Whether a developer or a development company, beginners are the ones that have just started on the journey of PHP development. They are still in the growing phase and have yet to deliver effective applications. Mid-Level: These are the ones that have had industry exposure. Such companies have developed a few projects in the past and can deliver effective solutions. Experienced: Experienced programmers excel in developing PHP solutions that add value to your business. They can take ideas and blend them into real-life features. From designing to developing, integrating, and maintaining, they are one step ahead of the rest. Now, on the basis of your project requirements, you can choose any of the above three. However, because a web development company has a team of developers, you are sure to find people with different levels of experience, so whether you want a beginner, a mid-level, or an experienced programmer, you can find them all under the same roof. Sources To Hire A PHP Developer Once you have made your choice of the category from which you wish to hire, you are a step closer. While there are many sources that help in the hiring process, there are again three main ones.
PHP Developer Community: PHP being a versatile and widely accepted programming language, it has huge community support. When looking for someone to help you develop a PHP project, such communities are well worth a look. Freelance Portals: Technology has opened up several domains, and one of the most successful has been job search. Giving people the ease of working at their own convenience is what freelancing is all about; today, there are more than 57.3 million freelancers working. So, your next option is to hire one from the available portals. Certified PHP Development Company: The name itself speaks to its capabilities. There exists a multitude of development companies that cater to the minutest requirements of your project and pledge to deliver optimal results. Such companies hold expertise in their domain (here, PHP), and once you hand over your project to them, your job is done. They take over the task and do everything needed to develop efficient web solutions. Based on the above two elements, you will have shortlisted some proposals that best suit your requirements. Now it's time to scrutinize them and find the ideal one. Things To Consider: Experience: No matter whether you choose an experienced or a mid-level programmer, it is always desirable to reassess the experience of the company. The greater the number of projects served, the larger the knowledge base. This is where a PHP development company outshines individual developers: different people with varied experience add to the total value provided. Past Projects: This is one of the crucial aspects when you choose to hire a PHP development company or a developer. True, you will already have gone through their experience level, but you should also scan through the projects done by the potential company. See what earlier clients have to say about them, the websites' performance, and additional details, to better assess their capability of developing PHP-driven websites. Because development companies work on multiple projects simultaneously, they tend to have an extensive set of completed projects.
Technology Stack: While PHP is used as one of the server-side languages, it is desirable to add more layers to the web page, whether in terms of technology or functionality. Hence, no matter whom you choose, they must have hands-on experience in integrating varied features. Someone with broad knowledge of related technologies and frameworks like XHTML, HTML, AJAX, MySQL, Zend, and CakePHP is preferred; blending technologies, of course, helps develop modern solutions. Creative Skills: Web development is not just about creating solutions, but also about giving them a distinct way to reach targeted users. People who have the capability to think outside the box are more valuable than others. It is not always the quantity, but the quality of development that gains user traction. Here again, a PHP development company appears to be one step ahead of individual developers. With a team of developers comes a bundle of ideas and suggestions for a single project, which helps them come up with the best solution. Conclusion Now that you have taken a ride through the hiring process, you will have realized that both developers and development companies have their areas of expertise. Where one offers a team to cater to your requirements, its counterpart is less expensive. Where one needs to be tracked, the other doesn't require constant monitoring. Yet, weighing up all the pluses and minuses, it goes without saying that PHP development companies are one step ahead of individual developers. They not only streamline the process but at the same time are better placed to deliver applications capable of driving traffic and enhancing brand image. But the choice always remains in your hands. So be wise before you check the price.
https://medium.com/swlh/best-way-to-hire-a-php-developer-69db67af01a
['Mayank Pratap']
2020-02-28 11:19:22.747000+00:00
['Web Development', 'Startup', 'PHP', 'Developer', 'Hiring']
Mapping Drones for Professional Surveyors
The information below was given to us during an interview with drone operator Madison D. from Landpoint, a surveying company in Louisiana. Madison is researching the potential applications and benefits of unmanned technology in the surveying industry. Madison uses a Stratos Aerial fixed-wing UAV with DroneDeploy (and a DJI Inspire 1 on the side). Click to open the maps below (in a new tab) to explore them: The story Topographic surveys are an essential part of all land development projects In this instance, a new real estate subdivision was under development in Northern Colorado. Before homes could be constructed, an extremely accurate topographic survey was necessary for a couple of reasons: To ensure the initial land development (physical alteration of the land) was successful so that it allows for proper water runoff for drainage. To document subdivision topography in relation to the adjacent river’s floodplain for flood damage prevention and flood insurance purposes. Subdivision development can be an expensive operation — especially if progress has fallen behind schedule This particular project was weeks behind due to frequent inclement weather (notice the low-lying areas in the orthomosaic reference map above that are still holding rain water).
https://medium.com/aerial-acuity/case-study-830cfc23db55
[]
2016-07-29 18:06:23.156000+00:00
['GIS', 'Mapping', 'Drones', 'Surveying']
The Weetabix Complex
Photo by Aarón Blanco Tejedor on Unsplash I love my dad. One of my fondest childhood memories is of making the trip to Scotland to watch our favourite football team — Celtic FC. At my youngest, dad would prop me up on railings. Nowadays, we walk in to Celtic Park together as adults, and he couldn’t lift me if he tried. So being an all-round good guy, dad didn’t leave me atop Cithaeron as an infant. I also didn’t bash his head in on my way back to Thebes. However, dad did unwittingly start a chain of events I still deal with to this day. A journey filled with guilt, shame, discovery, and acceptance. A journey that started with a bowl of cereal… Better The Devil You Know Once upon a time, the nuclear family performed a strange ritual. At certain times of day, each family member would gather around a large, wooden structure called a ‘table’ to share meals. The main motivation was eating, but conversation was encouraged in between mouthfuls. Digestion, in more ways than one. I loved these rituals. Until one day, I didn’t. Noises normally lost among familial chatter and breakfast bonhomie were suddenly centre stage — the inhaling, the breathing, the chewing. Dear God, the chewing. I can’t pinpoint the singularity, but poor dad and his bowl of cereal sticks in the memory. In truth, everybody’s eating bothered me. ‘Bothered’ is euphemistic: at a certain point, everybody’s eating infuriated me. Clenched jaw, bunched fists, incandescent infuriation. Neuroses arrived early, and piled up fast as a youngster. Cleaning compulsions, counting fixations, and depersonalisation complications to name a few. I lobbed this nascent noise-aversion on top, and accepted it as another Michaelism — all the little idiosyncrasies that make me, me. As I got older though, I outgrew most of the quirks. At some point I realised: ‘Hey, it’s not the end of the world if someone other than me sits on my bed, and it doesn’t really matter if I count to seven before doing something.’ It was difficult. As the afflicted will tell you: when OCD is having a party, logic isn’t on the guestlist. But eventually, I stopped. Alas, no amount of logic helped with The Weetabix Complex — I just couldn’t shake the way listening to people eating made me feel. All Hail The Interwebs I was born in ’85, straddling the analogue and digital eras. As a kid, telephones were bakelite ornaments fixed to a wall; now they’re pocket computers. ‘Online’ was a place to hang wet clothes; now it’s shorthand for the abstract, omni-connected dimension we spend our lives tethered to. And it was only when online got its new meaning that I got a chance to connect some dots — to find out if my aversion really was just a Michaelism, or if there were others. And lo, there were others. It wasn’t just a Michaelism. Kindred spirits abounded, scattered throughout the world suffering the same fight-or-flight response to everyday noises. Our condition even had a name — Misophonia, literally ‘hatred of sound.’ In a stroke of serendipity, the term was coined right around the time I first sought answers — 2001. Up until that point, sufferers shouldered the double-shame of having a weird condition and zero recognition. With no name for their ills, how many people thought they were literally crazy for wanting to punch their families or run for the hills during Spaghetti Tuesdays? We’ll never know. This strong, visceral, and sometimes violent response is a common symptom among misophonics. 
‘Triggers’ — the set of offending sounds unique to each person — vary in nature, but commonly centre on the mouth: eating, breathing, chewing, whispering, smacking of lips. Cases range from mild to severe, but misophonia can drastically affect lives — whatever the severity. And because triggers often come from loved ones, interpersonal relationships get strained even in light of the condition’s growing recognition. Living with Misophonia ‘Hatred of sound’ is a slight misnomer. For example, I love Stevie Nicks; I detest the sound of people eating. Fleetwood Mac makes me glad to be alive; hearing people breathe through their mouth doesn’t. This dichotomy is what separates misophonia from phonophobia or hyperacusis — both characterised by an indiscriminate sensitivity to loud sounds. The love/hate dichotomy played out recently. I’m walking round the supermarket, running an eye over the vegetable section, invested in a lively discussion on surveillance capitalism. Both me and the podcast are motoring along nicely, when out of nowhere the guest speaker tucks in to a chicken salad. My whole demeanour changes. I’m angry. Products get slammed into the basket, instead of a loving, Tetris-like arrangement. I persevere for a bit, hanging around until she finishes. But then it gets worse. The chewing stops, but the there’s no deliverance: remnants of chicken salad are evidently stuck to the guest’s back molars, and I’m treated to the sound of her tongue flicking in reverse, groping around to dislodge the offending bits. Horrified, I switch to ‘dreamy vibes’ for aural bleach. Yin and yang, then. Hellish chicken salad overtures, and a soft-lounge cleanser. Podcasts are a new minefield, but I’ve been dodging these sorts of triggers in traditional media for decades. Dialogue and exposition were regularly sacrificed on the altar of silence — all I could do was hit mute and curse the scriptwriter for setting key scenes around a kitchen table. Yeah Vince Gilligan made one of the greatest shows in history, but I’ll never forgive him for Walt Jr.’s breakfast obsession. At points it felt more like Breaking Bread, than Breaking Bad. So too The Sopranos. David Chase’s layered and cerebral mob drama is peerless, but it loses marks for every time I had to endure Tony breathily manipulate ice cream, or inhale chicken parmagiani. I might have been able to mute Walt and Tony, but I can’t mute people in real life. And I can’t mute the feelings of shame at harbouring such revulsion, simply because someone close to me is eating the ‘wrong way’. Kindred Spirits After the podcast incident, I petitioned the host to issue a blanket ban on future eating. He didn’t respond, but I did get a Twitter ‘like’ notification from an account named ‘Hear Our Misophonia!’ I connect with the woman behind the account and ask her if she suffers similar relationship issues. ‘Very few people in my personal life know of my shattered self in its most real and raw form,’ says Kshara. ‘For those who know this side of me, it strengthens our bonds and our relationship because of their understanding. ‘Others who know I have misophonia are aware of the more technical definition, not the extent of how it affects me. It doesn’t affect our relationship but they are respectful of it.’ How do misophonics cope, when triggers are everywhere? 
Unsurprisingly, Kshara is ‘packed in with the majority of sufferers whose triggers involve mouth noises.’ She also notes the common ‘visual triggers related to these sounds’ and that ‘mouth noises are at the top of my list.’ These visual triggers are an unwelcome ribbon on the unwelcome box. I ask Kshara about her coping mechanisms, and learn she uses similar defence methods. ‘I carry my earbuds everywhere, whether I have sound playing or not,’ she says. ‘They’re my shield against triggers even if they can’t block everything out. Because I can’t always leave the situation or have earbuds in, I take extra precautions like eating healthy, drinking water, and getting at least 8 hours of sleep. ‘Anyone in school or with a job knows how hard these can be, but treating my body badly makes me even less tolerant of triggers, which makes me feel worse and contributes to a vicious cycle.’ Solidarity With Sarah. I also reach out to Sarah, who admits she’s ‘at the severe end of misophonia impairments’. Her triggers include ‘noises from people and animals, to technology and inanimate objects.’ Over email, Sarah tells me about added complications. ‘I suffer from the common misophonia trigger sounds and I have visual triggers from movements that a lot of people make,’ she says. ‘I also have triggers in the language I read and hear. What ties these all together is the repetitive nature.’ What does life look like for somebody at the far end of the misophonic spectrum? Sarah concedes it’s ‘pretty awful.’ And because her trigger-set extends far beyond the common oral-centric complaint, there’s little respite. ‘[A trigger] that is still developing in severity of response is a bird call from a species that has taken up residence outside my home in the last few months,’ she says. ‘I already have another bird call that has been a trigger for around six years now after first exposure, which has me considering moving house to get away from it.’ How does misophonia limit her life? She offers an emphatic hypothetical. ‘I have often said that if I was trapped in a room with a trigger and I had no headphones or earplugs, I would throw my head against a wall until I knocked myself bloody and unconscious,’ she says. ‘Think of an animal who is trapped somewhere and has a threat to its life that it must get away from; it could be in a room or a physical trap on its limbs. That animal will do anything to get away; it will end up trying to run up or through walls, chew its way out or chew its foot off. For me, it feels that instinctual; a trigger feels like a threat and I must make it stop or escape. The only difference between me and an animal is I won’t attack and bite people!’ I empathise with Sarah. In particular, her comments about affected relationships resonate. I think of the guilt and anguish surrounding my own reactions over the years: wrestling with the idea of being an intolerant oik, the frustration of undiminished symptoms into adulthood, and how bizarre it must look from my dad’s perspective. ‘I used to wonder if it was just part of my personality,’ says Sarah. ‘When you grow up not knowing people can have reactions to sensory stimulation — other than pain or pleasure — you can misattribute your reactions to something else. ‘Maybe I’m just an easily irritated person? Do I have too high expectations for other people’s behaviour? Why am I over reacting? What is wrong with me that I can’t bloody stand some consonant sounds? 
‘You question yourself and other people can get the wrong idea about you, a bit like when people think children are having a tantrum but really it’s a sensory meltdown. The thoughts and feelings I have when I’m triggered are completely opposed to what I think and feel when I’m trigger free, well except for my beliefs around ideal etiquette!’ Kshara and Sarah’s stories mirror those featured in the 2016 documentary ‘Quiet Please’. This critically-acclaimed, talking-heads documentary showcases a cross-section of sufferers and the hardship of living with misophonia. Like Kshara and Sarah, those featured talk of childhood onset, difficulty in expressing themselves and their symptoms, and the effect it has on everyday living. In one illuminating section, an interviewee takes pains to clarify the difference between the mild discomfort everyone experiences at annoying sounds, and the misophonic’s panic-inducing, fight-or-flight response to triggers. The subtext throughout was: we’re not crazy, this is real. Scientific Breakthrough? Sounds Good to Me. Is it real, though? Is it auditory? Intolerance? What even is misophonia? A 2017 study conducted by Dr Kumar of the Institute of Neuroscience at Newcastle University sheds some light on those questions, and the underlying mechanisms. Dr Kumar gathered 20 misophonics, and 22 control subjects. Both groups were exposed to trigger sounds, unpleasant sounds, and neutral sounds. MRI scans showed brain aberrations in the misophonic group, and a higher presence of myelination — a fatty sheath protecting nerve fibres — in an area of the brain responsible for processing and regulating emotions. The ventromedial prefrontal cortex (vmPFC) is part of a complicated brain-networking; it corresponds with the amygdala, the posteromedial cortex (PMC), and the anterior insular cortex (AIC). An over-activation of the AIC gives undue importance to everyday sounds. Faulty connectivity between each of these brain regions — including the fight-or-flight regulating amygdala — means misophonics interpret trigger sounds as threats, which causes the jarring reactions. Crossed wires, in essence. ‘[O]ur data suggest that abnormal salience attributed to otherwise innocuous sounds, coupled with atypical perception of internal body states, underlies misophonia,’ says Dr Kumar in his concluding comments. ‘Misophonia does not feature in any neurological or psychiatric classification of disorders; sufferers do not report it for fear of the stigma that this might cause, and clinicians are commonly unaware of the disorder. ‘This study defines a clear phenotype based on changes in behavior, autonomic responses, and brain activity and structure that will guide ongoing efforts to classify and treat this pernicious disorder.’ Pernicious indeed. A lack of treatment notwithstanding, we can thank Dr. Kumar and his team for shining their torch on a benighted subject. The fact we now know it’s a neurological disorder does little to help us in the short-term, but his results mark a leap forward in knowledge. His study forms the starting point for further investigations, and a springboard to evidence-based treatments. And who knows, maybe one day I’ll be able to join my family around that wooden structure of yore, where chatter is king and the chewing invisible. Maybe one day soon, I’ll finally be able to vanquish The Weetabix Complex. Do eating sounds make you flush with fury? Do breathing partners deserve prison sentences? 
Let me know if you have misophonia, and we can definitely not have a chat over some ice cream.
https://medium.com/swlh/the-weetabix-complex-d0943547deee
['Michael Kincella']
2019-12-12 08:25:41.879000+00:00
['Disability', 'Misophonia', 'Science']
Remembering the human using the machine
Whenever I hear about new developments in the tech world, my initial reaction tends to be enthusiasm. And right now, many new developments in tech tend to be dominated by AI, machine learning and automation. At its full potential, this technology has the power to assist, delight and improve experiences for people. Yet as we build things for larger and larger groups of people, the harder it gets to anticipate its effects; good or bad. So when I first joined a data science team at the cloud accounting software company Xero, I was excited for the opportunity, even if I wasn’t completely sure what my role would be. (I remember being asked “Why does data science need a designer?” and not initially having an answer — that was something that didn’t become clear to me until much later on.) It was with this team, when testing with customers, that we observed how brilliantly such technologies could address customer pain points. However, we also observed that we needed to respond to all the new problems and new behaviours that the technology could create — ones we didn’t see coming. It’s just like magic! For someone who isn’t technical, the perception of machine learning and automation can be that it’s almost magical. We see this in our own customers when something tedious and time-consuming has been automated or anticipated for them. When they describe something as “magic”, it comes from a sense of delight — our product saved them time, and has done the work for them. But what happens when it gets things wrong? Automation carries high expectations of speed and accuracy We know at Xero that it’s a win for our customers when we can automate repetitive accounting tasks, giving them more time to spend on their business. Depending on the customer’s business, this could save them hours every week. When my team was initially testing automated accounting processes with customers, we left the details of the automation process vague and listened as our customers set their own expectations of what it would be like. We included both ideal (where everything works perfectly) and realistic scenarios (where something is missing or something goes wrong). One of our first learnings was how difficult it was going to be to match our customers’ expectations. The test participants were excited by the prospect of capturing and transcribing information automatically and assumed that the automated process would do it both quickly and accurately. Inevitably, these participants were left disappointed and confused when it worked slower than they expected. For some, this became a barrier for further use. The break in their expectations left them wondering if perhaps it was the process that was broken, or if they’d be better off entering details manually instead. Delight when it works, and devastation when it doesn’t We discovered other reasons why replacing a manual workflow with a fully automated one wasn’t always going to work smoothly. Participants talked about typically keeping a physical paper trail in case there were problems in the future. When moving onto an automated process, they explained that they would need to keep double-checking their paper records until the service proved itself to be reliable and accurate over time. However, once the participants had used our automated process correctly, we saw higher confidence in the process, and more casual behaviour around double checking. They quickly trusted (and in some cases relied on) “the system” and its ability to get things right. 
With these raised expectations came lowered tolerance and patience for problems or errors. When incorrect data came through, participants initially assumed they had done something wrong. When eventually realising the error came from the software, this quickly turned to anger and embarrassment. Our customers felt that they had been let down and their trust betrayed. In fact, trust in Xero as an entire product (not just the automated process) was damaged. Participants described the pain of additional tasks they would need to do if mistakes like this happened — re-checking invoices and calling their clients to check credits, for example. The ramifications of one error spread far and wide. Our design research lead described the event as “The Big One” — that like Blackpool’s famous roller coaster, when we got automation right, we inflated expectations that customers already had of Xero. When we got it wrong, it was an emotional free fall. When technology doesn’t perform in the way the customer needed it to, the results can be devastating. Photo by Dmitry Mashkin on Unsplash People want to be pilots, not just passengers We often assume that the workflows and existing behaviours of customers could be easily overwritten if it were replaced by the option of a ‘simpler’ process. What we learned instead is the need to design alongside (and not against) their existing behaviours. 1. The mental model of some workflows extend beyond the task. Automation carries the risk of reducing the visibility customers have over the smaller details of their data. Customers who have a habit of overseeing these details have a better sense of what is happening in their business. Reviewing this data can also prompt them to do other important tasks. 2. We can’t help our customers once they’re outside of the system. Tax compliance and customer relationships are a business’s bottom line. If something goes wrong in their software, there could be a huge impact with everything it touches, and that’s something we needed to keep in mind. 3. Customers really need to understand the level of control and visibility they have before they are willing to try new processes. Despite the benefit of automation, they still need a sense of control. The automated process needs to exist to serve them and their needs, without replacing them. They needed to be the one piloting the plane, not just the passenger. “We prefer to think of AI as Augmented Intelligence rather than Artificial Intelligence. Taking a human-centered approach to building relationships between AI and humans compels us to meet humans on their terms, building relationships of trust and respect, and always remembering that intelligent systems must exist in service of humanity, not the other way around.” — Justin Massa, AI Needs to Earn Our Trust, Just Like Any Human Relationship Obvious questions are easily forgotten I was recently talking to a guest at a dinner party. She was a textile technician –her job was to experiment and develop different fabrics for a sportswear company. She was telling me a story about developing this super fabric with her team. She described an amazing new material that was waterproof, dustproof, wrinkle-resistant, light and extremely durable. Yet it wasn’t until somewhere in the middle of making this fabric that somebody suddenly thought to ask, “Hey, is this going to be comfortable?” Which was when they all realised they were developing a super amazing fabric that had the texture of rough sandpaper. 
Sometimes even the best intentions lead to unintended consequences if we’re not asking the right questions. Photo by rawpixel on Unsplash It seems like an obvious thing to consider, but questions like this are overlooked all the time in any kind of industry. It’s so easy to get caught up on thinking we’re helping people with our innovative solutions, and then suffer the disappointment when it doesn’t work for them. What these research sessions taught me is that it’s important to always shift focus back to the customer and their problem and not get caught up in all the different ways the amazing tech can maybe solve it. Staying connected to the human using the machine Technology companies need to make continuous advancements to respond to needs and improve people’s lives for the better. However, I also think that we are driven to innovate first and ask questions later. Asking questions is important because improving products may not be as simple as just automating a tedious and painful task in isolation. While we believe we are making things simpler and easier, we may also be unintentionally altering behaviours and introducing new problems for our customers. It’s also important to establish balanced expectations with our customers. And that while in some instances customer trust is easily earned, we need to remember that it can also be easily lost if we are careless. This can be devastating to customers, their business, and to our brand as a whole. As the designers and product teams embedded in the data, we need to keep returning to the customer’s problem. Because we know that machine learning and automation isn’t magic, and it doesn’t fix people’s problems by making them just disappear. Change happens incrementally and it takes a lot of rigour. And while the data tells us one thing, talking to actual customers gives us the perspectives we need to challenge our enthusiastic assumptions.
https://medium.com/xero-design/remembering-the-human-using-the-machine-f910caafe153
['Jannyne Perez']
2019-04-02 21:44:08.849000+00:00
['Design', 'Automation', 'User Experience', 'User Research', 'Machine Learning']
How Robin Sharma’s 90/90/1 Rule Can Help You Achieve Your Most Ambitious Goals
The 90/90/1 Rule “A genius? They all have one trait in common: They were able to spend extended periods of time in isolation, focused monomaniacally on their most valuable project.” — Robin Sharma According to Sharma, one factor that differentiates the top performers from the herd is how they spend the first 90 minutes at work. Instead of chatting with coworkers, scrolling through social media, or checking their email, the top 5% focuses on a single task. Based on this insight, Sharma teases us with a challenge: For the next 90 days, can we devote the first 90 minutes of our productive schedule to a single life-changing goal? First thing in the morning, our willpower and mental focus are at their peak. Sharma calls this time the “platinum hours”. If we invest them in our most valuable project, our life will irrevocably change. The 90/90/1 Rule gives us a taste of this life-altering habit. But it isn’t easy. The first time I heard about Sharma’s rule, my blood pumped with drive. I had a book I’d been writing for years, unable to get through the first 50%. Finishing my first fantasy novel seemed impossible, but I gave the 90/90/1 Rule a shot. After the first week, I succumbed to my phone’s notifications. Disappointed, I said, screw it, and returned to my inefficient routine. Months went by without any progress. Then — as if by fate — I came across Sharma’s challenge again. Bracing myself, I went all in. This time, though there were mornings I fell victim to my phone or email, I didn’t quit. For 90 days, I feverishly wrote. To my surprise, when the 90 days ended, I didn’t stop. And when the 90 minutes were up, I didn’t leave my desk if I could afford it. As if by magic, I installed a writing habit. Fast forward to today, I’ve not only written that fantasy book (which was crappy), but two more as well. Though I can’t say I dedicate the first 90 minutes of every day to writing, I make daily progress on my books or articles. Inspired, my partner followed the 90/90/1 rule to fantastic success. Though it’s still in its infancy, he’s now proud to own a profitable side hustle. In the end, we have realized that the 90/90/1 rule works as a trigger. Because of its challenging nature, it will motivate you to stay on course. Because of its duration, it will stick with you long after the 90 days are over. Sharma’s rule is an effective habit-implementation tool.
https://medium.com/the-ascent/how-robin-sharmas-90-90-1-rule-can-help-you-achieve-your-most-ambitious-goals-48712d5e1d2b
['Alexa V.S.']
2020-12-08 20:02:18.583000+00:00
['Advice', 'Self Improvement', 'Habits', 'Success', 'Productivity']
BLM Has Sent Brands Into a Reviewing Frenzy
Logos: Mars/ConAgra Foods/PepsiCo BLM Has Sent Brands Into a Reviewing Frenzy But are they at risk of whitewashing history? Many major brands are undertaking reviews of logos, packaging design, and marketing messages in the light of the Black Lives Matter protests and the broader debate over racial equality. Big brands such as Amazon, H&M, and McDonald's have all come out in support of Black Lives Matter. And other businesses are renewing their brands to better reflect changing consumer demand. What they want is to get rid of product names and packaging designs that seem racist and stereotypical. Aunt Jemima has been under fire and Quaker Foods has promised to drop the name and logo as its “origins are based on a racial stereotype”. Uncle Ben’s is also set for a rebrand after backlash over its packaging. Exactly what they’re going to do isn’t clear yet. The use of the terms “aunt” and “uncle” harks back to how white southerners referred to older black people or African-American slaves rather than using courtesy titles like “Miss” or “Mr.”. Mrs. Butterworth’s has also announced a complete brand and packaging review. “The Mrs. Butterworth’s brand, including its syrup packaging, is intended to evoke the images of a loving grandmother,” Conagra Brands said in a statement. They admitted that their product design may be interpreted in a way that’s inconsistent with their values, as some have associated the shape of the brand’s syrup bottles with a stereotype of Black women. And this shift is affecting place branding too. Rhode Island, a seaside state in New England, U.S., is now seeking to change its official name — The State of Rhode Island and Providence Plantations — because of slavery ties. When we challenge branding, we’re challenging the narratives brands represent. And they should represent where we are today, not where we were 130 years ago. But these changes can only be symbolic Because symbols and statues don’t create racism, they’re artifacts of it. We need to steer clear of attacking the symptoms at the expense of curing the disease. Precisely because the problem is structural, we need to make sure that the conversation stays focused on where real change is required: on a symbolic and emotional as well as an organizational level. That’s what makes this so challenging. These brand and design choices are a result of creative industries that aren’t representative of society. A 2018 Design Council report showed that only 12% of all design managers and business owners were from minority ethnic groups. Companies need to overhaul how they work because you don’t get “diversity of thought” in a classic hierarchical structure. Developing diverse networks within the industry also helps. Just because something isn’t a problem for you doesn’t mean it’s not a problem, and having access to these different viewpoints can make all the difference. Do more than just correct the error Issues around branding ultimately come down to business. If businesses aren’t updating their branding, they’re limiting their audience. If they’re not speaking in a way that people want to hear, their business will die down. Some brands are slower to respond while others try to anticipate how consumer sentiment is going to change in the future. The ones who try to think about what will be required of them in the future to stay relevant usually fare better than those who simply respond to public pressure. The kind of expression and language that is acceptable is constantly changing. 
Just looking back 20, 30, 40 years, public perception of what’s acceptable has changed tremendously. Parodies of ethnic or sexual minorities that were acceptable several decades ago have no place in public conversation today. It’s no longer acceptable for the general population to have a laugh at the expense of minorities or to appropriate cultural aspects for the sake of marketing. How can brands proceed in an anti-racist manner? The brands that come out of a crisis the strongest are the ones who act with agility and find a purpose. And there is an opportunity for brands that are under fire now to come out of this stronger. But we have to ask, why does Uncle Ben’s think removing a black man’s face from the brand is a big step in helping to put an end to racial injustices? According to owner Mars, Uncle Ben was an African-American rice grower known for the quality of his rice. Gordon L. Harwell, an entrepreneur who had supplied rice to the armed forces in World War II, chose the name Uncle Ben’s as a means to expand his marketing efforts to the general public. The accompanying image of an elderly African-American man is said to have been based on a Chicago maître d’ named Frank Brown and has been on Uncle Ben’s packaging since 1946. And Aunt Jemima is based on a freed slave and activist, Nancy Green, who worked as a chef for a Kentucky family. She became the face of Aunt Jemima (who is a ‘Mammy’, a slave who acts as a housekeeper for white families) in a branding story that blurs the line between truth and marketing. By completely reworking these brands, will they be rewriting history and erasing the historic significance of the people the brand designs are based on? If anything, they now have a unique opportunity to retain these black symbols and ensure that they stand for something positive — these brands have enough money and power to enact meaningful change. To do that, these brands need to have difficult conversations about what it means to be who they are. And it needs to become a part of their modus operandi to constantly do so. Yale University is one example of a brand that has long grappled with its legacy The university is named after Elihu Yale, who got rich plundering India and traded slaves before giving books to a cash-strapped university in Connecticut, and its students and administrators have spent a lot of energy considering his legacy. As a result, they have sequestered two of his portraits in a closet of shame and opted to display a third with curatorial stress on the enslaved Black child next to him. Yale has no plans of changing its name to reckon with its benefactor’s crimes because the university has long been divorced from the life of Elihu Yale. “No one was venerating him; no one was trying to live up to his ideals. The name Yale does not belong to Elihu, but to the university, with its faults and virtues, not his.” - Graeme Wood, Yale Doesn’t Need to Change Its Name In chaos, there is always opportunity, and this is a chance for brands and big companies to put their values front and center by fully embracing their heritage and updating their marketing messages for the discerning audiences of the 2020s. And it doesn’t mean hiding the ugly truth Gen Z is more interested in truth and transparency than highly polished brand images and sleek corporate messages that hide exploitation and a blind pursuit of profit. Raised to question fake news and to be suspicious of secrecy, they hold sincerity sacred. 
This study found that a majority of Gen Zers are more skeptical about brands and want proof that brands align with their values. The research from The Consumer Goods Forum and Futerra shows that 79 percent of Gen Z believe brands are never honest, or not honest enough, about environmental issues.
https://medium.com/swlh/blm-has-sent-brands-into-a-reviewing-frenzy-120083c777cd
['Aliyar Hussain']
2020-09-14 14:17:49.851000+00:00
['Branding', 'Business Strategy', 'Blm', 'Marketing', 'Racism']
In Theory, we all Need sex
All humans are sexual creatures. If we weren’t, our species would have gone extinct long ago. And yet, many of us remain reluctant to accept sex as part of our shared humanness, a key component of our relationships and interactions. Some of us have been conditioned to view sex as dirty and reprehensible, something we should endeavor not to want, think about, or even discuss. As a result, we either go without, have bad sex, or do not trust ourselves to state our needs, preferences, or fantasies. We all have them but we pretend we don’t so as not to appear wanton or lewd, so as not to jeopardize the carefully curated exterior many of us present to the world. This unwillingness to embrace our sexual selves creates myriad issues that can destroy lives. Gay folks end up trapped in heterosexual relationships; trans folks end up trapped in a body that doesn’t fit; hetero folks end up trapped in lies. With courage and communication, almost any situation is reversible. But not everyone will find it in themselves to accept their sexuality in all its complexity. Further, not everyone has the good fortune of living in an open-minded and supportive society. Or of having a partner with whom we’ve created a safe space conducive to the joint exploration of sexuality. Although sex is never a solo pursuit, how many of us go it alone regardless of partnership status? How many of us hoard our desires for lack of a willing interlocutor? How many of us outsource our fantasies to strangers via specialized websites or by hiring a sex worker? How many of us have resigned ourselves to a sexless existence in which the only relief we get comes in the form of erotica or porn, when we’re able to react to it at all?
https://medium.com/sexography/in-theory-we-all-need-sex-2f533095e51f
['A Singular Story']
2020-09-09 19:12:41.340000+00:00
['Relationships', 'Mental Health', 'Culture', 'Sexuality', 'Self']
Use Your Data Skills to Make Money Online
Online Courses If you have been working in the field of data then you have valuable skills and knowledge that others can learn from. There are many online platforms which allow anyone with the appropriate skills and experience to create and market online courses and earn an income. DataCamp has a network of 270 instructors, which you can apply to join. They state that they currently have an audience of over 5.9 million data scientists, and course instructors can earn royalties through their revenue-sharing platform. The current FAQs on the website state that average monthly earnings for instructors are in the region of $1,000-$2,000. I am currently in the application stages to become an instructor here and I will update this article with actual earnings data if I am successful. If you don’t have luck with DataCamp or it doesn’t feel right for you, there are many other online course platforms where you can make money. One of these is Pluralsight. This course platform recruits data experts to create video-based, written or hands-on educational material. Similarly to DataCamp, payment is based on royalties which are linked to the number of views that your content has achieved and the volume of paying subscribers the platform has. This excellent article by Troy Hunt gives some insight into working for Pluralsight, including some information on earning potential. There are a huge number of other places where you can publish online course content and earn money. These include course platforms such as Udemy and Skillshare. You might also consider publishing a series of tutorials on a social media platform such as YouTube, where you could earn money from paid advertising. Blogging I have been blogging and earning a modest additional income through the Medium Partner Program for the past year and a half. As far as blogging platforms go, Medium is probably the easiest to get started with and earn money from. You don’t necessarily need to have a large following; if your writing is interesting and of high quality, then it is possible for Medium to distribute it in tags from which you can gain a high number of views. I currently earn around $400 a month and spend 5–10 hours a week writing, typically publishing once a week on the platform. I’ll be writing a follow-up article soon on how, when and why I write about data science on Medium. An alternative to Medium is to host your own blog as a standalone website. WordPress is a great platform for this. You can start a blog for free or you can pay for premium features such as a unique domain name and more storage. If you can get enough traffic to your blog there are a variety of ways to earn money, including Google AdSense and affiliate programs such as Amazon Associates. Freelance work The final areas I will talk about in this article are freelance agencies and websites. There are a number of platforms online where you can offer your skills, experience and services in return for payment. Upwork is probably one of the most popular sites. Here you can create your profile and apply for freelance jobs. A quick search for data-related jobs advertised right now reveals 16,744 results.
https://towardsdatascience.com/use-your-data-skills-to-make-money-online-6afc7a32d6ba
['Rebecca Vickery']
2020-05-11 13:28:53.022000+00:00
['Covid 19', 'Money', 'Data Science', 'Technology', 'Coronavirus']
A faster way to deal with email — “Slack-style email guideline”
1. Do you actually need an email for this? Ask yourself if you can reach out to your client via another channel. The best way to reduce emails is not to send emails. So be part of the group that does not use email by default but establishes other channels against all odds. 2. Keep it short to make it fast One of the big advantages of tools like Slack is the short message style. It’s more like a WhatsApp group or chat. Our average attention span has dropped below the attention span of a goldfish. So the faster people understand what you want, the easier they find it to answer. Of course we do not chat via email (that would be nonsense), but we keep it super short. Ideally no longer than 3–5 sentences. 3. Stay away from formatting — you’re not sending a fax! Especially younger people hate emails because you need to 1. find the contact person’s email address, 2. come up with a subject, 3. start with an intro, 4. make it look nice and 5. end with an outro. Oh, and of course attach an official signature with the company logo, the legal section and your 5 social media profiles. In fact every email program displays your email in a different way, so even if you use a nice font it’s a waste of time. All this makes you slow. 4. Work with “mentions” Group communication tools like Slack or HipChat work with so-called “mentions”. Instead of writing “Hi Christoph” or “Hello everyone” you write “@CM” or “@everyone”. Mentions allow you to structure an email and pull the readers’ attention to their section. That’s how a section can stay short even if your email in total is longer. 5. Work in groups, not in silos A big advantage of tools like Slack is the searchability across all channels, even if you just joined the company. You have a full history to work with. CC’ing everyone is not an option, but you can build Slack-like channels with a history if you leverage email distribution groups, e.g. Google Groups. These are project or client email addresses that include all project members. You would CC these groups, run a filter on them in order not to have an overflowing inbox, and be able to search through everything even if you just joined the company. This helps a lot with onboarding and knowledge management. This guideline just covers the basic ideas. We wanted to keep it simple in order to make sharing with others easier. Spreading this idea is actually what will help all of us save time and nerves.
https://medium.com/work-your-way/a-faster-way-to-deal-with-email-slack-style-email-guideline-a862a9080984
['Christoph Magnussen']
2015-11-10 12:38:40.279000+00:00
['Slack', 'Communication', 'Productivity']
Fiat to crypto at the touch of a button — Crypterium to launch the easiest way to buy CRPT
Fiat to crypto at the touch of a button — Crypterium to launch the easiest way to buy CRPT Crypterium · Sep 3, 2018 As you’ve probably heard, we want to open the cryptocurrency market up to everyone and not just the enthusiasts. Hence this update, which enables Crypterium users to buy crypto with fiat (euros) in the easiest way possible. Going to an exchange isn’t necessary anymore, saving you the hassle! In addition to that, we’ve improved the identity verification process (KYC) for our users. Read on to keep up with our latest updates. How to buy CRPT through Crypterium The process is no different to a standard bank transfer: you enter the amount of CRPT you wish to purchase, you get your invoice with the transaction details, and you wait for the status of your operation to change from “pending” to “invoice paid”. That’s it! When will the option become available? The feature is already available for our beta-testers. Once they provide us with their feedback, we’ll make it accessible to everybody else. Moreover, you’ll soon be able to buy other cryptocurrencies with fiat, not only CRPT. And what is important to mention here is that the payment method we are going to introduce next is even more convenient — we are working on a solution that makes it possible to purchase crypto with a bank card. Do you wish to buy cryptocurrency instantly? Then you’re going to like it. What can you then do with your CRPT? Once you have bought your CRPT in the app, you can: - Hold it securely, just like you would any other currency - Exchange it into any other crypto or fiat currency - Use it to pay for transaction fees — just as a machine runs on fuel, Crypterium payment solutions are powered by CRPT tokens. Every time someone makes a crypto-fiat payment, a fee equal to 0.5% of the value of the transaction in CRPT is taken from the user’s account and burned. The process is governed by smart contracts, so there are no exceptions: with each transaction, the number of tokens in circulation decreases. But don’t worry: we will not run out of tokens. Following the laws of economics, increasing demand for tokens will increase the price, thereby reducing the number of tokens burned for each transaction. What else has been updated? With the goal of making your identity verification process smooth and easy, we’re constantly working on improving our KYC (know your customer) system. Recently, we’ve updated it with more detailed instructions, disclaimers and push notifications. Also, we’ll soon be able to let our users conduct small transactions without completing KYC. We’re almost sure that verification procedures are not your favourites, but KYC is mandatory in the financial world, and it’s our responsibility to keep all the processes within Crypterium transparent and regulatory compliant. About Crypterium Crypterium is building a mobile app that will turn cryptocurrencies into money that you can spend with the same ease as cash. Shop around the world and pay with your coins and tokens at any NFC terminal, or by scanning QR codes. Make purchases in online stores, pay your bills, or just send money across borders in seconds, reliably and for a fraction of a penny. Join our Telegram news channel or other social media to stay updated! Website ๏ Telegram ๏ Facebook ๏ Twitter ๏ BitcoinTalk ๏ Reddit ๏ YouTube ๏ LinkedIn
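To make the burn mechanics described above concrete, here is a minimal back-of-the-envelope sketch. Only the 0.5% fee, paid in CRPT and burned per crypto-fiat payment, comes from the article; the function, variable names and token prices below are illustrative assumptions, not Crypterium’s actual smart-contract code or market data.

```python
# Rough sketch of the fee-burn arithmetic described above (illustrative only).
FEE_RATE = 0.005  # 0.5% of the transaction value, paid in CRPT and burned

def crpt_burned(transaction_value_eur: float, crpt_price_eur: float) -> float:
    """How many CRPT tokens are burned for one crypto-fiat payment."""
    fee_eur = transaction_value_eur * FEE_RATE
    return fee_eur / crpt_price_eur

# The same 100 EUR payment burns fewer tokens as the token price rises,
# which is the article's point about supply shrinking ever more slowly.
for price in (0.5, 1.0, 2.0):  # made-up CRPT prices in EUR
    print(f"CRPT at {price:.2f} EUR -> {crpt_burned(100, price):.1f} CRPT burned")
```

Under these made-up prices, a 100 EUR payment burns 1 CRPT when the token trades at 0.50 EUR but only 0.25 CRPT at 2.00 EUR, which is why circulating supply keeps falling without ever running out.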
https://medium.com/crypterium/fiat-to-crypto-at-the-touch-of-a-button-crypterium-to-launch-the-easiest-way-to-buy-crpt-16fc0788db4c
[]
2018-11-09 12:47:35.080000+00:00
['Bitcoin', 'Mobile App Development', 'Digital Banking', 'Cryptocurrency', 'Token']
Storm and Fire
Storm and Fire A poem about a dying earth I see the fires raging, the fumes blanketing the skies. I hear a continent screaming, as its perishing ecosystem sighs. The powerless koalas are burning, unable to outrun the flames. The does and joeys are embracing over a charred buck’s remains. The mighty glaciers are melting, tumbling down to the seas, and the little islands are drowning, as we go up a few degrees. The helpless turtles are emerging with plastic in their throats, and the lively dolphins are choking, no longer chasing the steamboats. The coral reefs are vanishing, and with them, our oxygen supplies. And the earth’s balance is tipping, with brutal winters, hotter Julys. I see the dead barks adorning the one place we call home. I hear the dying earth whispering that we too shall perish, but never alone. — Sharika Hafeez
https://medium.com/the-junction/storm-and-fire-d5f0e9d70a95
['Sharika Hafeez']
2020-01-31 20:39:44.441000+00:00
['Global Warming', 'Earth', 'Climate Change', 'Poetry', 'Creativity']
Let it all unravel
Let it all unravel When soldiering on doesn’t work anymore. My grandfather was a fisherman. Never owned a car. Didn’t drink. I think of him in winter sitting by the window watching the boats in the harbor. The open fire roaring, his pipe nearby ready to add aromatic smoke to the scene. On his lap and all around his feet is a tangled mass of string. Over days, it will slowly transform back into something recognizable as a fishing net. This is not a linear process. He is not starting at one end and finishing at another. Before he can start mending the net there is a period of unraveling. Loosening the tangle and knots. You cannot pull it apart, that would only tighten the knots. You cannot focus on one area to the exclusion of others. You have to loosen the whole mass of string equally before working on any repairs. Let it all unravel. This is where I find myself now in my grief process. My daughter took her own life 18 months ago. At first, my response was to find ways of coping. I organized the funeral. I checked in on people. I made sure they were ok. There was some need to keep normal things going. To keep the job I had. To carry on with a musical project. There was a sense of struggling through the days. I knew it couldn’t carry on. I started to hear stories of those who had soldiered on in their grief for decades only to succumb to mental illness themselves. One man had lost his job, was struggling in his relationship, and was returning to counseling and a grief support group after 30 years. Let it all unravel. Photo by Brook Anderson on Unsplash This is where I am now. Trimming my life down only to things that allow this unraveling. No big plans, no purposeful path stretching into the future. Just paying attention to this tangled mass in front of me. Noticing that the more I experience my sorrow, the easier it is to let in the joy. Feeling my anxiety. Then feeling the layers of commitment behind the anxiety. Noticing anger and blame when they arise, looking behind that. Noticing that any feeling directed outwards has a source from within me. Crucially, I experience all these things together. My image of my Grandad helps me to trust that I can only heal by allowing this unraveling. Lay all my feelings in front of me. Allow them to loosen, to gradually spread out and reveal the hidden layers. Only then when it is all visible can I begin the repairs LET IT GO Let it go, Let it out, Let it all unravel, Let it free And it will be A path on which to travel. (Michael Leunig)
https://medium.com/live-your-life-on-purpose/let-it-all-unravel-7988cec0a815
['John Walter']
2019-12-29 12:23:47.382000+00:00
['Grief And Loss', 'Life Lessons', 'Mental Health', 'Being Present', 'Progress']
ORCA platform secures first e-money institution for SEPA payments, partners with MisterTango
EU-based fintech ORCA just signed a partnership with an e-money institution MisterTango making the first step into the world of payment processing. From now on ORCA, a platform bridging traditional banking and cryptocurrency services, will have the ability to develop solutions for SEPA payment account creation using technology utilized by MisterTango. Moreover, in the future ORCA will also be able to issue debit payment cards. Only a handful of companies in the cryptocurrency space can offer these services at the moment. ORCA aims to join the list before the end of this year. Cooperation with a licensed e-money service provider is opening a lot of doors for the evolution of the ORCA project. Identifying and securing beneficial partnerships is one of the fastest ways to grow the digital business. MisterTango is one of the fastest growing payment processor enterprises in the region and has grown more than 20 times in revenue and scope over the last year. The newly formed partnership will enable ORCA to set up SEPA accounts and initiate payments on behalf of platform users for a seamless fiat and crypto experience inside the same digital application. MisterTango’s vast network of payment options and operational convenience will go a long way to help ensure ORCA platform’s quality of service.
https://medium.com/cc-connecting-crypto-with-banking/orca-platform-secures-first-e-money-institution-for-sepa-payments-partners-with-mistertango-115749ffa767
[]
2018-05-31 16:51:12.082000+00:00
['Token', 'Startup', 'Finance', 'Cryptocurrency', 'Banking']
How to Break Apple’s M1 Chip
How to Break Apple’s M1 Chip My bad Dropbox organization habits brought Apple’s power-sipping MacBook Air to its knees in just a single day Photo illustration: Pavlo Gonchar/SOPA Images/LightRocket/Getty Images I’m not a video editor, but I’m pretty sure I brutally slayed the M1 MacBook Air I just bought (8 GB RAM and a 512 gigabyte SSD, in case you were wondering) in less than a day of use. It comes down to two reasons, really: The lack of an Apple Silicon-optimized Dropbox app (it’s coming), and a solid decade-plus track record of bad habits in terms of how I store my old projects. So, here’s the deal: For a number of years, the content management system I used for my newsletter was based on Node.js, which generally stores its many parts inside of folders called node_modules . Any small Node applet inside of the primary Node app can also have a node_modules folder. These apps basically build on one another, creating a Russian nesting doll of sorts, each folder filled with its own tiny folders, of which there can be thousands included in a single directory. Dropbox traditionally has not handled lots of small files particularly well. And in the past, my Dropbox folder has brought computers large and small to their knees… and made them kick up the fans. Here was a fanless, highly optimized computer with a chipset said to allow for comically robust battery life — Apple claims 18 hours, and many reviewers have gotten close to that. How’d it do? Well at first, I thought it was holding its own. However, I soon noticed that Dropbox was taking up three-quarters of my RAM, and that my swap file was reaching 10 gigabytes in size. Even after letting it sit around overnight to sync, it still wasn’t done — and worse, I started seeing beach balls. The culprit? All those dang node_modules folders. I would keep these old folders of my CMS around as backups of local installs on the off-chance something would break. (Eventually, I started compressing them.) At some point, it sucked up so much power I decided to unplug my laptop to see how fast it would drain the battery. In about two and a half hours, I managed to knock out two-thirds of the computer’s battery life, by which time I had made a point to remove syncing capabilities on every folder with node_modules as a subfolder. It seems on track for about four and a half hours while plugged into a monitor… even after all that. I’m a special case. I experiment a lot with random things, and my Dropbox folder is like a storage locker that’s never been organized. It is totally unfair to Dropbox to blame them for my bad storage habits. And on top of all that, it’s not fair to criticize Dropbox for having an app that appears to otherwise work fine on a brand-new architecture at a time when around 15% of Intel apps aren’t, according to “Does it ARM.” (And glitches are common; my web browser of choice, Vivaldi, does this on the Twitter website right now.)
https://debugger.medium.com/how-to-break-apples-m1-chip-ccdcbe3bf0df
['Ernie Smith']
2020-12-03 06:32:39.661000+00:00
['Apple', 'Tech', 'Macbook Air', 'Apple M1 Chip', 'Gadgets']
Why Reading Fredrik Backman’s Books Will Make You a Better Person
Why Reading Fredrik Backman’s Books Will Make You a Better Person These are the books you won’t be able to stop yourself from hugging after you’ve finished reading. Photo by Priscilla Du Preez on Unsplash If I had to name one writer who has had a great influence on me as a human being, it would be Fredrik Backman. I discovered his books a few years ago through my mom — she’s my Goodreads when it comes to book recommendations — and I’m very thankful for that. All of Backman’s books focus on the amazingly complicated, beautiful, funny, unexpected, and sometimes tragic experience that we call life, while featuring quirky characters who try to do their best at living it. That’s exactly where the magic of these books lies. These are the stories that will take you through the ups and downs of life and leave you wanting to hug the book, the cat, the dog, and really every other person within the perimeter of your house. You’ll most likely find yourself laughing out loud at page twelve, only to feel something tugging lightly at your heartstrings a few pages later that reminds you of all those little things that make us so fragile. Without giving too much away, here are 5 things you’ll learn from Fredrik Backman’s books that will help you become a better person:
https://medium.com/curious/why-reading-fredrik-backmans-novels-will-make-you-a-better-person-93404ebda1e8
['Anita Coltuneac']
2020-10-15 05:44:42.465000+00:00
['Book Recommendations', 'Self Improvement', 'Books', 'Life Lessons', 'Life']
friday lost and found: touch my lion edition
[Image caption: “Touch my Lion! Love it! Liebe mein affe-Lion!”] Apple released Lion this week and MG Siegler immediately made sweet, sweet love to it. More useful than @parislemon’s 3,000-word opus: Greg Kumparak’s “Nine Things You Should Do After Installing Lion” and Cult of Mac’s ludicrously comprehensive, six-page review/manual. Via TechCrunch, Cult of Mac *** Speaking at Fortune Brainstorm Tech, Twitter CEO Dick Costolo said the company will eventually offer self-serve ads for brands and is looking to get into the commerce game. Twitter, he said, wants to “remove friction” from the process of buying stuff on the platform. If I may offer a humble suggestion from a lowly non-profit marketer: Can you please also help “remove friction” from the process of donating to charities through Twitter? Via Fortune *** Mmm…. That’s some good Gojee. I’ve seen a lot of people using Gojee recently and am trying it out myself — I’ve been in a bit of a cooking rut these days. The service lets users input ingredients they have in their pantry and spits out recipes (and gorgeous photos) based on those ingredients. I wonder what it will suggest for pita chips, Kraft mac & cheese and craisins? Seriously though, I just typed in “black beans” and favorited 10 recipes. This thing is seriously addictive. *** Dude… your Prius is pretty badass. Toyota responds to the Honda CR-Z Sport hybrid with the Prius Performance Package. For an extra three grand, you get front and rear spoilers, 17-inch alloy wheels, custom tires, a tuned rear sway bar and…. wait for it… limited edition floor mats. w00t! Via PSFK *** Finally, start your weekend off right with this awesome video of the Sesame Street Muppets rockin’ the Sure Shot by the Beastie Boys: Sesame Street breaks it down from Wonderful Creative on Vimeo. Via LaughingSquid
https://medium.com/david-connell/friday-lost-and-found-touch-my-lion-edition-a202f72d24c7
['David Connell']
2016-08-20 13:06:15.101000+00:00
['Nonprofit', 'Technology', 'Green Tech', 'Environment', 'Links']
How do pre-trained models work?
Introduction In most of my deep learning projects, I’ve used pre-trained models. I’ve also mentioned that it is generally a good idea to start with them instead of training from scratch. In this article, I’ll provide an elaborate explanation for the same, and in the process help you understand most of the code snippets. At the end of the article, I will also talk about a technique in computer vision that helps improve the performance of your model. Let’s get started. Training neural networks When we train a neural network, the initial layers of the network can identify really simple things. Say a straight line or a slanted one. Something really basic. As we go deeper into the network, we can identify more sophisticated things. Layer 2 can identify shapes like squares or circles. Layer 3 can identify intricate patterns. And finally, the deepest layers of the network can identify things like dog faces. It can identify these things because the weights of our model are set to certain values. Resnet34 is one such model. It is trained to classify 1000 categories of images. The intuition for using pretrained models Now think about this. If you want to train a classifier, any classifier, the initial layers are going to detect slant lines no matter what you want to classify. Hence, it does not make sense to train them every time you create a neural network. It is only the final layers of our network, the layers that learn to identify classes specific to your project that need training. Hence what we do is, we take a Resnet34, and remove its final layers. We then add some layers of our own to it (randomly initialized) and train this new model. Before we look at how we do this in code, I’d like to mention that pretrained models are usually trained on large amounts of data and using resources that aren’t usually available to everyone. Take ImageNet for example. It contains over 14 million images with 1.2 million of them assigned to one of a 1000 categories. Hence it would be really beneficial for us to use these models. The code Now let’s take a look at how we do this in fastai. We start by loading a pretrained model. Initially, we only train the added layers. We do so because the weights of these layers are initialized to random values and need more training than the ResNet layers. Hence we freeze the ResNet and only train the rest of the network. Once we’ve trained the last layers a little, we can unfreeze the layers of Resnet34. We then find a good learning rate for the whole model and train it. Our learning rate plot against loss looks as follows. We don’t want our loss to increase. Hence, we choose a learning rate just before when the graph starts to rise (1e-04 here). The other option, and the one I have used is to select a slice. This means that if we had only 3 layers in our network, the first would train at a learning rate = 1e-6, the second at 1e-5 and the last one at 1e-4. Frameworks usually divide the layers of a network into groups and in that case, slicing would mean different groups train at different learning rates. We do this because we don’t want to update the values of our initial layers a lot, but we want to update our final layers by a considerable amount. Hence the slice. This concept of training different parts of a neural network at different learning rates is called discriminative learning, and is a relatively new concept in deep learning. We continue the process of unfreezing layers, finding a good learning rate and training some more till we get good results. 
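Since the original code snippets are not reproduced in this text, here is a minimal sketch of the workflow just described, written against the fastai v1 API that was current when the article was published. The dataset path, folder layout and epoch counts are assumptions for illustration; the calls themselves (cnn_learner with a pretrained ResNet34, training the frozen head, unfreeze, lr_find, and fit_one_cycle with a slice of learning rates) mirror the steps in the article.

```python
from fastai.vision import *  # fastai v1-style imports

# Assumption: images live in one folder per class under data/pets/
path = Path('data/pets')
data = (ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                   ds_tfms=get_transforms(), size=224)
        .normalize(imagenet_stats))

# Load a ResNet34 pretrained on ImageNet; fastai swaps the final layers
# for a new, randomly initialised head and freezes the pretrained body.
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)            # train only the new head

learn.unfreeze()                  # make the whole network trainable
learn.lr_find()                   # plot loss against learning rate
learn.recorder.plot()

# Discriminative learning rates: the earliest layer group trains at 1e-6,
# the final group at 1e-4, and groups in between are spread across the slice.
learn.fit_one_cycle(4, max_lr=slice(1e-6, 1e-4))
```

While the body is frozen, each fit call only updates the new head; after unfreeze, the slice spreads learning rates across the layer groups, which is the discriminative learning idea described above.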
Finally, pretrained models are not just available for computer vision applications but also other domains such as Natural Language Processing. We can now move on to tricks for computer vision projects.
https://towardsdatascience.com/how-do-pretrained-models-work-11fe2f64eaa2
['Dipam Vasani']
2019-12-04 04:12:10.895000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Science', 'Neural Networks', 'Deep Learning']
The Pandemic Won’t Kill My Love for Sports
The Pandemic Won’t Kill My Love for Sports But some things change when you age Photo By gilaxia Mistaken identity My daughter said something to me the other day that surprised me. “Daddy, do you play sports?” “Well of course I do”, I replied. “But I’ve never seen you play sports”, she said. Since the start of the Pandemic, I haven’t been able to play any organized sports. So, my 5-year-old, with her limited frame of reference, doesn’t recall a time when her father enjoyed sports. To her, my identity as her dad doesn’t include the 4 decades of hockey or basketball or any of the other team games that I’ve amassed in my sports résumé. This conversation made me realize that the way in which others perceive us is constantly changing. Nobody, not even my own child, sees me as I see myself. To me, I’m still a 12-year-old ice hockey goalie. Even though I haven’t really been that guy for decades. Me, at 12: Staring out the window of my dad’s Chrysler New Yorker, at the passing houses and cars, while the radio plays a song by Alphaville. It’s early Saturday morning and we’re on our way to ice hockey. “Forever young, I want to be forever young, Do you really want to live forever?” — Alphaville Dad generally stays silent on the 15-minute drive to the rink. So do I. This is a time for both of us to think about the upcoming game. In my town, in my league, on my team, I’m a budding superstar goalie. And nobody should interfere with the pre-game thoughts of a pre-teen prodigy and his dad. There will be lots of time to talk afterward, on the drive home. And we will. Sometimes we’ll chat about the game, or about how I played, or about something completely different. When we get home, mom will have pancakes ready for breakfast. My sister and I will sit on the shag carpet in the living room, with our plates on our laps, and eat them while watching Saturday morning cartoons. Scooby-doo. The Smurfs. Spiderman. Later, I’ll ride my bike over to my friend Darren’s house, and we’ll call the other kids in the neighborhood over to play a massive game of street hockey. And then, when it starts getting dark, I’ll ride my bike home, and eat dinner with my family. The Saturday night Toronto Maple Leafs hockey game will glow brightly on the TV for a few hours, and I’ll probably fall asleep watching it. This will be a great day. This will be a time in my life that I will probably remember forever. For me, these moments will happen every Saturday. But one day, they will stop happening. And when they stop happening, it will barely be noticeable. Because I will have started to grow up, and my life will have changed.
https://medium.com/in-fitness-and-in-health/i-dont-play-sports-right-now-because-of-the-pandemic-6bdc0a1a7d04
['Keith Dias']
2020-12-13 14:16:45.921000+00:00
['Fitness', 'Lifestyle', 'Sports', 'Psychology', 'Parenting']
3 Ways to Dynamically Render UI in a React Component
Photo by Denys Nevozhai on Unsplash Dynamic pages allow you to show users what they need without extra markup on the page diluting the user’s attention from what they are trying to do in your application. Writing components that allow for that dynamic behavior can quickly get complicated if you are expecting a myriad of user behaviors. I’m going to show you three ways you can add conditionals to your React components to render content dynamically (a reconstructed sketch of the components described below follows at the end of this piece). 1. The If-Else Condition Probably the most explicit and easy to read is the if-else conditional. Let’s use the component below as our example. A component with conditional rendering via if/else statement We have a boolean in state and render a completely different snippet of HTML based on the value of that boolean. Here we are using a simple example, but based on different combinations you could continue in a similar manner to create a variety of behavior. 2. The && Logical Operator The issue with the if/else statement is that you may have a certain piece of code that exists in multiple cases. This means you are going to have to re-write those snippets of code for each case where they are needed. This gives off a bad code smell and begs for something less brittle. In comes &&. Here’s the same component as above with a much more concise solution, but the same behavior. What’s happening under the hood? The condition this.state.display is checked, and when it is falsy the right side of the && operator is never evaluated, so nothing is rendered there. When the condition is truthy, the markup on the right side of the && is returned and made visible. If your component has a lot of if/else statements that return slightly different UI configurations, this method might be a better option to avoid repeating your code and being prone to errors. 3. The Ternary Operator The ternary or conditional operator is the only JavaScript operator that takes three operands. Its syntax follows this pattern: condition_to_test ? execute_when_truthy : execute_when_falsy Let’s use a different component for this one. This component takes a user’s input and renders something dynamically based on that input. The ternary expression handles all of our UI cases by checking for the default case first, then using a second ternary expression in the falsy branch to give us our second and third render options. I hope this comparison was helpful and can help you in your next project. Overall, I don’t think there is one option that is best for all situations here, and I think a combination of these techniques is probably the best way to go. Happy Hacking!
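The code snippets embedded in the original post are missing from this text, so here is a reconstructed sketch of the three patterns in plain React, using class components with this.state as the article describes. The component names, state fields and markup are hypothetical; only the three conditional-rendering techniques come from the article.

```jsx
import React from "react";

// 1. If/else: a boolean in state selects completely different markup.
class WelcomePanel extends React.Component {
  state = { display: true };

  render() {
    if (this.state.display) {
      return <h1>Welcome back!</h1>;
    } else {
      return (
        <button onClick={() => this.setState({ display: true })}>
          Show the welcome banner
        </button>
      );
    }
  }
}

// 2. && operator: the markup after && renders only when the condition is
// truthy; when it is falsy, React skips the right-hand side entirely.
class NoticePanel extends React.Component {
  state = { display: false };

  render() {
    return (
      <div>
        <button onClick={() => this.setState({ display: !this.state.display })}>
          Toggle notice
        </button>
        {this.state.display && <p>You have new messages.</p>}
      </div>
    );
  }
}

// 3. Ternary: check the default (empty input) case first, then nest a second
// ternary in the falsy branch to cover the remaining two render options.
class GreetingPanel extends React.Component {
  state = { name: "" };

  render() {
    const { name } = this.state;
    return (
      <div>
        <input
          value={name}
          onChange={(event) => this.setState({ name: event.target.value })}
        />
        {name === "" ? (
          <p>Please tell us your name.</p>
        ) : name.length < 3 ? (
          <p>Keep typing…</p>
        ) : (
          <p>Hello, {name}!</p>
        )}
      </div>
    );
  }
}

export { WelcomePanel, NoticePanel, GreetingPanel };
```

As in the article, the last component handles its default case first and nests a second ternary in the falsy branch, while the && version stays concise because the shared wrapper markup is written only once.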
https://robert-keller22.medium.com/3-ways-to-dynamically-render-ui-in-a-react-component-99881474f378
['Robert K.']
2020-11-16 03:21:02.159000+00:00
['Coding', 'JavaScript', 'React', 'Programming']
Pica Peekaboo
Photo by Ignacio Campo on Unsplash I’m happy she says as her world unravels I’m happy she says as she eats the inedible I’m content she says as she strokes the twins hair I’m content she says when the voices whisper they care I’m fulfilled she says as the blood runs down her throat I’m fulfilled she says as the object sinks without a hope The baby grows The baby grows The mothers’ life spirals out of control I’m okay she says as hubby brings the boys home I’m okay she says as voice falls flat as a monotone I’m happy she says when hubby talks over her I’m happy she says keeping her composure I’m agreeing she says when the colleague asks for a hug I’m agreeing she says nodding with a shrug The baby grows The baby grows Mother longs to be hold I’m sharing she says as the sun burns her eyes I’m sharing she says as hubby shouts as she cries I’m sorry she says when the doctors pull out the pin I’m sorry she says as her weight falls wafer thin I’m feeling good I guess I’m feeling content with the food I digest The baby grows Healthy and strong But mother knows something is wrong I can do it I’m in control Leave me to it The addiction makes me whole Swallow I did things I shouldn’t have A needle A rock A battery Swallow They made me feel in control The baby grows Hubby wants the child The baby grows Mother feels defiled Soon the baby will arrive And all she can do is smile
https://medium.com/the-bad-influence/pica-peekaboo-3322df6379f5
['Reuben Salsa']
2020-05-06 15:45:20.354000+00:00
['The Bad Influence', 'Poetry', 'Poem', 'Mental Health', 'Parenthood']
Paul Romer on a Culture of Science and Working Hard (Ep. 96)
Paul Romer on a Culture of Science and Working Hard (Ep. 96) Paul Romer makes his second appearance to discuss the failings of economics, how his mass testing plan for COVID-19 would work, what aspect of epidemiology concern him, how the FDA is slowing a better response, his ideas for reopening schools and Major League Baseball, where he agrees with Weyl’s test plan, why charter cities need a new name, what went wrong with Honduras, the development trajectory for sub-Saharan Africa, how he’d reform the World Bank, the underrated benefits of a culture of science, his heartening takeaway about human nature from his experience at Burning Man, and more. Listen to the full conversation You can also watch a video of the conversation here. Read the full conversation TYLER COWEN: Hello, everyone. Today I am chatting once again with Paul Romer, who needs no introduction. Paul, welcome. PAUL ROMER: Good to be here. COWEN: You have a recent article in the periodical Foreign Affairs about the failings of economics. Let me try to defend the economics profession. Tell me what you think. If I look at the big catch-up winners over the last few decades, it seems to me it’s Poland and Ireland, and they basically followed a neoliberal recipe. They more or less did what economists told them to do. What’s the failure in that? ROMER: By the way, what about China? China caught up pretty well too, but they followed some of the basic insights from economics. COWEN: And the Solow model. ROMER: Yeah. But the origins of that article were that I read some books that said economists got a lot more influence and things got worse in the United States, and this was a really troubling argument for me because it’s not easy to dismiss. What I concluded in that article saying we should do a cost-benefit analysis — look at the big things that economics has done well, the things it may have done badly, and just see how it works out. The point you’re alluding to is something that my colleague, Peter Henry, has also made, which is that one of the areas where economics may really have been helpful is in the development process or the catch-up phase of growth. So that should go on the plus side, I think, on the benefit side of the cost-benefit analysis, no question there. And I think there’s some other ones that belong there too. My point was that there may have been some things that have also been significant negatives, and it’s time to do the numbers and see what the net is. COWEN: So if I ask myself, “What do I think has been the biggest negative?” I suppose I would say around 2000 date, economists for the most part did not understand the importance of the shadow banking system. What seemed to be a kind of ordinary real estate bubble, like the early 1990s, was far, far worse, and we totally missed that. That seems to be a defect of institutional knowledge, but you tell me what you think the greatest problem has been. ROMER: I think this problem is an interesting one. I put a slightly different spin on it, but I think it’s in the class of things of a failure to understand or incomplete understanding. I don’t think that’s a sign of a science that’s failed. That’s a sign of a science which is just making progress. There’s some things it knows and things it doesn’t know. So I don’t view this one as a sign of a systemic problem that we’re not doing it right, in a sense. 
For what it’s worth, we can come back and talk about this, but I think the lesson from the financial crisis, which we’re learning again now, is one about the fragility of extensive interconnection. We’ve paid attention to optimize efficiency with massive reliance on specialization and these complicated supply chains. But the growth, the proliferation of connection means that our system is more fragile than we realize. A shock comes, and things happen that we didn’t anticipate. But again, that’s part of learning about a very new type of economy which is changing in real time. The ones that struck me as being particularly worrisome were, first, I think the negative effect that economists have had in terms of protecting competition. Through the law and economics movement, we ratified this notion that big is okay as long as you can make some case that it’s efficient. The upshot is, is that I think because of technical economics and the arguments of economists, antitrust is much more tolerant now of dominant firms, and if we believe that competition’s good in a whole bunch of ways, this could actually be very, very harmful. So that’s one. COWEN: Doesn’t Amazon look pretty good right now in the midst of the pandemic? Do you wish we had split it up into different parts? ROMER: My sense is that we’d be better off if we had five Amazons instead of one. And I don’t see why we couldn’t have five Amazons if we, as voters, say, “This is the kind of society we want to live in. Let’s just aim for that.” And same thing — the more worrisome positions are those of the tech firms that are so deeply connected now to many aspects of our lives and where there’s really very little competition and a lot of opacity about what these firms actually do. COWEN: Let me try to defend the economics profession a bit more. If we look at climate policy, a lot of economists have recommended a carbon tax — not quite a consensus, but a very common view. Now, of course we haven’t done it, but it seems to me the profession, in some manner, is essentially correct there. So you would side with the profession on that? ROMER: Yeah. Again, in some sense, the main point of the article that I’m making is that economists need to accept that our role is that of the technical adviser. We can say, “If you apply a carbon tax, carbon emissions will go down. Here’s what other effects we think they’ll have. But it’s up to you, the voters, to decide whether you want to follow that policy or not.” So, if the voters don’t follow us, I think, to a first approximation, that’s not really our responsibility. And what I’m critical of is this tendency for economists to assume the responsibility of philosopher king and say to voters, “Well, we know better what a society should be like, what society should do. Listen to us. We’ll tell you the way things should be. We’ll tell you what you should do.” And, in truth, I think we get into that mode a lot more than we realize. Certainly, some members of the profession get into that mode. And I think they’ve done, really, quite a bit of harm when they did that. COWEN: When you, as a voter, judge policies, what normative or philosophical standard do you use? ROMER: Well, I think all of us have some notions about self-interest, and then the well-being of those around us. Everything else equal, if our position is the same, we’re somewhat happier if those around us are happier as well. 
Different of us have either a bigger or smaller circle of those we care about, so there’s some mixture of making sure everybody is doing okay, and then making sure I’m doing okay. That’s the first thing that I look at as a voter. But I also look at — this is a little bit of a tangent relative to your question — but I also look at the question of, in what direction will this policy take our norms, our beliefs about right and wrong? I think those change. I think there’s some beliefs about right and wrong which are better in an objective sense, in the sense that we economists think of in terms of efficiency. If everybody thought this was the right thing to do, then we would actually all be better off in some objective sense. So those issues weigh heavily in my thinking about policies. Economists have been a little slow to take those up, but frankly, I guess that is a part of the problem with economic analysis. Because many of the arguments about, say, allowing the market to run and giving people more freedom make more sense if, when you do that, you don’t change norms. But if, when you do that, you encourage norms that are destructive, that kind of laissez-faire approach can be harmful. Let me take a trivial example. Suppose the promulgation of laissez-faire makes everybody feel like it’s okay to litter. We used to have a norm that we shouldn’t litter because it was inconsiderate and it was just wrong. Laissez-faire convinces us I can litter if I want to. It’s somebody else’s problem to deal with the litter. That kind of laissez-faire would be bad because we’d live in a world that was full of trash all over the place. And I think in more important areas, economists have been inattentive to the effects that their policies have had on norms. COWEN: But if you take, say, litter, why wouldn’t the economic approach be either create a private property right, which we do sometimes? Other times that’s not possible, so we want something like a Pigouvian tax or a cap-and-trade. And your view, then, is not really far from the standard economics view, if at all. ROMER: No, but I think, actually, there’s enormous value in norms that are kind of self-enforcing. Suppose people think litter’s bad. Suppose they think it’s bad when other people litter. So they’ll kind of scold or criticize when they see somebody litter. Then, without police powers, without courts, without taxes, you actually get the outcome we want, which is we live in a world with no litter. And if we lost those norms, we’ve got to overlay these more heavy, expensive governmental solutions. COWEN: Let’s look at macroeconomics. If I look at the current crisis, which is turning into a depression, it seems to me we were on the verge of a financial implosion in March. The Fed acted to limit that. The macroeconomic response, to me, from the Fed seems to be quite good. So, isn’t all well in macroeconomics? ROMER: Yeah. When I was talking about this cost-benefit exercise, one of the positives we mentioned was in the sphere of development and catch-up. I think another is in stabilization policy. The kind of practical macro policy as practiced at the Fed is much better now than it was during the 1930s, and we get real benefits from that. I think it’s good that the Fed is trying to make sure that we don’t have this cascade of bankruptcies. There’s no question that economists have learned something and contributed to society. 
Just as a side note, I’ve been critical of the more theoretical, rational-expectations macro, but set that aside because that really hasn’t had that much impact on policy. So macro policy as practiced at the Fed or as practiced by the Congress right now is, I think, a reflection of things we’ve learned relative to, say, the 1930s, and that’s good. But let me come back to what are the minus sides of this balance sheet. I talked about antitrust and the failure of competition policy. The other one is in regulation. If you ask me who’s my representative of somebody, an economist who overstepped, overreached, and did real harm, it’s Alan Greenspan. Greenspan was this tireless advocate for cutting regulation. He was quoted at one point, saying he’s never met a regulation that he thought was valuable. And he played a very important role in deregulating the financial system for decades running up to the financial crisis. That financial crisis cost us an enormous amount worldwide, and it’s because we unwound systems of regulation that kept our financial system from being as fragile as it’s become. You look across the board at other types of regulation — we’ve failed to support the kinds of regulations that we need alongside of Pigouvian taxes. There’s some bad things that people do, bad in a sense they’re inefficient. We can try and tax them. Other times you just use regulation. But one way or the other, we collectively want to stop people from doing things that are harmful. COWEN: But if you look at the profession as a whole, wouldn’t most economists agree that, say, tax preferences for owner-occupied housing are a bad idea? And various other subsidies built into the system for housing are a bad idea? If they had been listened to on that, we still might’ve had a crisis of some kind, but it would have been far smaller. Scott Sumner has argued if we had targeted nominal GDP, the crisis would have been milder. You’re picking a bit on the one thing that Greenspan got wrong, but there’s many other things economists have said that would’ve made it much better. ROMER: Yeah, but we still should have been saying, “Given the choices that voters are making, which reflect your preferences as voters, like supporting owner-occupied housing, the regulatory choices that we’re recommending as economists are actually exposing us to just massive, massive harm.” I don’t remember the number off the top of my head. Haldane did some calculation where the worldwide cost of the financial crisis was, I think, in the hundreds of trillions of dollars. This is a really huge, huge mistake. And we’re still, I think, exposed to a financial system which could just blow up on us at any point in time. It’s part of why the Fed has to be so active right now with providing funds. So deregulation, especially of financial markets, I think was harmful, and competition policy was a failure. And then the bottom line is, you just look at one of the most basic ways to measure progress: how long do people live? People in the United States are not living as long as they used to. Life expectancy is declining, and life expectancy hasn’t been keeping up with other nations around the world. COWEN: Sure. Is that a failure of economists? ROMER: Well, I think partly. When the pharmaceutical firms that were trying to make money off of Oxycontin and these opioid-based painkillers — when they went to Congress to try and stop the DEA from shutting them down, what they used was the language of economics. “You have to have innovation. 
You’ve got to let the market proceed. There’ll be some creative destruction, but you have to let us do our thing. You can’t interfere — it will be bad for growth.” So, to the extent that we lent cover, indirectly, for those kinds of arguments against regulation so firms could make money killing people, we really did something bad. COWEN: But again, there are people out there who have misused your ideas or misrepresented them. ROMER: Oh, yeah. COWEN: I think they’ve done the same with mine. I don’t blame you at all for that, right? If something bad happens with a charter city, I don’t say, “Oh, Paul Romer gave them cover.” I say, “No, it’s the fault of the people who did it.” So I would say economists were pretty much not to blame for the opioid crisis. ROMER: There’s a speech Greenspan gave, where he doesn’t cite me, but he could. It’s all about “We can’t have regulation because we’ve got to have growth. Growth comes from innovation. Regulation slows innovation. It’ll stop creative destruction. We just have to live with greater destruction.” So, I feel like, yeah, some of my ideas could have been used to support bad policy, but instead of asking whether I’m personally to blame or personally a bad person, what I’m stepping back and asking is, did we create a system that let someone like Greenspan make recommendations under the cover of science? Like, “I’m a scientist, I’m telling you how it should go.” But those recommendations were really based on a worldview he got from a novel by Ayn Rand. There was no technical scientific basis for them. And they turned out to be really incredibly harmful, so we need to make sure that the system that we’re building isn’t misused in that way. COWEN: Do you think there should be an obsession with math GRE scores when admitting people into graduate programs in economics? We know there is, right? ROMER: Yeah, yeah. Well, it’s not the only — put it this way . . . COWEN: Is that part of the problem, or is that how we need to do it? ROMER: It’s not the only thing I think we should be looking at. And I’m not sure what are the other predictors, but I don’t think just practice in math is going to lead to a successful career in economics. COWEN: You’ve been interacting a lot with epidemiologists due, in part, to your arguments for testing. What’s your opinion of that field? ROMER: There’s actually an interesting parallel in epidemiology with a technical kind of issue in economics. In macro, we shifted towards model-based reasoning about macroeconomics. So representative agent — the whole rational-expectations movement was a shift towards “Let’s see what the models say,” rather than “Let’s see what the data say.” In epidemiology, there’s a very well-established model — this SIR model that is behind a lot of these predictions, but there’s an alternative — the Institute for Health Metrics and Evaluation, which has this model that’s been very influential, widely watched these days. The IHME is using a much more data-driven approach, kind of a curve-fitting approach. It’s almost like old-style Keynesian macro, where you just say, “Well, let’s just fit something to the numbers and see what comes out of that without imposing a lot of theory onto the estimation process.” And I just found it interesting to see that tension in another field, from the outside. The way it looks to me is, it’s good to have both of those wings active in a discipline. And it’s good to have them in contention with each other. 
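[Ed. note: for readers who want to see the distinction Romer draws here, below is a minimal, illustrative Python sketch — with hypothetical parameter values, not anything published by Romer or by IHME — of a mechanistic SIR simulation placed next to a simple curve-fit extrapolation.]

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, N, dt=1.0):
    """One Euler step of the classic SIR equations:
    dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I."""
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    return S - new_infections, I + new_infections - new_recoveries, R + new_recoveries

# Model-based forecast: assume transmission and recovery rates, then simulate forward.
N, S, I, R = 1_000_000, 999_000, 1_000, 0
beta, gamma = 0.25, 0.10   # hypothetical rates (implied R0 = beta/gamma = 2.5)
model_curve = []
for _ in range(180):
    S, I, R = sir_step(S, I, R, beta, gamma, N)
    model_curve.append(I)

# Data-driven alternative (curve-fitting in spirit): fit a curve to observed counts
# and extrapolate, without imposing the SIR structure on the estimation.
observed = np.array(model_curve[:30]) * (1 + 0.05 * np.random.randn(30))  # stand-in for real data
coeffs = np.polyfit(np.arange(30), np.log(observed), deg=2)               # log-quadratic fit
fitted_curve = np.exp(np.polyval(coeffs, np.arange(180)))
```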
If I have a criticism of macro in economics, it’s like the criticism in epidemiology — we may have biased things a little bit too much towards the models, and we’re not giving enough weight to just the facts themselves. And I think that’s because it’s actually easier to do models than to look at data, so we need to have a little bit of collective pressure to, “Yeah, yeah, that’s what your theory says, but let’s look at the numbers.” On Romer’s COVID-19 testing plan COWEN: Many people have supported mass testing plans. Of course, you’ve been in the lead here. Why do you think they’re not getting more support? Because the benefit-cost ratio, if you can pull it off, seems to be quite high. ROMER: Yeah. I’ve actually been working — oddly, mind you, I’m the theorist criticizing the use of models, so, go figure — but what have I been doing recently? I’ve been using a model to try and figure out, what is actually the value of an additional test relative to its cost? Models definitely have their role, but you’ve got to stick to the idea that a fact beats a theory every time. Now, the thing is, just very clear, that a test is worth a lot more than it would cost us to provide. Why aren’t we delivering more? I think there’s a genuine confusion and puzzlement about how to increase the supply of tests. And because people don’t know how to go about increasing it, they say it’s not possible. They treat it as if it’s just something beyond our control. I think we have to look carefully at what changes should we make in policy to increase the supply of tests. One part of that, as I’ve been saying, is we just have to pay for them. If we put up enough money, we can get tests. It’s like I’ve been saying: if we spent about twice as much on tests as we spend on soda, we could have all the tests we need, like 23 million tests a day. So first, you’ve got to provide money. Because tests are a public good, this has got to be money that comes from the government. It’s hard to get there with having consumers pay. But the Congress has allocated $25 billion. There’s a proposal now for another $75 billion from the Democratic plan. So, we’re getting there on the money. The other side, which frankly, I’m in this position — think about what I was saying before — I’m generally in the position of defending regulation. The more I’ve looked at the role of the FDA in holding back progress in testing, the more I’ve concluded this is a case where we have to say, as economists often say, “This regulation is just getting the cost-benefit tradeoff wrong.” It’s way too restrictive. There’s little harm from tests. Tests don’t hurt people. It’s not like a vaccine or a pharmaceutical agent, so that the FDA is just needlessly slowing down innovation that could otherwise flourish. So, pay some money, and then get the FDA out of the way. And then all of these very clever researchers and university labs all across this country — they could give us all the tests we need. COWEN: Germany has done a great deal with testing, as you know. But at least now, as we’re speaking, as of, I think, May 12th, their R is still over one. Does that worry you? Does testing really get you into the promised land if you’re not a small island? ROMER: It does worry me. And I don’t think that, as well as they’ve done . . . Let’s just pause for a second because this is a little tricky. 
If R is equal to one, that means that the number of deaths, the number of infections will stay constant over time. So you can have R0 equal to one at a low level of infection and low rate of deaths, which is where Germany is. We have R0 about equal to one at a higher rate of death. But in any case, it is worrisome because what we want for suppression is R0 significantly less than one. And Germany is not testing at the scale that I would propose, and I’m afraid that the way to get there is that even Germany is going to have to do more testing, including more testing where you’re just kind of fishing for people who were infected. You just test people who are asymptomatic. You’ve got no indication that they were in contact, but you test them anyway because that’s the only way to find some of the people who are currently infected. COWEN: Do you worry that some of the countries that have done the best with testing have combined it with forced quarantine and that maybe you need forced quarantine for testing to work? ROMER: Well, I was critical. It’s been very funny to go from that real kind of angst, almost like crisis of the review I wrote of what economists have done, but then to shift into economist mode where I think we can actually provide some real benefit and some clarity in these conversations. The way I frame this on testing is, first ask, what would be the value of a particular piece of information? How valuable would it be if we know who’s infected and who’s not? Then, given that information, separately let’s think about, what’s a good way to use that information? And I think there’s some open questions about how best to use this. I have some colleagues who I’ve written a paper on because they were also promoting this idea of test everyone. Their view is that one of the ways we might do this is, at home, get devices that can test you at home, so everybody finds out if they’re infected or not. Their attitude is, that may be all you need to do because once people know they’re infected, they’ll take decisions, take actions to protect their colleagues, the people they know. Most people are responsible. Most people don’t want to inflict harm on others. They may well just self-isolate. So maybe with enough testing, we just let everybody know, are you infectious or not? And that’s all we have to do. You could go to the other extreme and have some government system where the test system has to report every positive, and the government forces quarantine on people. I don’t think you need that. I don’t think you’d get that much benefit out of that. And it’s got a lot of potential costs. So let’s get that information, and then let’s use it very gently. First, just let people know and let them adjust. Second, maybe give someone like . . . I keep talking about, recovery means I can actually go back to the dentist. Maybe my dentist will say, “Paul, I don’t want to be working in your mouth — and you can’t be wearing a mask when you’re in the dental chair — unless you have a recent test that shows that you’re not positive. 
Then fine, you can come on in, and I can work on your teeth.” We might give other people the right and the ability to say, “There’s certain things you can’t do, certain services you can’t have access to unless you can show that you got a negative.” Restaurants might offer sit-down meals but say, “You can only get a reservation if you can show us you’ve got a negative test result in the last couple of days.” We can use this information in ways that I think aren’t very oppressive, aren’t very risky. They could let us go back to going to the dentist and having restaurant meals and do it without big risks to our freedoms. COWEN: Take the people who test positive. It seems that at some point they’re likely to be immune, and in a sense, they’re more valuable as workers. ROMER: Yeah. COWEN: When do we give them the clear? I read papers, “Oh, you can be infectious for up to five weeks, maybe more.” We’re in a very risk-averse society. Don’t you run the risk by getting a test at all that, in essence, you end up locked out of polite society? ROMER: Well, again, this is where I’m defending science and economics as science. Here’s really the science of medicine. We need to help everybody know, “Here’s what the facts are. Based on these kinds of signals or this elapsed time, you can be confident that a person is not infectious any longer.” And then people may still have some emotional aversive reactions. But I think if we can just credibly provide the facts, then that will start to change practice, and practice will start to change some of those deeper emotions. COWEN: Should there be a liability waiver for businesses that test their employees? We all know there are false negatives and positives, in fact. ROMER: Yeah. COWEN: Say your business tests you. They tell you you don’t have it. It turns out you do have it. You infect your spouse. Should there be a liability waiver to encourage this testing? ROMER: You know, for vaccines, we created a special compensation mechanism so that, instead of litigating, somebody who’s harmed by taking a vaccine — because there’s a small fraction of the population that has a negative side effect — there’s a separate compensation mechanism. I think there are many reasons to think that our judicial system is an ineffective way to address a harm or to provide insurance. And it slows down many important things that we need to do. But I’d be more in favor of a broader look at ways to improve the functioning of the judicial system, rather than just do a . . . Actually, I don’t have a strong view on this. It may be that to move quickly, we want to have a special patch related to what firms do with test information. But I don’t think we should stop there. We really should be asking, how can we tune the judicial system to make it work better? COWEN: Could it be that litigation is the ultimate reason why America is so slow in testing? That any big push for anything — someone can raise their hand and object. Someone could sue. “Well, this violates The Health Insurance Privacy Act (HIPAA).” I’m not even sure it does, but you would need a ruling. Someone sues on disabilities regulation. “Oh, I need to have this app. I can’t read it.” Someone sues about masks. “Well, I can’t do lip reading.” The actual solution — something we’re far from — and that’s to clear away all this emphasis on litigation in American policy. And economists have been mostly right about that, too. Or not? ROMER: Yeah, yeah. 
Well, my dive into testing has persuaded me that the FDA is far more important as a force that’s slowing down progress there. There’s been speculation about lawsuits, but there’s really little indication that those will materialize. And the people I talk to who can’t do things they want to do in testing are failing to do it because of the concern about the FDA. So I don’t think the facts support that litigation is the big threat here. Also, in terms of moving quickly, I think one of the things we could leverage — because this is a public good — is the sovereign immunity of the states. I think the states can actually just purchase the test, say, with money they get from the feds and then even give instructions about “here are ways to use these tests.” Those could even be regulations. “Here is what you have to do. If you’re a restaurant, if your employees test negative, you can open. You have legal permission to open. And you have to require that people test negative. But if you do, that’s fine.” If somebody who tests negative goes to restaurants and other people get infected because of that, the restaurant could actually have the protection of the mandate from the state that this is what you should do to protect public health. So I think the states could actually provide cover for firms to do — and individuals to do — what’s best in terms of how to use this test information. COWEN: Let’s say we make you testing czar, and the Romer regime is put into place. Over the first month, what percentage of Americans do you think would show up to be tested? ROMER: Well, I would try and do a calculation about where might tests be most valuable. And if the states are the ones who are buying tests and providing them, encourage states to use them for those high-value purposes. I think frequent testing in nursing homes might be all it would take to cut the death rate in half right now. The estimates are that as many as half of the deaths are actually taking place in nursing homes. And it seems to me that there’s no hope for contact tracing there. There’s talk about rebuilding all of the nursing homes. That’s not going to happen anytime soon. But if you tested everybody, initially every day so you know exactly who’s infectious inside a nursing home — test all of the staff; test all of the visitors — then we should be able to isolate the few who are infectious and really bring down the deaths in nursing homes. So I’d use those, there, first. Second, I think it would be great to get Major League Baseball started again. I think we should use the relatively small number of tests it would take to test all the baseball players every day, and let them start playing games in empty stadiums, because you need a lot more tests to test the fans, to have them come in. But we could be playing baseball in empty stadiums without any risks that we’re increasing this R factor, and people enjoy baseball. It would be an important signal of how we go back to work in this regime. I think that could be an important complement to nursing homes. To keep going — COWEN: There’s a study out. I think it came out May 10th or May 11th, and they did test everyone in Major League Baseball — a lot of the staff, not just the players. And hardly any of them are COVID positive. We tested so many of the NBA players. But given those sports are still not reopening, doesn’t that mean testing isn’t enough? ROMER: Well, nobody made a plan, which says . . . 
Look, there was an initial plan, which is put all the baseball players in a big dome or something and isolate them. But obviously, they don’t want that. They’re going to be going home to their families. Some of them are going to get infected in their families, so you need a plan for testing and retesting the baseball players if you want to make sure that one player doesn’t infect another. You need to do some calculations about, how frequently do we need to test? Also, you alluded to this point before, which is, how do we have to respond when somebody tests positive? How long should they be isolated, both from other players but also from the general public? But if we just put together the plan, I think we could safely restart baseball and do it with confidence, knowing that we’re not going to increase the number of infections. COWEN: Not under the Romer regime, but in the world we live in, can we reopen our colleges and universities for this coming fall? ROMER: Well that’s one of my list of plans to actually work out. So there’s Major League Baseball, but then universities and then K–12 education, I think, are the next two. Part of the reality is that people are afraid of opening universities and K–12 education right now. If we had the tests, we can show everyone, if you test people frequently, isolate the few who are infectious as soon as you find out that they’re infectious, you can actually let people start to interact again without raising this R number. So I think it’s totally possible to reopen universities and reopen schools. The universities — you may make some adjustments beyond just test and isolate. It may be that a 300-person lecture hall, unless it’s well ventilated, is just too risky because even just one person who’s infectious could infect many more, and we’d have to see if that’s true. So you might have to have better ventilation or not have those big lecture halls, but we could surely restart university education, restart K–12. And these would be very important things to do because we know how valuable human capital is. We know how high the returns are to those kinds of investments. I said before, I was doing some calculations the last couple of days. The calculation I’m looking at is for each unit of testing capacity. And if we could test one more person each day, how many more jobs or how many more people could reenter, return to their previous activities? The model suggests that it’s about nine. Testing one person per day throughout the year would free up about nine people who could go back to doing what they were doing before, get out of the shelter-in-place rules and have no net effect on the reproduction number R because the tests depress it. More people in circulation raise it. You just set those numbers so that they balance each other out. Nine economically active people is worth a hell of a lot more than it costs to provide one test a day for a year. On Romer’s plan vs. Weyl’s COWEN: How does your testing idea differ from Glen Weyl’s testing idea? ROMER: I think Glen and I are in agreement that tests are very valuable. Glen thinks that we can target the tests. I’m saying just test everybody on a regular basis. Glen is saying you don’t have to test everybody. What you can do is target your tests to people who are more likely to be infectious. I agree that if you can target tests effectively, then you don’t have to test as many people because really all you have to do is find enough positives and get them into isolation. 
But I think Glen is assuming that the way we’re going to predict safely and reliably who’s infectious is through apps that do digital contact tracing, and I’m skeptical that that’s going to be ready in time and ready in the sense that everybody will be comfortable using it. I’m saying if we want to have a plan that we know we can execute now, where we know we’re not going to have a divisive fight and get stuck because we can’t make a decision, the way to do that is just don’t make the digital contact tracing part of the critical path. Just create a path where we get there, whether or not apps could work. If it works, great. I’m not opposed to targeting the tests if you’ve got a good way to do it, but don’t make that a requirement for the path that protects us all. COWEN: Would you ever get involved in another charter city project? ROMER: Actually, before I leave the testing — one of the things, as economists and scientists, I think we really can usefully bring to these debates is just quantification, just talk about the numbers. This morning, I was trying to think the best estimate, say, from New York State is that this infection fatality rate is about a half a percent. So if a half a percent of the people who catch an infection die, and if you look at where we have got about 2,000 people a day in the United States who are dying, that means there’re about 400,000 people a day who are newly infected. Now, each of those 400,000 people has, say, 10 contacts, which I think is modest — it could be more. That means that there’s 4 million people a day that you’ve got to go out and find with your contact tracers. I’m not sure we’ve got the capacity to do that. But the real point here is that whether you follow Glen’s model or my model, you’re already up to 4 million tests a day, which is 10 times the capacity we’ve got. So let’s not even argue about whether Glen’s right or I’m right. Let’s just get a lot more tests because both of us think we need way more than we have. On trying again with charter cities COWEN: Okay. Charter cities — would you try it again someday? ROMER: Sure, absolutely. COWEN: Under what conditions? ROMER: I might rename it. I don’t know that, in communicating the idea, that charter city is the best name, but I think the idea is still a compelling alternative. And unfortunately — maybe this is now my shtick — it’s like a $100 billion a year on testing. It’s an unpleasant, bad idea that nobody likes, but it’s just better than the alternatives. The same thing is true, I think, on migration flows, as on dealing with the pandemic, which is that the alternatives are so terrible, the best option may be something that’s kind of bad, it’s kind of expensive — we just do it anyway. I’m not sure we should call it charter city, but the thing I think we could do is create new cities that would solve the current impasse, where you’ve got 750 million people who say they want to leave the countries where they currently live, and the existing countries that say, “We don’t want to take that many people.” So I’m saying, “Okay, what’s the middle ground here? What’s the deal we could do? Let’s create some new places that are still places that people want to go to, but where nobody in existing countries feels threatened by the creation of those new places. And let’s try and offer that as a solution to what seems like this impasse.” COWEN: How do you frame what happened in Honduras, conceptually? 
ROMER: I thought in selling this idea, to do this, you’d need both some country that is willing to volunteer or supply the location for a new jurisdiction, and then some countries — a country, or more than one country — that can help establish the new jurisdiction, like its legal systems and so forth, and administrative — all the systems you’d need. I thought the biggest risk constraint on this idea was that it would be hard to find countries that would be willing to say, “You could use our land to start something new.” So I spent time in Madagascar. I spent time in Honduras. They were actually willing to try this, but what I think, in retrospect, I should have done, and what I’ll do now, is go first to the countries which are willing to help set something up because a country like Honduras was not — the reason it was willing to do something radical like a new charter city was that it did not have the internal capacity to do something like a charter city. What went wrong was that we couldn’t get sufficient participation from outside of Honduras in setting this up. And then, frankly, in Honduras there was a little bit of lack of transparency. They didn’t really want outsiders either because it was kind of a small group that actually wanted to set these things up and control it internally. So I think the scarce player, the short side of this market is going to be countries that are willing to say, “We will help set up a new place that people can go to.” It’s the citizens of those potential countries that we need to persuade. This would be worth trying. And if they’re willing to do it, then I think we can find locations where it could be done. COWEN: Do you worry about a negative selection effect in the volunteers? In a lot of your work, you’re concerned with corruption, quite appropriately, I would say. And could it be, the countries that want to do charter cities — well, it’s one branch of the government wants to do something a little funny without the other branches of the government seeing, and, in essence, cut its own deal, and that there’s something intrinsically worrisome about a country volunteering to do it. ROMER: I think this is one of these places where we have to be willing to just select from the feasible alternatives and not hold out for some ideal that we can’t achieve. It’s worth being specific here. My hunch is that China will eventually realize that the way to pay for the infrastructure that it’s building as part of this Belt and Road program is to do urban real estate development. The transport never pays for itself. It’s always the real estate that goes up in value that you use to pay for all of this. So I think that the Belt and Road project is inevitably going to turn to a version of new city, city-scale real estate development to finance what they’re trying to do. In parallel, the United States could be offering its own version of cities around the world that are new. There’s gains in the value of the land that pay for the stuff you want to do. Then to answer your question, would I be worried if that’s the way that China and the US compete with each other? Actually, no, I think that would be pretty good. The Chinese wouldn’t set up those cities and run them exactly the way that somebody from the United States might prefer. But if people who want to migrate could choose between a Chinese location and a US location, that would put some pressure on both the US and China to organize these new opportunities in ways that really benefit the people who will go there. 
COWEN: How important is religion for explaining economic development? You said before, norms are important, and charter cities, in a way, are identifying laws, rules, norms as a public-good legal structure. So why isn’t religion also a key? ROMER: Well, I think it’s important for us to think about what are the mechanisms that we use to try and shape norms over time. Some of them are just an invisible-hand process where nobody’s in charge and norms often, I think, go in directions that are beneficial and appropriate. There’s a great book you may know of, called The Civilizing Process, that looked from the Middle Ages up to the present, looked at norms about just what it means to be polite or civilized, even just table manners. And it’s really a fascinating account. Some of this happens automatically, but some of it happens because of activists and organizations and structures like churches, and we should be at least mindful of, what are the ways in which those different bodies can push norms? What are the ways that are beneficial to everyone, like that increase efficiency? What are the ones that might harm efficiency? How do we get more of the ones that increase efficiency? COWEN: Say I’m a Christian missionary. I’m working in Nigeria, and say I’m fairly persuasive and effective. Is it possible I’m doing more for economic development than any economist? ROMER: It’s possible, but you’d really want to look in detail and see which parts of the norms that are being conveyed there are beneficial and which parts are not. One also has to be thoughtful about the fact that you should ask, are the people who are being socialized into some new norms aware of what the transaction is? And are they agreeing in some sense? Do they actually have some agency and some ability to choose “Yes, I’m okay with this” or “No, maybe not”? This is why I like the migration decision, because it involves a more affirmative choice. If some missionaries set up a city and said, “Here’s how this city will work. You’re welcome to come,” and people could choose to go to that city or not, and can choose to leave if they don’t like it when they get there — I’d be a lot more comfortable with it. COWEN: How optimistic are you more generally about the developmental trajectory for sub-Saharan Africa? ROMER: There’s a saying I picked up from Gordon Brown, that in establishing the rule of law, the first five centuries are always the hardest. I think some parts of this development process are just very slow. If you look around the world, all the efforts since World War II that’s gone into trying to build strong, effective states, to establish the rule of law in a functioning state, I think the external investments in building states have yielded very little. So we need to think about ways to transfer the functioning of existing states rather than just build them from scratch in existing places. That’s a lot of the impetus behind this charter cities idea. It’s both — you select people coming in who have a particular set of norms that then become the dominant norms in this new place, but you also protect those norms by certain kinds of administrative structures, state functions that reinforce them. If we don’t pay attention to that and just keep doing what we’ve been doing in development assistance, I’m still fairly pessimistic about how many will make the radical transformation that China made. 
On reforming the World Bank COWEN: If you could reform the World Bank, what would you do? ROMER: Oh, that’s an interesting question. I think the Bank is trying to serve two missions, and it can’t do both. One is a diplomatic function, which I think is very important. The World Bank is a place where somebody who represents the government of China and somebody who represents the government of the United States sit in a conference room and argue, “Should we do A or B?” Not just argue, but discuss, negotiate. On a regular basis, they make decisions. And it isn’t just China and the US. It’s a bunch of countries. I think it’s very good for personal relationships, for the careers of people who will go on to have other positions in these governments, to have that kind of experience of, basically, diplomatic negotiation over a bunch of relatively small items because it’s a confidence-building measure that makes it possible for countries to make bigger diplomatic decisions when they have to. That, I think, is the value of the World Bank right now. The problem is that that diplomatic function is inconsistent with the function of being a provider of scientific insight. The scientific endeavor has to be committed to truth, no matter whose feathers get ruffled. There’s certain convenient fictions that are required for diplomacy to work. You start accepting convenient fictions in science, and science is just dead. So the Bank’s got to decide: is it engaged in diplomacy or science? I think the diplomacy is its unique comparative advantage. Therefore, I think it’s got to get out of the scientific business. It should just outsource its research. It shouldn’t try and be a research organization, and it should just be transparent about what it can be good at and is good at. COWEN: Do you regret the time you spent there? Or what would you have done differently? ROMER: Well, I was brought in to reform the research group. People in the Bank could tell that research was dysfunctional there, but shortly after I arrived, the number two, who I think had been behind this initiative, left to go take a position back as finance minister in Indonesia, and a different number two came into the Bank. In retrospect, what happened was that number two decided we’re not going to reform research. We don’t want any noise. Because you reform things, you’re going to get noise. You’re going to get complaints. All other parts of the Bank had been reformed. Research hadn’t. So I wasted 16 months talking to the number two and the number one and saying, “You understand if I’m really going to reform the research group, there’s going to be noise, and it’s going to be a little contentious. You really want to do this, right?” And they said, “Yeah, no, no, absolutely, full speed ahead. We’re totally 100 percent behind you. We totally agree with each other.” And they were just lying to me. So I would go out and try and do something, and they would undercut every simple thing I tried to do. What I regret is the dishonesty of the leadership and failing to just say what was true, which is, “We changed our minds. We don’t want to reform research anymore.” So I spent months and months doing really simple things like trying to move two direct reports, who reported to me, who didn’t have the integrity to have the kind of responsibility that they had. 
But I was facing a bureaucratic system that opposed moving these positions — and I’m not even talking about firing them, just moving them out of the critical positions so other people could fill those roles and do them correctly. I faced not only internal bureaucratic delays, but my bosses were undercutting me and stopping me from doing this. I finally figured it out and said I was going to resign. They told me, “Oh no, it’d do enormous damage to the Bank if you resigned.” And I still took what they said seriously. So then I went out and just got myself fired. I gave an interview in the Wall Street Journal, which I knew would make them mad. Then they said, “Okay, well, you broke the rules, so we have to have an investigation.” I said, “No, you don’t have to have an investigation. I broke the rules.” They said, “Okay, well then we have to put you on administrative leave, and you have to sign this agreement where you won’t say anything without our approval.” “I’m not going to do that.” And then they said, “Okay, well then you have to resign.” And I said, “Well, that was what I tried to do on Thursday. I resign.” And that was the end of it. COWEN: Why are you interested in the American philosopher, Charles Sanders Peirce? ROMER: Charles — COWEN: Charles Sanders Peirce. ROMER: Oh, is that how you pronounce his last name? COWEN: People say it “Pierce” sometimes, but Peirce is — ROMER: I was thinking “Pierce,” so funny. [This is better on the podcast -Ed.] COWEN: The pragmaticist, yes. ROMER: Because I’m really interested in science, and I think he was a very deep thinker about science from this pragmatic perspective of, how does it work? What does it accomplish? How can we get more of that? I think it was Tim Besley, actually, another economist, pointed me to him. I have to say, it’s heavy going to read his stuff, but I’m still quite interested. If you go back to what we were saying before, about what could an existing successful society bring when it sets up a new one, I used to think a lot about — and as economists, we talked a lot about — the rule of law. Law is, in some sense, the basis for things like honesty and trust. I’m starting to think that science may have actually been more important for the West in developing a culture where a reputation for integrity and telling the truth became something that was valued. Science may have actually been more important than we realize for that. COWEN: I very much agree with that — and engineering, right? And engineering also is a broader branch of science. And if you look today at software engineers who have to make things work, they tend to be blunt people who will frequently speak the truth. ROMER: Yep. When you think about this level of norms — a commitment that it’s a good thing to be honest; it’s a good thing to be disapproving of people who were found not to be honest — that’s very helpful because it helps build trust, and trust is an important part of social interaction. I think we may have underestimated the value of science, so it’s all the more important to support it. It isn’t just that it gives us some facts that feed into a discussion. It conveys norms about integrity. Also, there’s a harsh side to this, that when you are found to have misled people intentionally, those norms say you’re no longer taken seriously. You’re excluded. You’re not respected or listened to anymore. Those kinds of things are critical for supporting trust. 
I think that we should learn how to protect science and get it to do its job better in building those norms and encouraging trust. COWEN: What do you find most interesting in French fiction? ROMER: Well, actually, let me just bore on for one minute about this. COWEN: Sure. ROMER: One of my predecessors at the World Bank as chief economist, Justin Lin, has a very interesting paper on this puzzle of why didn’t China develop the industrial revolution. His argument is basically that China — because there were so many people looking and discovering; they discovered a lot of things, like gunpowder, steel, printing, and so forth — but what China didn’t do was invent the social system we call science. They had some knowledge and some technology. They didn’t invent science. And what was different in Europe was the invention of science. I found that argument really compelling, and I’ve taken it one step further and think that part of what the West benefited from were notions about integrity and individual responsibility for what we say that fostered trust, and that science indirectly gave us those things. For any country around the world, it’s worth thinking about — if you’re short on that, if there’s a tendency for a lot of people to cheat on their taxes, to lie about what’s true, if there’s norms that hold a society back in those ways, I think it would be good to think about, how do we rebuild a system where we respect and admire people who consistently tell the truth, and where we look down on, disapprove of people who are found to have intentionally misled us? COWEN: Do you think the evolution of science in the West has much to do with Christianity and Christian norms, which do emphasize some of those values? And science evolved in the West, right? And out of the church. ROMER: That’s a very good question. I speculated in one group meeting about a difference between the Old Testament version of Christianity and the New Testament version. And my conjecture was that some of the Old Testament norms were closer to the ones that matter for science. Christianity really succeeded by competing with other religions, partly because it brought in redemption, forgiveness. The New Testament version of Christianity was a softer, kinder form of Christianity. It may be the older form of Christianity, which is a tradition shared with Judaism, where there was a little bit more strictness about truth and integrity and more harmful consequences from violations of that. It may actually be that earlier tradition that was the one that was most beneficial. I tried to say this about Old Testament values, and somebody accused me of being anti-Semitic. I was talking about Christianity, and I was actually saying it was good, so I don’t really quite understand. But one has to be a little careful when you talk about these issues. COWEN: French fiction — what do you find most interesting in that area? ROMER: Oh, we have a division of labor in my house. My wife is the one who you should ask about French fiction. Right now, her goal is to get me to read any fiction at all. I’m heavily biased towards nonfiction, and she’s trying to broaden my horizons a little bit. COWEN: But fiction is arguably one of the best ways to understand the norms of a society, right? ROMER: Yeah, that’s true. So what am I going to cite to support that? A piece of nonfiction. 
A colleague of mine at NYU who had served as dean for many years — he looked at a large sample of promotion cases, and he then tried to generalize, what are the differences between the humanities and the sciences? What makes these things tick? Where are they similar? Where are they different? He wrote a really nice book called The Geography of Insight that talks about what’s distinctive about humanities, as opposed to sciences, and how they both contribute to a better understanding of the world that we live in. COWEN: Last question thread, what did you learn at Burning Man? ROMER: Sometimes physical presence is necessary to appreciate something like scale. The scale of everything at Burning Man was just totally unexpected, a total surprise for me, even having looked at all of these pictures and so forth. That was one. Another thing that really stood out, which is not exactly a surprise, but maybe it was the surprise in that group — if you ask, what do people do if you put them in a setting where there’s supposed to be no compensation, no quid pro quo, and you just give them a chance to be there for a week. What do they do? They work. What people do at Burning Man is they go there and they work. They’ll do a different job, like they’ll work as part of the volunteer police force, or they’ll help maintain sanitation. They’ll work to set up something which offers a service to other people. But there’s enormous satisfaction that we draw from accomplishment and the provision of the output that we produce, making it available to others. If somebody asked me, “What’s a post-scarcity society going to look like?” Somebody actually said this to me there. He was like, “What does post-scarcity society look like?” People work hard because they like it. They work on things that they care about and they think others will care about, and that’s an encouraging insight, I think, about people. COWEN: We can leave it at that. Paul Romer, thank you very much. Hope we can do this again someday. ROMER: Good. My pleasure.
https://medium.com/conversations-with-tyler/paul-romer-tyler-cowen-science-economics-covid-19-93276c8a57dc
['Mercatus Center']
2020-05-20 12:40:29.865000+00:00
['Authors', 'Economics', 'Podcast', 'Covid 19', 'Coronavirus']
In What We Trust?
As a blockchain technology follower, I get excited when I see more and more use cases for blockchain. Among other factors, trust is driving this boom. As people increasingly believe in the promising future of blockchain, it is time to ask: Where is the trust coming from? In what respect are we trusting blockchain? A Very Quick Overview of Blockchain Many people perceive blockchain as a type of database, and that is good enough as a starting point. Blockchain is a method of storing data with strong integrity protection, which makes it almost impossible to alter the stored data after the fact. Source: https://anders.com/blockchain/blockchain.html Data are grouped and stored in blocks. Each block is hashed in such a way that any change to the data inside is easily detected, and therefore invalid. Each block is also linked to the previous one, so changing one block (and any data stored inside it) invalidates every block that follows (a short code sketch later in this article makes this concrete). While this mathematical magic (cryptography) can be done on one machine (node), a blockchain protocol is always distributed: it is executed by multiple nodes, which agree on the same outcome through a predefined consensus process. Variety of Implementations What is described above is only the basic technical aspect of blockchains. How it is implemented matters more. In the market there are several implementations for different situations, and they carry different levels of trust. I roughly divide them into four categories. Enterprise-owned Blockchain Here the blockchain is owned by a single enterprise, and it is therefore also known as an enterprise blockchain. Enterprises consider blockchain mainly for optimizing their existing systems and processes for cost saving or data protection (e.g. streamlining a workflow), or for creating new types of service (e.g. a database with stronger integrity protection). More and more use cases and proofs of concept (POCs) for enterprise use of blockchain appear nowadays. Different camps are building enterprise blockchain frameworks. The most common one today is Hyperledger Fabric, part of the Hyperledger projects hosted by The Linux Foundation. Others include Quorum, MultiChain, and various pre-built vertical solutions. Note that all of them are permissioned blockchains, meaning that only those who are allowed can participate in the operation and use of the blockchain. All the nodes are owned and maintained by the enterprise, so the security measures are determined by the enterprise as well. For example, the enterprise decides how many nodes run the consensus and where the nodes are distributed geographically, which determines the level of robustness and resilience against DDoS attacks. In general, end users do not perceive any additional value from the blockchain. For example, if a bank streamlines its backend system with blockchain technology, then unless that shows up as a cost reduction for the end user, no one cares whether it is blockchain or not, or whether the bank does this at all. When a company offers a blockchain-based solution, users care about the type of service, the company’s reputation, and the service level agreement rather than the term “blockchain”. The trust rests more on the enterprise than on the technology. Consortium Blockchain Another big driver of blockchain use cases is the consortium. In general, a consortium is formed by a group of business entities with common interests. It can be formed within a single vertical (like the insurance industry) or across verticals to achieve a business objective (e.g. 
supply chain management for companies of various types). Consortia largely adopt existing enterprise blockchain frameworks, and the market also offers pre-built solutions for specific verticals. These are still permissioned blockchains, as only consortium members can participate. Every participant in the consortium has an incentive to run and maintain the blockchain as long as it benefits from doing so. For example, supply chain management gains automation, faster business outcomes, and transparency after certain processes are implemented on blockchain, which benefits all participants in the consortium. As before, from the user’s perspective, unless they feel the benefits they do not really care whether it is blockchain or not: it adds no value unless users get their insurance claims settled faster or gain more transparency over the whole process. Comparatively speaking, though, users place higher trust in a consortium blockchain than in an enterprise one. The blockchain is no longer kept and maintained by a single party but by a group of enterprises that each have an incentive to keep it running honestly. A single company might modify data for its own reasons; a consortium is less likely to do so. Public Blockchain A public blockchain is not owned by anyone; it is an agreed rule set (protocol) that every participant is willing to follow. Famous examples are Bitcoin and Ethereum. Bitcoin has the longest history and is still considered the most successful and sizeable blockchain implementation. Public blockchains are permissionless, meaning that anyone can join or leave at any time. Every new join strengthens the operation of the blockchain, while a leave does little harm to it. In public blockchains we first encounter the term decentralization. No single organization, be it an enterprise, a consortium, or even a government, controls the blockchain and its operation. No one can shut it down completely, and no individual can take control of it unless they hold a majority of the nodes. In this sense public blockchains are even more robust. (We are not talking here about the value of their assets and its volatility; that is determined by market forces.) To a certain extent, users trust public blockchains because of this decentralization. Just consider how many digital assets (like tokens) run on public blockchain platforms, and their overall value. These blockchains are believed to withstand stronger DDoS attacks and to resist manipulation by any single party. Nothing comes free, though. Using public blockchains (for example, transferring bitcoins or invoking a smart contract) comes with a cost, reflected as transaction fees. Much effort is going into improving this, either by reducing transaction fees or by introducing new fee-free technologies. Smart Contracts on Public Blockchain A smart contract is a program running on a blockchain platform. Contract variables are stored, and contract functions are executed, inside the blockchain in a deterministic way. Smart contracts open the door to using blockchain in the business world: transactions are executed and enforced according to the code in the contract. In particular, smart contracts on public blockchains enable a new type of application (commonly known as a decentralized application, or DApp) that offers more robust data storage and automated yet integrity-protected data management. While tokens (or coins) dominate the use of smart contracts today, more innovative DApps are coming. 
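Before turning to where smart contracts run in practice, it may help to make the hash-linking described in the overview above concrete. The following is a minimal, illustrative Python sketch of my own — real blockchains add consensus, peer-to-peer networking, digital signatures, and Merkle trees on top of this — showing how altering one block breaks it and every block after it:

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's contents, including the previous block's hash."""
    payload = json.dumps(
        {"index": block["index"], "data": block["data"], "prev_hash": block["prev_hash"]},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    """Append a new block linked to the hash of the last block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    """Tampering with any block's data breaks its own hash and every link after it."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"from": "Alice", "to": "Bob", "amount": 10})
add_block(chain, {"from": "Bob", "to": "Carol", "amount": 4})
print(is_valid(chain))            # True
chain[0]["data"]["amount"] = 999  # tamper with stored data
print(is_valid(chain))            # False - the alteration is detected
```

On a single machine this data structure proves little by itself; it is the distributed consensus over such a chain, run by many independent nodes, that turns it into something worth trusting.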
Ethereum is by far the most common platform on which smart contracts run, and new platforms are emerging. While the platform itself is permissionless and decentralized, and therefore carries a certain level of trust (see above), the smart contract itself is not necessarily so. Bear in mind that the contract owner (who writes the contract code and deploys it on the platform) decides the rules implemented inside the contract. From this perspective, contracts are not as decentralized as the platform itself. What happens if a contract owner keeps a backdoor that lets him modify the tokens that he or others own? To gain users’ trust, contract owners therefore usually take additional measures: as common practice, they open the contract code to the public or invite third-party audits, so that users — and investors — have more confidence in it. Summary This article by no means claims that one implementation is better than the others; the choice is largely situational. By understanding the level of trust, users and investors know better what they are dealing with and can decide how much to trust it. And application owners, when they try to attract users and customers, need to give more thought to that trust level. For example, one common question is whether I should build and run a separate blockchain for my application or leverage the public blockchains. “Blockchain” itself and its mathematical magic are not convincing enough; how it is implemented can make a real difference for the business.
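As a closing illustration of the point above about contract owners, here is a deliberately over-simplified, hypothetical token modelled in plain Python (production contracts are written in languages such as Solidity and deployed on-chain; this sketch only shows why reading or auditing the contract code matters):

```python
class SimpleToken:
    """A toy token 'contract'. The owner-only mint function is exactly the kind of
    power a code audit would surface: the platform may be decentralized, but the
    contract's rules are whatever the owner wrote into them."""

    def __init__(self, owner, initial_supply):
        self.owner = owner
        self.balances = {owner: initial_supply}

    def transfer(self, sender, recipient, amount):
        # Ordinary rule every holder expects: you can only spend what you have.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def mint(self, caller, recipient, amount):
        # Only the owner can create new tokens out of thin air -- a rule users
        # can only discover (and weigh) if the code is published or audited.
        if caller != self.owner:
            raise PermissionError("only the owner can mint")
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

token = SimpleToken(owner="alice", initial_supply=1_000)
token.transfer("alice", "bob", 100)
token.mint("alice", "alice", 1_000_000)  # dilutes every other holder
```

The platform may be fully decentralized, but the mint function above is whatever rule the owner chose to write, which is precisely what a published code base or a third-party audit lets users and investors verify before they place their trust in it.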
https://kctheservant.medium.com/in-what-we-trust-1457f9bc11b
['Kc Tam']
2018-04-27 04:10:58.821000+00:00
['Trust', 'Smart Contract', 'Blockchain', 'Dapp']
How To Deal With The Adversities Of Life
On July 19th in 64 AD, a fire broke out in Rome. Within just six days, the world’s most prosperous city was almost completely destroyed. Ten out of Rome’s fourteen districts burned down to the ground, leaving dozens of buildings in ruins, hundreds of people dead, and thousands more homeless. To this day, historians argue whether emperor Nero ordered the fire himself to take credit for the splendor of a rebuilt Rome. Regardless of the disaster’s origins, rebuild the city he did, in part thanks to a big donation from Lyons. Just one year later, Nero had a chance to return the favor: Lyons, too, burned down. While he sent the same sum, Rome’s premier philosopher thought about the irony of it all. Remembering when he stood in the rubble of his own city’s decimated remains, Lucius Annaeus Seneca shows empathy for a friend: “Sturdy and resolute though he is when it comes to facing his own troubles, our Liberalis has been deeply shocked by the whole thing. And he has some reason to be shaken. What is quite unlooked for is more crushing in its effect, and unexpectedness adds to the weight of a disaster. The fact that it was unforeseen has never failed to intensify a person’s grief.” You may not have had to mark yourself as ‘safe’ on Facebook during a fire, an earthquake, or a tsunami, but you’ve surely had things go very wrong very suddenly. Maybe you got fired instead of promoted. Maybe a loved one died unexpectedly. Maybe an illness disabled you for three months out of the blue. We’ve greatly reduced the toll on human life taken by natural disasters since the Roman age, but as individuals, we’ll all encounter surprising twists of fate at least a few times over the course of our life. If these twists are unfortunate, their suddenness adds to our pain. At worst, it might incapacitate us for years. When adversity is all but guaranteed, how can we stop it from paralyzing us? The Jurisdiction Of Fortune Never one to point out a problem without a solution, Seneca offers multiple comforting alternatives. The first and most obvious is preparation: “Therefore, nothing ought to be unexpected by us. Our minds should be sent forward in advance to meet all problems, and we should consider not what is wont to happen, but what can happen.” Humans aren’t perfect. Our brains are flawed and as individuals, we all have a unique, but limited perspective. Nonetheless, being the simulation machines that we are, few things are inconceivable to us. We might never be able to expect everything, but we can make a lot of accurate projections. We can ruminate on the duration of the good times we live in and consider what’ll happen if they end. We can extrapolate some of the bad eventualities bound to come and make guesses where they will come from. Finally, we can acknowledge that, contrary to Murphy’s law, not everything that can go wrong will — but it might. As we make plans and execute them, this helps. The second thing Seneca offers to his friend Liberalis is perspective: “Therefore, let the mind be disciplined to understand and to endure its own lot; let it have the knowledge that there is nothing which fortune does not dare — that she has the same jurisdiction over empires as over emperors, the same power over cities as over the citizens who dwell therein. We must not cry out at any of these calamities. Into such a world have we entered, and under such laws do we live.” It’s true that life may sometimes render even our best efforts useless, but in this powerlessness, at least we are not alone. 
Even nature itself is no match for a universe governed by the forces of change, Seneca says. Mountain tops dissolve, entire regions perish, hills are leveled by the power of flames, and landmarks are swallowed by the sea. And yet, not one of these events can live up to the rumors about it we indulge in. Especially because often, setbacks are actually the beginning of something better. While these are all formidable coping mechanisms, somehow, none of them seems to capture the true essence of the problem. Lucky for us, Seneca did. Finding True Equanimity Despite Seneca’s various attempts at providing relief, when it comes to fate’s toughest blows, there is a sense of discomfort that’s hard to shake. The mere thought of losing a friend or watching our house burn sends shivers down our spine. That’s because at its core, all adversity reminds us of a dark truth: “It would be tedious to recount all the ways by which fate may come; but this one thing I know: all the works of mortal man have been doomed to mortality, and in the midst of things which have been destined to die, we live!” We live in a world in which everything has been designed to die. Including us. As a result, it matters not so much if our misfortunes are unpredictable or if they happen to someone else rather than us, for these modalities merely determine the intensity of the underlying, universal reminder: all things die. It’s painful to watch anything crumble, knowing full well we’re bound to meet the same fate one day. Every dried plant, every dead animal, every decaying building, broken chair, and crumpled piece of paper; they’re all constant little notices that, one day, our time too will be up. Facing this truth is uncomfortable, but it is exactly in this confrontation where true equanimity lies. According to Seneca, death is the equalizing constraint allowing us to “make peace again with destiny, the destiny that unravels all ties:” “We are unequal at birth, but are equal in death.” Emotional suffering is a subtle complaint about the unfairness of life. Why didn’t your relationship last? Why weren’t our career expectations met? Why do fake news, armed robberies, and disturbing videos exist? All of these are moot questions once you accept that everything eventually comes to an end. We can’t predict all of life’s eventualities, but we also don’t need to, because every possible outcome is still an outcome that will pass. Life has always consisted of both creation and destruction, the universe’s balancing forces. If anything, we are the ones beating the odds. Our very existence is defiance. Maybe that’s why we’re so easily upset by it. We’re the ones who get to live the longest, to witness the world and what’s in it, to contemplate the circle of life. This is the condition we lament when, actually, we should be grateful for it. A Strange Fact Of Life Earth has always wreaked the occasional havoc on its inhabitants. And while our grasp on fortune’s worst calamities gets stronger and stronger, no one can dodge all of life’s curveballs. Because abruptness adds emotional anguish to our many challenges, Seneca suggests we should prepare for all imaginable possibilities. Like setting the dinner table every night, it won’t protect us from uninvited guests, but it’ll allow us to welcome them when they show up at our door and at once begin. We are also not alone in facing our ordeals, for fate makes halt for none. Neither our cities nor our neighbors will be spared; even nature must remake itself. 
Luckily, every rock bottom we hit is a chance to build something better. Unfortunately, none of Seneca’s great advice can shield us from the true source of adversity’s paralyzing discomfort: we live in a world destined to die. The transience of life is tragic and we don’t like being reminded of it. At the same time, it is this very fragility that unites all things in the universe. Only if we embrace it can we move past the expendable questions that make our lives miserable. There is no need to prepare for every case, because all cases are subject to change. Mortality is the great equalizer, but what prelude could offer more cause for gratitude than the experience of being human? It goes back only to medieval Persian poets, but the old adage might as well stem from our favorite Roman philosopher himself: this too shall pass. It’s a strange, but also rather beautiful fact of life, don’t you think?
https://medium.com/personal-growth/how-to-deal-with-the-adversities-of-life-1a1a9286aa48
['Niklas Göke']
2018-11-11 00:25:43.834000+00:00
['Self Improvement', 'Life Lessons', 'Life', 'Psychology', 'Culture']
Beginning Python Programming — Part 13
Beginning Python Programming — Part 13 Diving into asynchronous code Photo by amirali mirhashemian on Unsplash In the previous article, we covered iterators and generators. In this article, we are going to dive into asynchronous code, or code that can do multiple things at once. Just a word of warning: this lesson is going to be hard. It’s going to require that you have a good grasp on everything we’ve covered so far. The good news is that after this, the rest will be easy in comparison. While most articles use sleep() when giving examples of how async programming works, I promised a friend that I would avoid using this syntax to explain async code. All you need to know is that when you call sleep(3), the program’s execution will wait for 3 seconds before resuming again. Before we dive in, we need to cover a few terms that will be used throughout this article. Coroutine — the lowest level of async programming, this is a function that does stuff asynchronously. These can be run directly or used in a task. Task — used to schedule coroutines for async execution by the system. Future — used to place coroutines on hold until another coroutine is completed. (Think of this as background code that needs to fetch data before it can complete.) Event Loop — a loop that iterates over one or more tasks until completed. In a web server, this might be an infinite loop waiting for a client connection. This is the core of async in Python; it orchestrates all of the work done in the background. CPU — Central Processing Unit; the piece of hardware in your computer that performs calculations (i.e., work). CPU Physical Core — CPUs today contain multiple cores. These cores may include logical processors (e.g., Intel i3, i5, i7); each physical core is responsible for processing data or handing off work to a logical processor. You can touch physical cores if you tear down a CPU. Logical Processor — If a physical core contains logical processors, these perform work. Logical processors only exist because of software, but this allows your computer to do multiple things at the same time (streaming music while writing code while downloading a file, etc.). Logical processors are the reason you see eight cores on a quad-core machine. Thread — a queue within a physical or logical processor that performs work. Processors can contain multiple queues; they are just scheduled based on priority. Since computers have multiple cores, your code may run on the main thread of one core while your user interface runs on the main thread of another. Any code that does not run on the main thread is considered to run in the background. Awaitables asyncio is one module we can import to write asynchronous code. It is designed for situations where you have many connections with slow I/O (input/output) and works hand in hand with the two keywords that matter right now — async and await. A good example of where asyncio would be used is a web server. An awaitable is an object that can be used in an await expression. async is used in front of def for any function that contains await. Coroutines and Futures While I gave you a definition above, I wanted you to have a basic understanding of the big picture before getting into detail. I know this looks like a lot of code, but we will work through it one section at a time. This is a rework of a script I found here. At the top, we have two imports: asyncio and json.
asyncio is what will be doing the heavy lifting when returning data to clients and json is just used to encode our dictionary to a JSON structure that we can return to each client. The next method, async def connection(reader, writer), is used to respond to a client when it asks for data. Because we used async, this is a coroutine whose execution can be suspended, so the server can process other requests while this one is still putting together a response. reader is used to pass the request to our function, and writer forms a response to return to the client. First, we read the request using await reader.read(1024). When we ask for something, there is usually some data that comes with the request. This data tells us what the client wants to do and how it wishes to do it. It’s up to our server to fulfill this request as long as it is valid. Here we are reading the first 1024 bytes, which should be enough for any GET request. We use await to put this method on hold until all of the data can be read. This means that reader.read(1024) is a future. Next, we begin building our response. Since we need to tell the client how to understand the data we are sending back, we need to include a header. In this header, we tell the client a few things: HTTP/1.1 — the version of HTTP we are responding with; 200 OK — the status code of the request, 200 means successful; Content-Type: application/json — the content type we are returning will be JSON. You might notice we have \r\n included after each line of the header, and twice at the end. \r is an escape sequence that creates a carriage return (CR). This is representative of old typewriters when you hit the edge of the page. A carriage return was needed to bring the carriage (the bar that holds the paper) back to the left margin of the page. \r was used for new lines for Mac OS versions earlier than X. \n is an escape sequence that creates a line feed (LF). Back to the typewriters; a line feed was when you advanced the page down to start a new line. \n is still in use in Linux and macOS systems today to represent new lines. \r\n is the idea of doing both a carriage return and a line feed (CRLF). While it is a more accurate description of what happened when people were using typewriters, it was also adopted for creating new lines in Windows. It is also a big reason why your code sometimes doesn’t work when you switch between Windows and Linux (or Windows and macOS). Ok, back to it. Next up, we have data, which is just a random bit of JSON that I created using this script, modified slightly for brevity, and stored as a dictionary in the snippet. Next, we convert our dictionary to a JSON string using json.dumps(data). dumps stands for “dump string.” Since our web server only likes dealing in bytes, we cast it to a bytes object, giving it a UTF-8 encoding so our client can parse it correctly. We then use writer.write(header + body). First, it concatenates header with body, then we write this data to the stream to return to the client before closing our writer, which effectively flushes the writer’s buffer and deletes the object from memory. async def main(host, port) is a coroutine used for handling the requests that come into the web server. It accepts a host and a port as parameters, which will be used in the body to create a basic web server. server = await asyncio.start_server(connection, host, port) is long but very simple to understand. We create a server object using asyncio.start_server.
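Here is a rough sketch of the server we are walking through. It is not the exact script I reworked, and the JSON payload below is just a stand-in for the generated dictionary, but it shows the shape of connection() and main(). The main() and asyncio.run() pieces are explained in the next section.

import asyncio
import json


async def connection(reader, writer):
    # Read up to 1024 bytes of the request; enough for a small GET.
    # (We don't actually parse the request in this sketch.)
    request = await reader.read(1024)

    # Response header: each line ends with \r\n, and an extra \r\n
    # marks the end of the header block.
    header = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: application/json\r\n"
        "\r\n"
    ).encode("utf-8")

    # Stand-in payload; the original used a larger generated dictionary.
    data = {"message": "Hello from asyncio"}
    body = json.dumps(data).encode("utf-8")

    # Concatenate header and body, send it to the client, then close.
    writer.write(header + body)
    await writer.drain()
    writer.close()


async def main(host, port):
    # Wire the connection() coroutine up to incoming clients.
    server = await asyncio.start_server(connection, host, port)
    async with server:
        await server.serve_forever()


try:
    # asyncio.run() creates the event loop and drives main() to completion.
    asyncio.run(main("0.0.0.0", 8000))
except KeyboardInterrupt:
    print("Stopping web server")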
This is a future that will create a web server using our connection() coroutine from above, a host, the IP address of the server this will be hosted on, and the port on the server that we will accept connections from. async with server is new syntax that we haven’t covered yet. with essentially does the creation and any cleanup needed after we are finished with whatever we are trying to do. async with server kicks off the server using async, and if we run into any issues, it shuts down the server cleanly. Here we use it to run a future, await server.serve_forever(). That’s right, the server that was created for us above also comes with an async function. Finally, we need to set up the server. asyncio.run() creates an event loop that allows us to run our coroutines. main("0.0.0.0", 8000) is the coroutine that we pass in because it will eventually call our connection coroutine. If you are unfamiliar with IP addresses, here’s a quick rundown: 127.0.0.1 — there’s no place like home, this is your local loopback address. 0.0.0.0 — this tells the server to host on all IP addresses on all interfaces (network cards). If you have both an Ethernet port and a wireless card, the server will be available on both interfaces. Be careful with this; you may want to bind to only one network interface, and Ethernet is always preferred. Every computer has a number of ports available to it, 65536 to be exact, although we start from 0, so the maximum port number is 65535. Be sure to review the well-known ports (there are 1024 of them). You should stay away from these as much as possible during testing since they are reserved for specific functions. Port 80 is for HTTP and 443 is for HTTPS. While I can’t say I’ve remembered all of them, I remember the ones I use the most. There are other ports you should watch out for, such as PostgreSQL — 5432, MSSQL — 1433, 1434, MySQL — 3306, and RDP — 3389. These are essential things you will become familiar with the more you use them. Back to it! You might notice that I wrapped this in a try/except block. I did this because when I pressed ctrl+c on my keyboard, the program crashed. I didn’t like it, so I handled the exception KeyboardInterrupt, which occurs when the execution is interrupted by the user. I could have just passed and let it end quietly, but decided to print “Stopping web server” before exiting just to be friendly to the web admin. Photo by Glenn Carstens-Peters on Unsplash Tasks When it comes to tasks, the first thing to remember is that tasks are not thread-safe. What this means is that if data is changed by one coroutine that another coroutine uses, there could be some unexpected results, or worse, your program could crash. In terms of thread safety, think of it like this: We both drive the same car. We have to plan around who drives at what time. If I drive the car to work between 8–5 and you try to drive the car at 10 am, you won’t be able to because the car isn’t there. Translated: Let’s assume we have a buffer that can contain any data we want to store in it. Then we have two tasks that use this same buffer to perform work. One task uses an integer data type, and the other uses strings. The first task stores 42 inside of the buffer, the second task replaces the contents of the buffer with “Hello,” then the first task tries to add 1 to the contents of the buffer… Do you see how this could create a problem for us? With that warning out of the way, let’s dig into tasks. Tasks are used for scheduling coroutines.
Tasks can also be used to have multiple coroutines run at the same time. There are several methods available to tasks that allow you to cancel tasks, return results from tasks, inspect the status of a task, and add or remove callbacks to tasks. A callback is essentially a method or function that a task will call when it is completed. If I had a task that printed “Bob” to the screen, it might have a callback that called a function that printed “Program finished” when it finished. Time for an example that I borrowed from this page. As usual, we import asyncio. We then have a count function which takes the current run number as an argument. Then we print 100 iterations for each run. We’ve done all this before. create_tasks is an async function which creates an inner event loop using asyncio.get_event_loop(). It provides an event loop that we can use to execute tasks on. I then created a list comprehension that generates 300 tasks to run. Inside of this list comprehension, we use inner_loop.run_in_executor. The first parameter of this function wants a concurrent.futures.Executor instance. If we pass in None as we did here, we use the default executor. This is fine for our needs. The second parameter, count, refers to the function that we wish to call concurrently, that is, at the same time. Finally, i is the argument that we pass in, which will be handed off to the count function when it is called. Finally, we use for _ in await asyncio.gather(*tasks) to combine all of our tasks into a single future that aggregates their results. Our program will start off creating an outer loop which will be used to schedule the inner loop, telling it to run_until_complete and passing in the function create_tasks() as the only task it will need to perform. Once the task is complete, the outer loop closes. While it may appear that we forgot to close the inner loop, it was automatically closed when create_tasks ended. If we attempted to close it at the end of the function, we would get a runtime error that we cannot close a running event loop. I want you to run this code and look for anything that appears strange in the output. If you use small ranges, you’ll see fewer anomalies, perhaps none. But don’t worry if you do spot some, because we didn’t create all of these tasks in a thread-safe way. If we wanted to do that, we’d handle each task’s result one at a time if they were all being stored in the same place.
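If you want to experiment, here is a sketch that follows the description above. It is not the exact snippet from the page I borrowed from, so treat it as an approximation; with smaller ranges you will see fewer interleaved lines.

import asyncio


def count(run):
    # A plain, blocking function: print 100 iterations for this run number.
    for i in range(100):
        print(f"run {run}: iteration {i}")


async def create_tasks():
    # Grab the event loop and hand the blocking count() calls to the
    # default executor (None) so they run concurrently.
    inner_loop = asyncio.get_event_loop()
    tasks = [inner_loop.run_in_executor(None, count, i) for i in range(300)]

    # gather() aggregates every future into a single awaitable result.
    for _ in await asyncio.gather(*tasks):
        pass


# On newer Python versions you would normally just call
# asyncio.run(create_tasks()) instead of managing the loop by hand.
outer_loop = asyncio.get_event_loop()
outer_loop.run_until_complete(create_tasks())
outer_loop.close()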
https://medium.com/better-programming/beginning-python-programming-part-13-6147ce4cd88d
['Bob Roebling']
2019-06-20 17:52:25.348000+00:00
['Coroutine', 'Python', 'Async', 'Programming', 'Asyncio']
Dysphoria Isn’t New
This story starts with a girl. Stacy’s long, black hair hung straight down — impossibly far — to her waist. It was the biggest thing about her. She’d smile and twist it over an ear when she answered the phone, or when she stood up to walk across the small waiting room. I watched the way she walked, the way she kept her balance in heels, the way she sat down in a skirt. She probably thought I had one of those crushes twelve-year-old boys get on women just out of college, but it wasn’t that. I was studying her, because: I wanted to be a girl. I wanted to be just like Stacy. When you're transgender, you can hide from everyone-except yourself. Photo Credit: Photo by DANNY G on Unsplash That’s what brought me to that small waiting room in the therapist's office, filled with white noise machines that made sounds like the crowds cheering on Bo Jackson or Roger Clemens or Daryl Strawberry in the baseball games I still had time to watch on TV in those days. The Saving Solace of Imagination I would close my eyes in that waiting room. I’d pretend I sat in the stands — in the bleached denim miniskirts the girls wore then. In a tee-shirt cut short and tight in the sleeves so my bra wouldn’t show. With my long, dark hair clipped back in a banana clip, blowing in the breeze trailing in across the bleachers. Everyone would just see me and see a girl, in the stands, like all the other girls there. But, then, Stacy, with the real, long, dark hair would call my boy-name. It was my turn to see the therapist. I’d die inside, open my eyes, and see the jailyard-blue uniform pants I’d worn that day to school. I’d stand up, walk past the real girl and into Kevin’s office. I’d catch a glimmer of her bare, smooth leg as she shut the door. “Are You Still Cross-Dressing?” “So, how have you been since last time?” Kevin the therapist would ask, eyes cast downward onto his clipboard, reading last week’s notes so he could remember who I was. Even I didn’t know who I was. My doctor had sent me to Kevin when my pants starting falling off my waist, even with the belt tightened. The anxiety attacks had melted 20 pounds off my 140-lb. frame. I talked myself out of suicide a few times a month. But, I didn’t tell anyone that. Not even Kevin the therapist. “Are you still cross-dressing?” I’d shake my head because no words would come without tears and sobs. The DSM in the school library — the bible these therapists used to distinguish the mentally ill from the sane — had anxiety, crossdressing, and transsexualism all listed as mental disorders. The crazy, sobbing people on Phil Donahue or Sally Jessy Raphael had disorders like those. So did criminals in jail. I was two, maybe three, sentences away from being sent to a mental institution, the ones with the white, soft walls. You could spend the rest of your life in one of those. Kevin’s Tests During our visits, Kevin would lean back in his pleated khaki pants and cross his legs across the knees, the way men did, not women. He’d rub his bald head. And squint at me, like he was going to say, “Chin up, Chief, you’ll hit the next pitch.” But I wanted to be the girl in the stands. He just ran his tests. This went on for weeks. By that time, I could twirl the hair in my wig the way Stacy twirled hers between her fingers when I was the day’s last appointment. I could cross my legs like she did when she had pushed back from her desk, phone cradled against her shoulder. But, I’d be a ‘fag’ if anyone ever saw me do any of these things, even if I was good at them. And ‘fags’ got beat up. Bad. 
“Tell me what word comes to your mind when you hear the word, 'barn.'” “Cat.” “Girl.” Kevin asked, eyes on his notes, pen in hand, ready to record my delusions. I told him. His pen scratched across a fresh sheet of white paper. On the day after Stacy wore a dark skirt with sheer black nylons, he moved onto ink blot diagrams. “Tell me what you see on this card.” I told him. More scribbling. “This one.” I told him again. He nodded. “You Have an Addiction to Shame.” Then, on the day I perfected mimicking the way Stacy’s voice rose at the end of her sentences, Kevin pulled a big packet of papers out of his desk drawer, the paper ribbons still attached where they had pulled them through his dot-matrix printer. “You’re a very intelligent young man,” he started. My mind shut down, and pulled me away to wonderous experiences like slumber parties and make-up experiments, and shopping at the mall that I imagined all girls-allowed-to-be-girls got to do. “But, you’re lazy,” he continued, pulling me back into my lonely world of baggy pants and polo shirts. “You kind of gave up at the end of each test.” I nodded. I did… give up. My imagination allowed me to escape dysphoria, for a while. Photo by JR Korpa on Unsplash “Do you think, maybe…” Kevin started. I listened because I knew I had to. “Do you think maybe, you’re not addicted to women’s clothes at all, but you’re, maybe, addicted to shame? The shame that you think comes with a boy wearing girl’s clothes?” That night, I took a long shower — like the ones I took when I wanted to be alone, when I needed to be Stacy, or just someone else. To be free from this war that raged every moment between my mind and my body. I closed my eyes and clutched a bottle of sleeping pills, hard until my fingers hurt. I leaned against the cold tile of the wall and tried to convince myself — again — not to swallow every single pill so I wouldn’t be mentally ill and addicted to shame anymore. That night, I didn’t practice anything I had seen Stacy do or say that day. I shuddered at the gulf between who I was and how the world saw me. I felt as if I had already fallen into that gulf, spinning and spinning, living only until the moment I crashed upon land again. Only fifteen years later would I first hear the words dysphoria and transgender. I almost didn’t make it.
https://medium.com/empowered-trans-woman/dysphoria-isnt-new-8521fe0d30a4
['Arabelle J.']
2020-04-18 03:03:11.042000+00:00
['Adolescence', 'Mental Health', 'LGBTQ', 'Dysphoria', 'Transgender']
“You Might Have To Kiss A Few Frogs” 5 Leadership Lessons with Julie Leffler, of Big Hype Marketing
Can you share the funniest or most interesting story that happened to you since you started your company. The funniest and most interesting thing is that our clients think of us as their internet and email help line. At least once a week we have a client that needs help figuring out how to work their email, the internet, how to reset passwords, etc., things that they just need help with and that are not part of the service that we are providing them. We are happy to help our clients when we can, who else are they going to ask? What do you think makes your company stand out? Can you share a story? I think there are two things that make us stand out….1. We work with small to medium size businesses so we are usually working directly with the owner of the company or at least they are aware of what’s going on with our marketing strategy and efforts. I like making a difference in a business that is individually owned, where our efforts are directly making a difference in the lives of the owner, their family and their employees lives. 2. At a lot of other web developing companies, a developer is designing the websites on their own. Although they have the knowledge to build a website, they don’t usually have the creative side. At Big Hype Marketing, our creative team plans out the website, chooses the photos, comes up with the creative, content and design and then brings in the developer to discuss more ideas, different functions of the site and gets his input on the project before he then builds the site. I also review all website creative before it goes into production to make sure it is up to my creative standards as well as includes all of the necessary information for that business. This process allows our websites to have a certain design element that you don’t see at a lot of other web design companies. None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story? My dad is a marketing consultant and has worked with many hi-profile celebrities including Mary Kate and Ashley Olsen, R.L. Stine from Goosebumps, Jane Fonda, and Richard Simmons to name a few. I tagged along with him when I could when I was young and got to see what goes on behind the scenes of filming, photo shoots, press junkets, VIP travel and celebrity life. My dad definitely inspired me to pursue marketing and I often call on him as a sounding board. How have you used your success to bring goodness to the world? It’s not quite making an impact on the world but as a female business owner, I hope to show my two daughters that women are equal to men. That we can succeed, be smart, and that they can become anyone they want to be. Do you have a favorite book that made a deep impact on your life? Can you share a story? I would say my three favorite books are: 1. The Glass Castle: A Memoir by Jeannette Walls — This is a true story of a woman who had a bizarre childhood with unconventional parents and is now a successful writer and author. 2. Big Little Lies by Liane Moriarty — I was sad when this book ended and thrilled when they made it a television series. 3. Handmaids Tale by Margaret Atwood — This book was written back in 1985 but is very relevant, however more extreme, to issues we face today in society and in government. It is thought provoking and shocking and a story like no other I have read before. What are your “5 things I wish someone told me before I started my company” and why. 
Please share a story or example for each. 1. Get ready to step outside your comfort zone — As an entrepreneur and a lady boss, I have to be a tough cookie sometimes which is somewhat outside of my comfort zone. With employees and with clients I have to stand up for what’s right and what I want for my company. Sometimes I’m afraid of confrontation but I don’t have a choice. I have to ask for what I want and say what I need to say because there is no one else who can do it for me. All women should work on asking for what they want. You will not get anywhere in business or in life if you sit back and wait for things to fall into your lap. 2. Keep an eye on your P&L, especially your expenses. — Know what every expense is and why it is the amount it is. I check my P&L every day and go through my expenses regularly. Even though I monitor my expenses, I often find items that I can either reduce or get rid of entirely, depending on our current clients and needs. 3. You Might Have To Kiss A Few Frogs — When I first started hiring employees I made some poor choices and ended up having to replace people. I have since gotten a better feel for what I should be looking for in the right candidate. For most of the positions I hire for, my first priorities are to look for someone who is creative, has good writing skills and is self-motivated. A lot of the other skills needed for a job can be taught but I can’t teach someone to be creative, write properly or to be proactive. Figure out what you need to look for in a candidate and teach them the rest. 4. Learn how to do everyone’s job — As a business owner and leader, I have to be able to show my employees how to do their job the way I expect it to be done. Also, I have to be able to fill in if that job becomes vacant or if we get too busy. If someone quits or is out or our pipeline is overwhelming, someone has to pick up the slack. As the boss, I can’t expect a manager or counterpart to absorb ALL of the extra work. As the owner of the company, sometimes I have to help out and in order to do so, I need to know what everyone does and how to do their job. Understanding how everyone contributes and how to do their tasks is also valuable for evaluating productivity as well. 5. Be flexible and adaptable to change. — When you own your own business, there is always something popping up and changing. The world of digital marketing, social media marketing and the internet as a whole are constantly evolving. One day one strategy will be the most current and valuable way to market, and the next day it’s completely invaluable and something else is the best and most productive way to market a product or service. As a marketing professional in the digital world, it’s important to stay ahead of the game. I need to constantly educate myself, read and research, share information, and foretell the future. My skills and goals are constantly evolving and changing and I have to keep up. Some of the biggest names in Business, VC funding, Sports, and Entertainment read this column. Is there a person in the world, or in the US whom you would love to have a private breakfast or lunch with, and why? He or she might see this. :-) Ruth Bader Ginsburg — She is the ultimate female role model and one badass B! She is 85 years old and still fighting for what she (and I) think is right. She has made HUGE contributions to women’s rights and gender equality. This was very inspiring. Thank you so much for joining us! 
Note to our readers: If you appreciated this interview, please click on one of the buttons on the top left to post to your twitter, facebook or pinterest. If 2000 people like you do this, there is a good chance this article may be featured on the homepage. : -) If you would like to see the entire “5 Things I Wish Someone Told Me” Series In Huffpost, ThriveGlobal, and Buzzfeed, click HERE.
https://medium.com/thrive-global/you-might-have-to-kiss-a-few-frogs-5-leadership-lessons-with-julie-leffler-of-big-hype-7f22fdc94872
['Yitzi Weiner']
2018-08-05 18:33:33.833000+00:00
['Women In Business', 'Life Lessons', 'Wisdom', 'Entrepreneurship', 'Public Relations']
The Lie (2018) • Amazon [Welcome to the Blumhouse]
The Lie (2018) • Amazon [Welcome to the Blumhouse] A father and daughter are on their way to dance camp, but when they stop to offer a friend a ride, their good intentions soon result in terrible consequences. With their low-budget big-profit formula, Blumhouse Production has been a powerhouse over the last decade. And it’s not just horror they’ve found success with, as they were also behind Whiplash (2014) and BlacKkKlansman (2018). Their latest project, ‘Welcome to the Blumhouse’, is in conjunction with Amazon Studios, offering an anthology of four horrors during October. The Lie is released alongside their psychological-thriller Black Box (2020), but was originally made under the title Between the Earth and Sky and premiered at 2018’s Toronto International Film Festival. Unfortunately, it failed to make any noise due to being overshadowed by A Star is Born (2018) and First Man (2018), resulting in it being put on the back-burner for two years before being picked up by Blumhouse. Loosely based on the 2015 German chiller Wir Monster / We Monster, The Lie was written and directed by The Killing producer Veena Sud. In her second feature-length movie, the Canadian filmmaker examines how far parents will go to protect their children. Jay (Peter Sarsgaard) and Rebecca (Mireille Enos) are divorced parents of 15-year old Kayla (Joey King), drawn back together by tragic circumstances. While driving his daughter to a ballet retreat, Jay picks up her friend Brittany (Devery Jacobs) and, midway through the drive, the girls insist they pull over in a remote area to go to the toilet. When they don’t come back, Jay goes looking for them only to find his daughter distraught after admitting to pushing her friend off a nearby bridge. Jay and Rebecca scramble to cover up their daughter’s crime, but while doing so they find themselves becoming tangled in their own web lies. What holds this family drama together is the incredible work from the cast. Audiences may be familiar with Joey King from lighter fare such as Summer ’03 (2018) and Kissing Booth (2018). After finishing this picture in 2018, the actress went on to earn an Emmy Award nomination for her work in the miniseries The Act (2019), where she portrayed the calculating real-life killer Gypsy Rose Blanchard. She continues to demonstrate her range here, playing sociopathic teen Kayla. Unfortunately, she doesn’t get much to do, but she sells her role as an unnerving adolescent brilliantly. As the stakes keep escalating, Kayla remains nonchalant to her crime. There are several extremely chilling moments that make her complex character hard to pin down. The morning after the incident she’s joyously cooking breakfast for her parents and later she’s laughing at cartoons. Kayla’s a puzzle wrapped in an enigma as she switches from happy to raging. Sarsgaard is wonderful as always, finding himself on familiar ground having played a similar role in Human Capital (2019). He’s the cool dad who plays in a band and will cave into his daughter’s demands. Enos gives a razor-sharp performance as Kayla’s stern mother, Rebecca, resembling a distressed Julianne Moore and enforcing daily routines on her daughter. Sarsgaard and Enos both give nuanced performances portraying the pain of their characters. While bearing the burdens of their divorce, they’re challenged with having to protect their daughter over anything else. Rebecca initially wants to take Kayla to the police, but Jay takes a stance saying “it’s our daughter, we must protect her”. 
However, he only makes things worse as his lies force the family into increasingly dangerous situations. Having previously worked together on an episode of AMC’s The Killing, there’s a natural chemistry between these actors that emanates from the screen. Writer-director Veena Sud has a deft hand creating atmosphere and tone. With a background in crime dramas such as Cold Case and Seven Seconds, she imbues The Lie with similar energy. Taking advantage of the brisk winter setting, Sud wraps the isolated characters inside an ice-cold blanket. Aesthetically, the light-blue tone is wonderfully melancholic and perfect for the story. Peter Wunstorf’s (Meditation Park) cinematography also captures the environment beautifully and evokes Norwegian noir The Snowman (2017). Unfortunately, Sud leans too heavily on a filmmaker’s playbook that undermines what works so well. While mainly taking place in Rebecca’s modernist home, production designer Elisa Sauve constructs the setting with large glass panels, giving Sud the opportunity to heighten the tension in a Hitchcockian manner. Further, it’s pandering as the director sprinkles unsubtle clues that scream ‘Chekhov’s Gun’ throughout… causing the big reveal to fizzle without a bang. After watching The Lie, it’s understandable why it sat on the shelf for two years. The script had the potential to be a clever morality tale about the psychological horrors of keeping a secret. Similar to Luce (2019), Sud could have examined the depths a parent would go to protect their offspring. Alternatively, it could have explored the consequences caused by covering up a child’s crime, echoing We Need to Talk About Kevin (2011). However, The Lie stretches believability almost immediately as each plot point becomes more ludicrous than the next. Due to their actions, each character lacks any redeemable qualities and there’s no drive to care about the family’s predicament. With some additional dark humour, watching the ill-advised cover-ups could have been enjoyable, like Shallow Grave (1994) or Fargo (1996). However, as they repeatedly make outlandish decisions and the collateral damage increases, there’s no suspense or amusement in their actions. The script strives to remind audiences of Big Little Lies (2017–19), but it’s is too mindless to be a hard-hitting drama and too serious to be a dark comedy. The most frustrating aspect is that Sud dangles inexplicable plot devices like a carrot, with no intention of exploring them. There’s a moment when Kayla and her father are sat by a pool having a conversation. While revealing a nasty self-inflicted wound on her arm, Kayla admits “I do it because the boys don’t look at me like they look at Brittany. Am I pretty daddy?” Additionally, Jay and Rebecca attempt to pin the blame on Brittany’s Pakistani father, Sam (Cas Anvar). This brings out the prejudices of two police officers (Patti Kim and Nicholas Lea) during an interrogation — one of which makes a remark drenched in racial profiling. These brief moments could have added so much more depth to the overall story. The director could have tackled subjects such as women valuing themselves through male attention, along with the mental health crisis amongst teenagers. Moreover, racial injustice and how human beings are treated differently based on the colour of their skin. These are poignant themes that would make for some delicious text, but the director chooses to diminish them. 
Despite being repackaged and marketed as a horror-thriller from Blumhouse, The Lie barely flirts with the elements the studio is most strongly associated with. One could argue it’s neither a horror nor a thriller. Tonally it evokes a daytime drama airing on an obscure TV channel. Sud continuously uses cliches such as an unfortunately timed ringing telephone or doorbell to generate any suspense. Having said that, there’s a fleeting moment where it seems like The Lie might turn into something particularly thrilling: as Sam chases after Kayla to learn the truth, there’s a brief home invasion element the director momentarily toys with. Unfortunately, this sequence is the only time The Lie starts to create a shred of tension. In true Blumhouse fashion, there’s an attempt to ramp up the story with a shocking twist, too, but veterans may find it predictable. For all its acclaim, Blumhouse does have a history of disappointing remakes, from Martyrs (2016) and Black Christmas (2019) to Fantasy Island (2020). Unfortunately, Veena Sud’s reworking of Wir Monster can be added to that list. Under the surface are hints of an intriguing examination of the marital divide, teenage mental health, and white privilege… but The Lie fails to tackle any of them enough to create an interesting story. The beautiful visuals and excellent chemistry between the leads are the only things that make this feature bearable. Overall, it becomes a ridiculous account of intelligent people making unintelligent decisions and an idiot’s guide to parenting a teenager.
https://medium.com/framerated/the-lie-2018-amazon-welcome-to-the-blumhouse-33e86e19cd9
["Jonathan 'Jono' Simpson"]
2020-10-11 17:31:57.309000+00:00
['Movies', 'Amazon', 'Review', 'Film', 'Horror']
How To Be More Decisive And Simplify Your Life
How often do you find yourself thinking long and hard about decisions? Most of the time, they are probably decisions that don’t hold any significant value or meaning regardless of whether you make them or not. I am a person who struggles with making decisions from time to time. Actually… a lot of the time. In fact, it took me fifteen minutes to even make it this far in my writing. All of these thoughts kept coming into my mind about whether I was going to talk about the “right” topic, whether I “felt” comfortable enough talking about decision making, and whether I “knew” what I was actually talking about. The more that I thought about these questions and concerns, the more stressed and anxious I became. Admittedly, these questions and concerns sometimes leave me feeling paralyzed while having made no progress at all. Indecision and making choices are something that we all struggle with. There is no one person who doesn’t think about the many possibilities of making a decision before actually making it. As humans, we have a tendency to want to get things right the first time so that we don’t need to spend any more time on a given task than we need to. But have you ever thought about how this impacts your life? Have you ever thought about how your indecisiveness has the potential to increase your stress levels and anxiety? Have you ever thought about how much time of your precious life you are wasting simply by questioning what pair of jeans you should wear to work? The truth of the matter is, I used to be a lot worse when it came to decision making. Am I perfect and crystal clear now when it comes to being more decisive? Absolutely not. And while it will never be easy to make decisions, I have found 4 things that have helped me to be more decisive in life that I want to share with you. 1. Understand your hesitation. Before anything, it’s important to know what your reasoning is for the hesitation in the first place. What is stopping you from making the decision? Is it because you are feeling scared and not capable or prepared to make a confident decision? Is it because you are worried about the consequences if you fail? Worst of all, is it because you are worried about not making the perfect decision? You have heard it before and you will most definitely hear it again, but there is no such thing as perfection. Most of the time when we struggle with making decisions, we struggle because we are worried about perfection. We want to make the perfect decision. We want to be seen as perfect in the eyes of others. And we want to look at ourselves as being perfect. So much so, that we often allow this pursuit of perfection to stop us from progressing in the first place. If we don’t have to make a decision, then surely nobody can judge us for making the wrong one. But is this going to lead you towards a better life? Is the risk of making no decision at all going to bring you any closer towards living a better life? No. So by all means, stop letting your fears determine how decisive you are. Everybody fails, but only those who are willing to fail until they get it right are the ones who succeed. 2. Trust your gut instinct. Another way to be more decisive and to simplify your life is to simply trust your gut instinct. Simon Sinek says, “Leaders are those who trust their gut. They are those who understand the art before the science. They win hearts before minds.” This is why your teachers always told you to go with your gut decision if you didn’t know the answer to a question.
When you trust your gut, you are usually right. The part of the brain that is responsible for gut decisions is called the limbic brain. The limbic brain is responsible for all of our feelings, such as trust and loyalty, and it is also responsible for our behavior and ability to make decisions. When we trust our gut, we are trusting our limbic brain. Everybody’s ability to be more decisive is going to rely solely on their beliefs and on their values. The gut decision that some make probably isn’t going to be the same gut decision that you or I make. It doesn’t mean that others are necessarily wrong or that we are wrong. It simply means that we just have different beliefs and opinions. Regardless, it’s important to follow your gut, aka your limbic brain. It’s been with you through thick and thin, and it has guided you to where you are today. Odds are, it’s probably going to know what’s best for you based on your beliefs. 3. Have confidence in yourself. Confidence is the key to success and to all happiness in your life. If you do not show confidence in your ability and in the decisions that you make, then you are never going to feel worthy, fulfilled, or satisfied. Trust me, I know that it’s a lot easier said than done. If we all knew how to feel more confident, wouldn’t we already do so? Here are two things that I do to help me build confidence, and in turn, be more decisive in making decisions. Small decisions: If a small decision needs to be made — one with little risk or reward — I will try my best to just act. I don’t want to waste valuable time trying to decide on something that doesn’t really matter. An example: if I go to the store and want to buy something for dinner, I could sit there for fifteen minutes before even realizing how much time had passed. To help with this, I try to follow my first desire, grab, and go. I will get whatever food I went to the store to get and won’t give myself the opportunity to browse other options. Big decisions: Big decisions are a little bit of a different story. If a decision needs to be made that has high risk and high reward, surely you are going to want to get it right. For these decisions, I like to envision myself as successful and bigger in general. It’s weird, but in my mind, I imagine being twice the size of everybody else, I imagine myself as being stronger than anybody else, and I imagine the success that I am going to achieve. For me, it just helps me to puff out my chest a little bit more, be more decisive, and take action. We make life and decisions far more complicated than they need to be. Ask yourself something. Is a decision going to impact you in any significant way? If so, envision your success and take action. If not, take action because it doesn’t really matter in the grand scheme of things. 4. Start with WHY. Speaking of Simon Sinek, follow his advice and start with WHY. What is your WHY in life? WHY do you do the things that you do? What is your purpose in life? How is your WHY going to positively impact both you and those around you? Everybody has different beliefs and values. As long as your beliefs don’t cause direct harm to others, there is no need to care about anybody else’s opinions about your beliefs. Take action towards the things that you want to achieve no matter what others think. You have your life to live and others have their lives to live. Clarifying the purpose behind any decision you make is the first step towards taking action in the right direction.
Knowing your WHY will allow you to take purposeful action that leads to results that align directly with your beliefs. Not only that, but the more meaningful your why is and the decisions you make, the more you are able to combat procrastination and eliminate any of the distractions that arise. Be More Decisive It’s important to be more decisive. The more decisive you are, the simpler your life will become. There are still going to be times where it’s difficult to make decisions with the utmost confidence, but the more you practice decisive decision making, the easier it will become. These tips will help you on your journey. Michael Bonnell
https://medium.com/swlh/how-to-be-more-decisive-and-simplify-your-life-86431bbcdc92
['Michael Bonnell']
2020-01-24 20:06:43.214000+00:00
['Personal Growth', 'Decision Making', 'Self Improvement', 'Productivity', 'Life']
How I Learned Data Science And the 1 Course That Changed Everything
How I Learned Data Science And the 1 Course That Changed Everything If you don’t know where to start If you are reading this, you have probably just started on your data science journey and are wondering what course to take to catapult you to the next level. I previously answered a question about this topic on Quora, which got a decent amount of views, so it inspired me to go more in-depth on it. Just like you, I consider myself a beginner in data science as I still have a lot to learn, but I have taken some online courses and wanted to document my process so far, including its ups & downs. I also wanted to share the course that, in my opinion, is the one every beginner data scientist should take when starting out. Let me begin with my story. Why I’m Learning A few years ago, when I was studying civil engineering at the University of Toronto, I would always come across articles depicting how people, just like me, were learning programming and were able to create awesome projects from scratch. I wanted to be like that. At that time, I already knew that engineering was not such a great fit for me as I did not do too well in my technical design courses. I wanted to learn some extra skills so that I could distinguish myself better. So I decided to give programming a try. If other people could do it, it shouldn’t be too hard, right? Starting Out In Python I tried my hand at different programming languages such as Ruby, HTML / CSS, and Java, but finally settled on Python as it made the most sense to me. I started learning using Codecademy’s Learn Python course as it was one of the first things that came up when you searched ‘how to learn Python’. When I first started using it, everything was free. They’ve since added special courses which cost money, but I believe they still have basic lessons which you can use to learn the basics of the language and help you build up your foundations for free. I also came across this Learn Python — Full Course For Beginners YouTube video through freeCodeCamp that takes you through the most important concepts of Python. It is great to be used as a guide to understand Python on a basic level or as a refresher for what you have already learned. You should be able to finish it in a weekend! Another great resource is to be active on coding challenge websites such as Code Wars, where you can use your Python skills to solve coding challenges. You can choose the difficulty of the challenges and look at other people’s code and how they solved it to learn. I used to try to do one each day but had trouble being consistent and eventually stopped altogether. Not being consistent was a big problem I faced. Throughout those few years, I would be inspired to learn/practice, but then lose the enthusiasm and stop for a period of time only to start over again a while later. This cycle would repeat. The back and forth was because I doubted myself and my ability to learn programming. I would give myself excuses like: There are many services out there that can help you code without you needing to know it. I was too old to learn something entirely new and should focus on what I knew. There is so much competition. It’s too hard. Not being consistent and doubting myself cost me a lot of time. I now realize that even a little progress is good as long as you are pushing yourself to learn something every day. Journey Towards Data Science Fast forward a few years, I was working as a construction project manager but felt like something was missing.
I regretted not putting more effort into my programming studies. I was still regularly reading articles about people who self-taught themselves and thought if I didn’t try it myself, I would really regret it in the future. Data science was a trend that I kept hearing about and when I found out that Python was the language of choice for data scientists, I felt it was a sign for me to pursue my learning in that direction. I searched around online on how to learn data science and came across this article, The best Data Science courses on the internet, ranked by your reviews, that shows you a path of courses you can take, from no programming experience, to machine learning engineer. As I felt that I needed to brush up on my Python basics, I started following the path by taking their recommended intro to programming course, Learn to Program: The Fundamentals (LPT1) and Crafting Quality Code (LPT2) by the University of Toronto via Coursera. The courses consist of video lectures that encourage you to follow along, as well as assignments you can work on that engage you while solidifying your skills in Python. They have a great mix of content difficulty and scope for the beginner data scientist. I found LTP1 to be a great refresher course as I had already learnt the basics of Python previously. I did learn quite a few new things that I didn’t know through taking it as well. I love that they teach you how to write code properly for great ‘readability’. LTP2 was quite a bit more difficult and I had some trouble completing the course but I also learned a lot from it. Following the Coursera courses, I continued on the recommended path and decided to take one of the intro to data science courses. I settled on Udacity’s Intro to Data Analysis because I was attracted to their professional looking website and great layout. I don’t know why, but I had a lot of difficulty starting up the course. In the beginning, it introduced me to Anaconda & Jupyter Notebooks and I actually spent a great deal of time trying to understand what it was and how to work it. I didn’t find the explanations on the course very helpful, so I had much difficulty moving forward in it. I find that Udacity’s courses sometimes do not provide enough guidance during exercises/quizzes and can assume you already have prior knowledge of concepts. Because of that I stopped working on the course. So I started to have doubts again. I thought if I couldn’t even get through that intro to data science course, how could I learn data science! That made me feel really down and not know what to do next. The Course That Changed Everything I came across this course while I was checking out other data scientist’s Instagram pages. I was scrolling around and found mention of Jose Portilla’s Python for Data Science and Machine Learning Bootcamp on Udemy. It seemed to be a popular course and people were saying good things about it in comments, so I decided to look further into it. I was skeptical at first, thinking that I’d fall into the trap of a fancy and catchy title. So I did a lot of research by checking its reviews and comparing them with other online learning platforms. It seemed to be worth the money. I decided to take the plunge and dive head first into the course. This is a paid course but you can always find discounts on Udemy so the price can probably come out to around $15. It is well worth it though as it organizes all the main aspects of data science in one package that is easy to follow along. 
This course improved my understanding, skills and confidence in data science tremendously. It thoroughly goes over all the main concepts of numpy, pandas, matplotlib, seaborn, and many others through lectures you can follow along with in Jupyter Notebooks, and then gets you to do a project on your own based on the lecture. You can even re-purpose those projects for your portfolio! Check out my Github that includes projects from the course. A big chunk of the course is dedicated to introducing you to machine learning as well and how you can get started in it. I like how the course focuses on implementation and not theory, as that is how I learn best, by doing. It does go over theory, but more briefly, so you can get enough understanding to apply it to projects. I am still working on the course and hope to use the knowledge I’ve gained to work on my own projects!
https://towardsdatascience.com/how-i-learned-data-science-and-the-1-course-that-changed-everything-16912ccbab2b
['Oscar Kwok']
2020-02-13 22:24:03.942000+00:00
['Motivation', 'Education', 'Data Science', 'Programming', 'Towards Data Science']
If you are new at Medium and have few or no followers…
My first story got 18 reads. My 2nd story got 7. My 3rd story got 2. Which, I suppose, is better than none. But, the downhill trend wasn’t looking good. After that, I didn’t write here for 10 months. When you’re new, and no one reads what you write, it can feel a little sad. If you were just writing for yourself, why not just save it on your computer? It would get the same mileage, but with less of the accompanying sad. We writers aren’t just writing for ourselves. When we’re fingers on keyboard, sure. But once we hit the publish button? There’s a whole new hope... Please God, let someone like what I wrote. So, I thought I’d share a couple of tips… It’s hard for people to like your writing if they don’t even know you’re out there — If a tree falls in the forest…you know? Start with tags. Follow tags that interest you. Follow tags that you’d write about, too. Think of it like finding your people. Brave it up and write your first story. Don’t worry if no one reads it. Remember, tree in the forest… but you want something for people to read when they do find you. Follow people whose writing you like. Some of them will follow back. Share the love. Click on the green heart when you like a story. Leave comments to tell the writer what you liked, too. It really does help. Some people say the little green ❤s don’t matter. They’re wrong. They do. Because “reads” are nice — but according to Medium, the little green hearts are how we tell writers we liked what they wrote. See? A little secret about liking and sharing... Those little green hearts made SF Ali Medium’s resident cheerleader. When you show appreciation for the people writing here, some of that love will be returned. That’s just how it works. Treat others how you want to be treated. It’s the old Golden Rule. Old because it works. Always has, always will. Also… If you’re brand new, or could count your followers in fingers and toes, feel welcome to paste a link to your first story in the comments here. If the comments don’t blow up, I’ll come say hi and welcome. If you liked this or found it helpful, please click the ❤ to share. Thanks!
https://medium.com/linda-caroll/if-you-are-new-at-medium-and-have-few-or-no-followers-c47819a36754
['Linda Caroll']
2017-01-23 18:02:00.607000+00:00
['Writing', 'Hello World', 'Writing Tips', 'Medium', '100 Naked Words']
I’m Booking A 6-Day Vacation This Summer
Work-life balance is an enormous struggle for me. To one extent, I have made peace with the fact that I never have been good at balancing anything and I don't expect to ever become some multitasking maven. Yet, as a single mom who works from home by writing online, I can't exactly ignore the quest for balance either. Instead, I'm trying to be realistic about where I'm currently at and where I want to be. Case in point? A couple of weeks ago, I took a mini vacation and secluded myself in a hotel for 2 nights. During my break, I took advantage of doing nothing but relaxing and I didn't write a thing. There was a very good book I began reading in the spa called The Power of Off, and I ordered myself a copy. I was so inspired to return home with a better focus on work-life balance... yet I haven't even taken the book out of the envelope it arrived in. I came home resolved to work less and balance better, yet I still haven't even opened the book about balance to finish reading it. There are countless occasions where I simply don't do the work it takes to achieve whatever results I want. Why? Because every goal requires some amount of sacrifice, and that's not always something I'm willing to do. Balance doesn't magically happen because we decide we want it. We actually have to work to make it happen. That's kind of a bummer. I don't know about you, but my natural state of being is pretty much as imbalanced as one gets: stay up late working, wake up too early to work some more, forget about self-care, and put off housework. It's not healthy, but it is my natural inclination. However, it's also up to me (alone) to find a way to earn a good living and raise a great kid. And part of that means giving us each some downtime. My little vacation was brief, but it showed me how much I want to get away more often and give my daughter the same opportunity. As it happens, my brief stay in the Waldorf Astoria left something to be desired. There were a few issues... like finding a cockroach in my room (twice), having a spa treatment canceled because the therapist didn't show up, and getting charged for using the private bar for a bottle of champagne I never took. For my very first experience in a 5-star hotel, those problems made a dent in the overall experience. Not that the staff wasn't extremely apologetic. They were, but it simply wasn't enough to resolve those issues right then and there. But I've been emailing with the hotel manager who assures me that my next stay will be dramatically better. And I'm willing to try, particularly since the Waldorf Astoria has offered us a free night and my next spa service on the house. To that end, I'm planning a 6-day vacation for my daughter and myself at the same hotel in early July. And honestly? I'm scared as hell about it. Six straight days without working. Six straight days not making money... But spending it. Shiver. It's hard to put my finger on the real reason taking a vacation scares me. But I suppose it's tied up with a ton of guilt. I feel guilty about spending money on experiences because I grew up dirt poor. Spending money on vacations definitely isn't something that poor people do. And downtime? That's honestly not something I do anymore. Not since I began relying upon my own writing to support me and my kid. Before I began working on my own writing career, I used to have downtime when my social media work was done. In between caring for my daughter, of course. I used to watch TV. And play games on my phone. I used to just veg. 
I even used to make more time for dating. But once I realized I could actually earn a living with my own writing, it was like a fire was lit within me. When it comes to writing these days, I feel like I can't stop/won't stop until... Well now, I'm not exactly sure. How are you supposed to know when to slow down or take a break as you try to build your own writing career? When do you think, "I've arrived?" Or how about, "I can relax?" I've been working on this writing gig for a year and I quit my job only 5 months ago. I work excessively as much as I can because 1.) I believe it will pay off in the end, and 2.) I'm pretty sure I need to work hard in expectation of the more meager months. That's pretty damn important anytime you have an unpredictable source of income. The good news is that I can even plan a vacation at all. I'm lucky that my work is flexible enough to allow me to take it anywhere. I am fortunate to choose my vacation days, even if I'm terrible about actually taking time off. Ultimately, it's an opportunity for me to get better with balance. And to practice perspective. Taking the vacation means spending money that could have stayed in my savings account. And it means pausing some of my progress as an online writer. That's its own sort of sacrifice but in return? I gain quality time with my daughter, new experiences and memories, and more. It's an opportunity to live in the moment, have fun, and take a time out. Sure, I'll still be on mom duty. My daughter and I will be busy at LEGOLAND and The World of Coca-Cola. Frankly, Atlanta is filled with museums, so we'll have tons to do and it will probably be pretty exhausting. But I think it will be a good kind of exhaustion. And I do plan to visit that spa while my daughter is with a sitter. At any rate, it will be a new experience and something that will ultimately make me proud to have accomplished as a single mom. That might just be enough reason to toss out the guilt, don't you think?
https://medium.com/home-sweet-home/im-booking-a-6-day-vacation-this-summer-1098930b71a1
['Shannon Ashley']
2019-04-29 19:02:38.719000+00:00
['Motherhood', 'Lifestyle', 'Work', 'Parenting', 'Writing']
What is happening with solar energy?
IMAGE: E. Dans Anyone who follows developments in the energy sector will know that solar energy is no longer just the future but the present. According to the International Energy Agency’s World Energy Outlook 2020, photovoltaic solar energy is already the cheapest source of electricity in history. We are not talking about the future, but about the present, about current installations. Under these conditions, the fact that solar energy was able to cover the entire demand in South Australia for the first time on October 12 should not surprise us: you can bet we will see this repeated in many more places, on many more occasions and for increasingly longer periods. The progressive increase in efficiency and decrease in the cost of photovoltaic panels is turning solar energy into the logical alternative for electricity generation. What’s more, the technology continues to evolve, and there are still emerging possibilities, such as perovskites, which promise substantial efficiency increases. As a result, solar panels can now be fitted anywhere: covering water canals in India, on canopies over Germany’s autobahns, or on school roofs in the United States. When the economic variables of a technology change in this way, creating an oversized electricity generation grid based on solar and wind is the logical alternative, and whoever does not do so will be relegated to less efficient and, above all, dirtier energy sources. Solar and wind are the present and future not so much for environmental reasons as for economic ones: the British government admits that solar and wind energy have proved between 30% and 50% cheaper than initially estimated, adding that renewable energies were able, during the first quarter of 2020, to cover no less than 47% of the country’s total electricity demand. In Germany, the figures are similar: between January and June this year, 42% of electricity consumed was generated by the sun and the wind. Decarbonizing energy and the economy is the only logical way forward. Solar and wind energy, supplemented with storage technologies, are already the most efficient way to feed the energy fabric of a country. The mathematics makes it clear: electric vehicles powered by batteries recharged by solar energy are the logical alternative to the internal combustion engine. Everything else you thought you knew about energy, the myths and legends about alternatives that were only viable thanks to subsidies, or the alleged problems with supply when the sun was not shining or the wind was not blowing, is either wrong or outdated. Update yourself, and act accordingly. For your sake, and for everyone’s.
https://medium.com/enrique-dans/what-is-happening-with-solar-energy-91601fdd44b1
['Enrique Dans']
2020-10-24 08:48:44.068000+00:00
['Solar Energy', 'Sustainability', 'Solar', 'Energy', 'Energy Efficiency']
IBM Automation Decision Services wins a 2020 platinum Spark Design Award
The IBM Automation Decision Services (ADS) design team is closing out 2020 on a strong note as a Platinum-level winner of the Spark Design Awards in the Digital category. As we plan to head into the winter holidays, I couldn’t be more excited to see our designers recognized for the hard work they put in this year. The Spark Design Awards is an international design competition that seeks to “promote better living through design.” The evaluation criteria focused on innovation: does the design “Spark,” that is, innovate, change the game, and in some way help humanity or the environment we live in? When their global jury of design experts looked at submissions, they first and foremost assessed whether the design broke new ground and improved quality of life. How Automation Decision Services breaks new ground See how ADS enables business analysts to work smarter Business users have to analyze documents, extract the policies, and ask their IT department to change the rules in the application code. This is a complicated and lengthy process. IBM Automation Decision Services (available as part of IBM Cloud Pak for Automation) helps business users make business decisions through a simple interface that requires no coding. Because conditions evolve, business users can also quickly change the way decisions are made and stay aligned with important business requirements. Decisions are made with well-known conditions and well-defined policies. But they increasingly require you to have a view into the future. ADS makes the link between AI predictions and concrete decisions simple and actionable for workers without AI expertise. Setting up users for success ADS gives business users an introduction to the primary concepts of diagram building The design team worked to ensure non-technical users had all of the information and guidance they needed to succeed. ADS’ clear, simple, and compelling visual layout provides meaningful engagement for users. They also provided sample projects and instructive empty states to give users a starting point and show them how they can use decisions in specific domain contexts. Every detail matters. Empty states are a simple yet powerful way to keep a user informed and supported. The user can easily complete a preconfigured predictive model with the help of a tutorial to configure the invocation to Machine Learning. The team conducted extensive user research to understand the pain points and needs of their existing customers. As the project progressed, we continued to apply the IBM Enterprise Design Thinking framework and involve sponsor users to help us verify our hypotheses, and to gain valuable feedback on our designs. In the initial steps of the design process, they regularly shared wireframes with sponsor users to get early feedback. To ensure a great user experience made it to production, they also organized several usability tests with early versions in live code. Helping businesses through uncertain times As businesses constantly adjust and readjust to meet public health needs, IBM is proposing ADS to companies that need to decide in real time how they can make working conditions safe and anticipate risk. Through design that has democratized decision-making, this team has given business users the potential to help their companies and colleagues with some of their greatest challenges to date. Winning team
https://medium.com/design-ibm/ibm-automation-decision-services-wins-a-platinum-spark-design-award-c259b90c467
['Arin Bhowmick']
2020-12-09 05:17:25.874000+00:00
['Design', 'UX Design', 'Automation', 'UI', 'UX']
Netflix vs Deepfake: The Irishman
How Does De-aging Technology Work? According to the film’s VFX supervisor, Pablo Helman, 1,750 shots were required for two and a half hours of footage. Carefully placed on-set lighting captured the actors’ facial performances from different angles, while at the same time shining infrared light on the actors’ faces without being seen by the production camera. Thus, the system was able to analyze the lighting and texture information and create a machine-readable geometry network for each frame. Working with multiple cameras is indispensable in the process. While shooting, they work with a three-camera rig whose central camera is the director’s camera. The other two cameras are there to record data. Why? “Because we don’t have markers on their faces, the more data we have the better the chance to recreate performance. So the software will keep an eye on these three cameras and the information from them,” Helman explains. So more data means a more realistic and accurate image. De-aging technology is not a technique Hollywood is unfamiliar with. We know that it was used in many films before The Irishman (2019), such as The Curious Case of Benjamin Button (2008), Captain America: Civil War (2016), and Blade Runner 2049 (2017). Although this technology has managed to create an astonishing illusion today, it is not yet perfect. But when it is, it is not difficult to foresee that cinema and acting will take on a new dimension. You can check out this video if you want to learn more about how these scenes were made.
https://medium.com/predict/netflix-vs-deepfake-the-irishman-1d4754de2701
['Mustafa Yarımbaş']
2020-10-30 13:37:06.860000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Film', 'Netflix']
Creating themeable design systems
Themeability is a largely overlooked feature when creating a design system. That’s unfortunate, as retroactively adding themeability is an arduous process. Themeability is a complex feature in that it requires forethought when architecting the design library and the coded component library — but more so because it takes a bit of orchestrating between the two to get it just right. What is themeability? Themeability is a feature of design that allows you to easily change the look and feel throughout the entirety of a design, or design system. Dark mode, product-specific look and feel, white labeling, wireframe components to design with, and a marketing look and feel are all examples of problems a theme can solve. Often, a separate design system — coined a local design system by Spotify Design — is created to deal with this splintering of essential UI aesthetics. Most likely, a new design system is overkill and all you really need is a new theme. However, this requires an engineered solution that’s able to apply the theme throughout the entire design system. The theme needs to be available when designing and when coding, and it needs to be easily shared throughout the organization. What exactly is a theme? A theme is a variable system that determines stylistic features of a design’s visual atoms. These theme variables — which Salesforce calls design tokens — are referenced when building your design. Beyond just colors and other constants, a theme can also contain component-specific styles. This is possible by creating a multi-tiered variable system, as defined by Brad Frost. He breaks it down into 3 tiers: brand definitions, high-level application variables, and component-specific variables. With a multi-tiered system like this, you have a lot of power over your design from the theme. But with great power comes a lot of complexity.
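To make those three tiers concrete, here is a minimal sketch in plain JavaScript; the token names and values are illustrative assumptions rather than part of any particular design system:
// Tier 1: brand definitions (raw values; hypothetical names)
const brand = {
  blue500: '#0b5fff',
  gray900: '#1a1a1a',
  fontSans: 'Inter, sans-serif',
};
// Tier 2: high-level application variables that reference the brand tier
const appTheme = {
  colorPrimary: brand.blue500,
  colorText: brand.gray900,
  fontBody: brand.fontSans,
};
// Tier 3: component-specific variables that reference the application tier
const buttonTokens = {
  background: appTheme.colorPrimary,
  textColor: '#ffffff',
  fontFamily: appTheme.fontBody,
};
// Swapping the tier-1 values (for dark mode, white labeling, and so on)
// cascades through the application and component tiers automatically.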
https://uxdesign.cc/themeable-design-systems-313898c07eab
[]
2020-02-27 01:04:12.610000+00:00
['Design', 'Design Systems', 'Figma', 'Visual Design', 'UX']
Change an Application’s Theme Dynamically in Vuetify.js
Change an Application’s Theme Dynamically in Vuetify.js Build and change themes to fit your app’s different purposes Photo by Lukasz Szmigiel on Unsplash. Vuetify is a Material Design component framework for Vue.js. It provides a lot of features, including the ability to programmatically change theme colors in your application. Enabling users to change themes is a good customization feature that gives the app a more personalized feel. In this article, we will create a Vuetify web application that can dynamically change between themes at runtime and also switch between dark and light modes. To start, we will create a new Vue.js application. We can do this through the Vue CLI on the terminal: vue create dynamic-theme Once that is done, we add Vuetify to our newly created app by changing our current directory to our app folder and running the following command: vue add vuetify Now we can run our application in development mode: yarn serve Open the application folder with the IDE of your choice and create a new Vue component in the src/components folder named ThemeChangerMenu.vue. This component will just be a standard Vuetify menu that will hold our theme choices and the dark mode switch. All the theme-changing logic will also be contained in this component. Let’s start by adding the v-menu component to our template. We will use a button with an icon to open the menu and display our theme choices in a v-card component: Let’s add a switch that will toggle between dark and light modes. To do this, we bind our switch to the $vuetify.theme.dark variable with the v-model directive, which will create a two-way binding with the variable: To display and test our menu, import ThemeChangerMenu.vue into App.vue and place it inside the v-app-bar component: We should now have a functional dark mode switch: OK! We can now move on to the next part, which is to enable toggling between different predefined theme selections. Create a new data property called themes that holds an array of themes: As you can see from the snippet above, we are storing an array of objects, each with a name and color definitions for dark and light variants of the theme. Then we display them on our menu: The only thing we’re missing now is the setTheme method, where we’ll place our theme-changing logic: When a theme is selected from the menu, we close the menu and then iteratively set the theme colors for both the light and dark variants. We also save the name of the theme, which will let us know which theme is currently selected. Now our theme changer is fully functional: Well, that’s all for this article. You will find the source code in this repository. Also, check out the demo on GitHub. Please share your comments or suggestions in the comment section.
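The snippets referenced above are embedded separately in the original post, so here is a rough sketch of what the themes data property and the setTheme method could look like with Vuetify 2’s options API; the theme names and colors below are made-up placeholders, not the author’s exact code:
// ThemeChangerMenu.vue, <script> section (a sketch under the assumptions above)
export default {
  name: 'ThemeChangerMenu',
  data: () => ({
    menu: false, // controls whether the v-menu is open
    selectedTheme: '',
    themes: [
      {
        name: 'Ocean',
        light: { primary: '#005f73', secondary: '#0a9396', accent: '#94d2bd' },
        dark: { primary: '#94d2bd', secondary: '#0a9396', accent: '#005f73' },
      },
      {
        name: 'Sunset',
        light: { primary: '#9b2226', secondary: '#bb3e03', accent: '#ee9b00' },
        dark: { primary: '#ee9b00', secondary: '#bb3e03', accent: '#9b2226' },
      },
    ],
  }),
  methods: {
    setTheme(theme) {
      this.menu = false; // close the menu
      // iteratively set the colors for both the light and dark variants
      Object.keys(theme.light).forEach((key) => {
        this.$vuetify.theme.themes.light[key] = theme.light[key];
      });
      Object.keys(theme.dark).forEach((key) => {
        this.$vuetify.theme.themes.dark[key] = theme.dark[key];
      });
      this.selectedTheme = theme.name; // remember which theme is active
    },
  },
};
The dark-mode switch itself only needs the two-way binding described above, for example a v-switch with v-model="$vuetify.theme.dark".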
https://medium.com/better-programming/changing-application-theme-dynamically-in-vuetify-js-c01d640699c4
['Eyuel Berga Woldemichael']
2020-09-08 17:52:42.395000+00:00
['Vuejs', 'JavaScript', 'Programming', 'Vue', 'Software Development']
How to use Lambda, filter, reduce, and map in Python Programming
Some like it, others hate it and many are afraid of the lambda operator. We are confident that you will like it when you have finished this chapter of our tutorial. If not, you can learn all about “List Comprehensions”, Guido van Rossum’s preferred way to do it, because he doesn’t like lambda, map, filter and reduce either. The lambda operator or lambda function is a way to create small anonymous functions, i.e. functions without a name. These functions are throw-away functions, i.e. they are just needed where they have been created. Lambda functions are mainly used in combination with the functions filter(), map() and reduce(). The lambda feature was added to Python due to the demand from Lisp programmers. The general syntax of a lambda function is quite simple: lambda argument_list: expression The argument list consists of a comma-separated list of arguments and the expression is an arithmetic expression using these arguments. You can assign the function to a variable to give it a name. The following example of a lambda function returns the sum of its two arguments: f = lambda x, y: x+y f(1,1) The advantage of the lambda operator can be seen when it is used in combination with the map() function. map() is a function with two arguments: r = map(func, seq) The first argument func is the name of a function and the second a sequence (e.g. a list) seq. map() applies the function func to all the elements of the sequence seq. In Python 3 it returns a map object (an iterator) with the elements changed by func, which you can convert to a list: def fahrenheit(T): return ((9/5)*T + 32) def celsius(T): return (5/9) * (T-32) temp = [36.5, 32, 38, 40] F = list(map(fahrenheit, temp)) C = list(map(celsius, F)) print(F) print(C) By using lambda, we wouldn’t have had to define and name the functions fahrenheit() and celsius(). You can see this in the following interactive session: Celsius = [36.5, 32, 38, 40] Fahrenheit = list(map(lambda x: (9/5)*x + 32, Celsius)) C = list(map(lambda x: (5/9)*(x-32), Fahrenheit)) Filtering The function filter(function, list) offers an elegant way to keep only those elements of a list for which the function returns True. The function filter(f,l) needs a function f as its first argument. f returns a Boolean value, i.e. either True or False. This function will be applied to every element of the list l. Only if f returns True will the element of the list be included in the result list. >>> fib = [0,1,1,2,3,5,8,13,21,34,55] >>> result = filter(lambda x: x % 2, fib) >>> print(list(result)) [1, 1, 3, 5, 13, 21, 55] >>> result = filter(lambda x: x % 2 == 0, fib) >>> print(list(result)) [0, 2, 8, 34] Reducing a List The function reduce(func, seq) continually applies the function func() to the sequence seq. It returns a single value. If seq = [ s1, s2, s3, … , sn ], calling reduce(func, seq) works like this: At first the first two elements of seq will be applied to func, i.e. func(s1,s2) The list on which reduce() works looks now like this: [ func(s1, s2), s3, … , sn ] In the next step func will be applied on the previous result and the third element of the list, i.e. 
func(func(s1, s2),s3) The list looks like this now: [ func(func(s1, s2),s3), … , sn ] Continue like this until just one element is left; this element is returned as the result of reduce(). >>> from functools import reduce >>> reduce(lambda x,y: x+y, [47,11,42,13]) 113 Determining the maximum of a list of numerical values by using reduce: >>> f = lambda a,b: a if (a > b) else b >>> reduce(f, [47,11,42,102,13]) 102 Calculating the sum of the numbers from 1 to 50: >>> reduce(lambda x, y: x+y, range(1, 51)) 1275
https://medium.com/the-innovation/how-to-use-lambda-filter-reduce-and-map-in-python-programming-be567ddfc20c
['Jack Dong']
2020-09-24 10:48:26.653000+00:00
['Lambda Function', 'Python', 'Data Science', 'Programming']
Managing AI-Powered Products - Important Principles by Wilson Wong
There has been a lot of talk in recent years about the use of AI capabilities or machine-learned solutions to improve digital products or services and user experience. If we can cut through the hype and have the necessary building blocks in place as covered in Four Hurdles To Creating Value From Data, there is legitimate value to be had from the use of data and AI to make better products in many verticals. For tech companies that are founded on strong engineering practices and find themselves knee-deep in data, such as Uber, Twitter and Tesla, the use of AI techniques to extract value is likely to be already part of their culture. As for others that face a different set of circumstances and yet still see the need to adopt data and AI to remain competitive, these companies may not know where to start, or they underestimate the changes that are involved. Photo by Ross Findon on Unsplash The previous CEO of GE, Jeff Immelt, shared from his experience during the digitization of the industrial world that the transformation involved more than just software and technology people. He emphasised that success was predicated on the various roles along the value chain, such as product managers and salespeople, thinking and operating differently. While the context was different back then, many companies now find themselves in a rather similar situation trying to leverage big data and AI — a situation where the return on the investment depends on more than just hiring a few data scientists, and buying and rolling out the first “AI technology” that they come across from vendors. In this short article, we turn our attention to product teams and their embrace of data and AI to make better products. We discuss the key differences in what makes up an AI-powered product and how teams should approach the building and managing of their software solutions. We begin by understanding how (or where) AI fits into the broader make-up of software products. We highlight the single biggest difference in the make-up of an AI-powered product when compared to more typical applications. This distinction is what unlocks the value from data as discussed in Investing in Data and AI — When and Why. We conclude with a discussion of three principles that product managers and their cross-functional teams have to keep front of mind when thinking and planning. These principles exist to help reinforce the virtuous data cycle that fuels the AI capabilities behind the products. Difference with AI-powered products, structurally A typical make-up of a software product The pyramid diagram above forms the basis of our quick discussion. The actual number of layers, the terminology and the definitions aside, it is rather common to visualise and discuss the make-up of an application or software product in this logical fashion. If you have a background in designing and building software, a multi-tier diagram such as this should be all too familiar. The purpose or composition of each layer is described in the diagram. At a high level, the UI layer is what the users of your product see and interact with. The actual ability to execute commands, evaluate the input and produce the output resides in the logic layer. The base data and core services layer is what everything else sits on. If the UI is the sensors and the looks, then the bottom two layers are the brain. 
As the role of AI in software product is still evolving, this diagram shows the current prevailing make-up The second diagram above shows where the AI capabilities tend to fit in to an application. The layout above is based on my experience of leading teams and being intimately involved in building and incorporating prediction, enrichment, and search and recommendation capabilities into software products for both consumers and enterprises. As you must have noticed by now, the single biggest difference between the two diagrams above lies in the logic layer. The first part of the difference involves the gradual and thoughtful replacement of aspects of the existing logic and workflow that can be automated, simplified or optimised with models learned from data. For instance, assume that you currently rely on heuristics to suggest other items to your customers after they have purchased something (e.g., if a cheap item in category X was purchased, try to cross-sell a randomly selected cheap item in category Y). This logic is a good candidate to be machine learned and has the potential to improve with more data. In another example, imagine there is a step involved in the content posting flow whereby humans review the content to ensure certain fields conform to a standard. This step can be (semi-)automated to reduce the wait time before items can appear in the marketplace. The second part of the difference is about introducing new capabilities to make the application behave “smarter” in ways that were not previously possible or feasible due to the intractable nature of the solution using human defined rules or other reasons. For example, a seller can be given real-time suggestion about the category and the price for the item they will be posting to maximise the chances of attracting buyers. Prediction services can also be developed to infer shoppers’ interests and preferences which can in turn be used in recommendation engines to customise the items the users see. Principles for managing AI-powered products In addition to the distinction in the architectural sense of AI-powered products, how product managers and their teams think about building and moving their products forward will need to adapt as well. In this last diagram below using the same pyramid structure, I highlighted three important principles that product teams need to consider and practice. These principles are here to help unlock further potential of the underlying AI capabilities of the product. Three principles to help unlock further potential of AI-powered products Over the years, I have come to realise that while they are rather intuitive for product managers with a strong data science and engineering background, these principles tend to be unfamiliar to the many others. This can create misalignment between product managers and the data scientists they work with and hampers their effort to solve user problems with AI and data. This was discussed at length in Are You Able To Match The Right Data Science Solutions To Problems? In my opinion, the adherence to the principles to be discussed here is what sets apart product teams that are prepared to leverage AI from the rest. The first principle as spelled out in the diagram above is, if by asking the users for certain data we can improve our AI capabilities, we should. As an example, ML models can get better with feedback from users. Imagine that there is a search product that currently allows the users to flag certain results as not suitable for them. 
This signal is already being used to tailor the items that the users see. If the data scientist working on it has a strong hunch (and potentially some evidence) that the ML models can improve by getting more specific negative feedback from the users, then the team should seriously consider it. This way of thinking and the ensuing conversations between the data scientist, the product manager, the designer and perhaps the engineer as well can be hard to come by. The reasons can be many. The common ones include (1) the data scientist can be reluctant to present the idea because they do not know how or fearful about the response they might get if they do, and/or (2) the product manager is simply unaware of the opportunities or what data gaps are there that can be filled. The second principle is, for everything that we allow the users to do, we should think about how the data that comes from that can be used to improve the experience, not only for themselves but also for others. This covers all types of data that either the users explicitly provide or we track about how they use the product, with their consent of course which is typically part of the terms of use. With more traditional software products, the value of data is limited to (1) supporting the proper functioning of the human-defined rules in the logic layer and any associated business processes that sit outside of the software, and (2) product analytics, reporting and perhaps A/B testing. Hence, it is not surprising that this principle can be overlooked by product managers or even engineers. If the data scientists are not involved early on in the design phase, it is also unlikely that the team will practice this principle. Moreover, depending on how the data are being used, the value of some of them may take a bit of time to eventuate. This can also be a problem for certain product managers who do not maintain longer term views. The third and last point, which can be more of a reminder than a principle, is to always be cognizant of any dependencies on the data and infrastructure layer. This calls for tight collaboration between engineers, architects and data scientists throughout the designing and development of AI capabilities. For instance, if an algorithm requires access to a graph database to find the nearest neighbours given an item, is that infrastructure readily available? In another example, imagine that you have an ML model that predicts the category for an item, is it more appropriate to use the model to categorise new items posted by sellers on-the-fly or in bulk. While the first two principles may call for close partnership between product managers and data scientists, this last one calls for strong engineering practices.
https://medium.com/swlh/managing-ai-powered-products-65a1c0c036d0
['Wilson Wong']
2020-12-10 13:23:19.948000+00:00
['Product Management', 'Data Science', 'Product Team', 'Artificial Intelligence']
11 Funny Software Licenses You Might Have Never Heard Before
10. The Death and Repudiation License: No use by living persons allowed. Copyright (c) 2003 why the lucky stiff This software is subject to either of two licenses (BSD or D&R), which you can choose from in your use of the code. The terms for each of these licenses is listed below: BSD License =========== Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. D&R (Death and Repudiation) License =================================== This software may not be used directly by any living being. ANY use of this software (even perfectly legitimate and non-commercial uses) until after death is explicitly restricted. Any living being using (or attempting to use) this software will be punished to the fullest extent of the law. For your protection, corpses will not be punished. We respectfully request that you submit your uses (revisions, uses, distributions, uses, etc.) to your children, who may vicariously perform these uses on your behalf. If you use this software and you are found to be not dead, you will be punished to the fullest extent of the law. If you are found to be a ghost or angel, you will be punished to the fullest extent of the law. After your following the terms of this license, the author has vowed to repudiate your claim, meaning that the validity of this contract will no longer be recognized. This license will be unexpectedly revoked (at a time which is designated to be most inconvenient) and involved heirs will be punished to the fullest extent of the law. Furthermore, if any parties (related or non-related) escape the punishments outlined herein, they will be severely punished to the fullest extent of a new revised law that (1) expands the statement "fullest extent of the law" to encompass an infinite duration of infinite punishments and (2) exacts said punishments upon all parties (related or non-related).
https://medium.com/javascript-in-plain-english/11-funny-software-licenses-you-might-have-never-heard-before-87e702d31388
[]
2020-12-03 08:11:48.809000+00:00
['Coding', 'Programming', 'Software Engineering', 'Software Development', 'Web Development']
20/20
Free for commercial use, DMCA I’m waiting for this fever to break God knows we’ve all had As much as we can take The circus clowns are drunk And screaming obscenities Everybody’s sick of The incessant dread If hope don’t break through this time It’s all over I’m waiting for this fever to break People yanking levers across the sea Will decide our fate Masks to stalk tragedy Masks hide hilarity Masks for sanity Masks because you never really knew me Masks to bury me Masks for the new me Soon something new will be revealed Hope it’s not a tragedy Again The screens have it in for me I am waiting for this fever to break A lifetime of waiting Insisting Persisting This fever must break One day soon Like a levee A pouring forth A tsunami Lost light creativity Healing everybody A thousand thousand possibilities Renewed Remaking me Holy electricity The whole world is saved I am waiting for this fever to break Through me.
https://medium.com/an-idea/20-20-9b6d54bb6b6b
['John Horan']
2020-10-30 15:36:10.899000+00:00
['Poetry', 'Elections', '2020', 'Coronavirus', 'Hope']
Error Tracing With ES6 Classes and Sentry
How to Handle Sensitive Information As a company, we do not want personally identifiable data from our customers being shared with a third party. These tools are a way for us to help debug and trace back through a user journey to improve our product, and users trust us not to share this information. There are a few ways that we can go about protecting ourselves, but one example I will give today is how we can implement our own “deny” or “block” list. Let’s make some small updates to our SentryError.js and index.js files. For index.js , let's update the info passed into main to include some dummy user data (and my public email): Let’s say that we do not wish to share the name, user email, user’s manager email, or their address, but we do want to keep the ID for debugging issues. We can add a helper method to our class and set up a denyList that we can use in this method to recursively alter our breadcrumb data. Why keep denyList outside of the class? There is no particular reason, but I find it makes it easier to write unit regex tests if this is abstracted, and it can be used for other third-party block lists you may want to set up. redactSensitiveInformation could also have been pulled out of the class for the same reason if it was reusable elsewhere. Update SentryError.js : redactSensitiveInformation uses the power of recursion. We basically want it to recursively check through an object to redact information that matches a regex. This means that the following: …will become redacted to the following with our current deny list: denyList.some iterates through our regex array and if any regex matches, it will return "true." This helps us identify from our list which data to redact. Let’s run node index.js again and confirm this in Sentry. Victory!
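Since the updated SentryError.js snippet is embedded separately in the original post, here is a rough sketch of the denyList and the recursive helper described above; the class shape, the exact regexes and the field names are assumptions, not the author’s exact code:
// Kept outside the class so the regexes are easy to unit test (an assumed list)
const denyList = [/name/i, /email/i, /address/i];

class SentryError extends Error {
  constructor(message, breadcrumbData) {
    super(message);
    // Redact before the breadcrumb data ever leaves our code
    this.breadcrumbData = SentryError.redactSensitiveInformation(breadcrumbData);
  }

  static redactSensitiveInformation(data) {
    // Primitives (strings, numbers, null, etc.) pass through untouched
    if (data === null || typeof data !== 'object') return data;
    return Object.keys(data).reduce((redacted, key) => {
      if (denyList.some((pattern) => pattern.test(key))) {
        redacted[key] = '[REDACTED]'; // key matched the deny list
      } else {
        // Recurse so nested objects (e.g. a manager object) are covered too
        redacted[key] = SentryError.redactSensitiveInformation(data[key]);
      }
      return redacted;
    }, Array.isArray(data) ? [] : {});
  }
}
With the dummy user above, fields whose keys match name, email or address would come out as '[REDACTED]', while an id field would be left intact for debugging.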
https://medium.com/better-programming/error-tracing-with-es6-classes-and-sentry-116b3c95946b
["Dennis O'Keeffe"]
2020-08-13 15:35:08.874000+00:00
['Tutorial', 'Nodejs', 'JavaScript', 'Programming', 'React']
10 Signs Your Mother is a Narcissist
10 Signs Your Mother is a Narcissist Is it you or her? You’ve always wondered what’s wrong. Is it me or her? A narcissistic mother can cause you to doubt yourself. Her manipulation is often skilled and subtle. She doesn’t brook disappointment, or dissent. Top Ten Signs That Your Mother is a Narcissist She wants to control you. Trying to assert yourself results in anger, rejection and hostility. She doesn’t appreciate your attempts to individuate as it means you are going to be less available to serve her needs. Does she get angry when you disagree or don’t want to do what she wants you to do? Does she try to make you feel guilty for having separate interests, hobbies, desires and opinions? 2. Her love is conditional. A mother who is narcissistic is interested in how you (and your achievements) reflect on her. She wants you to succeed, but only so that she looks good. She may even become jealous if she feels you are doing too well. Daughters of narcissistic mothers will often be perfectionistic in a misguided attempt to win their mother’s love. 3. She can’t or wont validate your feelings. There is very little room in her emotional consciousness for your feelings. If they do something that upsets you, narcissists generally won’t be prepared to acknowledge their mistake or soothe your upset. They are too focused on trying to manage the shame elicited by your implied criticism. She may sometimes be there if you need support, but most often she will turn it around so that it becomes about her. For example: “That reminds me of the time…” “You think you have problems, I remember when…” “I can’t listen to you when you’re like this, it upsets me…” “I do/have done everything for you, why can’t you appreciate it, you ungrateful…” 4. She belittles you. A narcissistic mother will be full of praise in one moment, hypercritical and judgmental the next. They can make your head spin! A narcissistic mother knows where it hurts. She will often use sarcasm or belittling language to humiliate you, perhaps in front of others. She may fob off your concern with excuses such as “can’t you take a joke?” 5. She tries to manipulate you. The manipulation can be quite subtle, causing you to question your doubts and fears. She may call you “selfish” because you don’t want to be her maid or chauffeur 24/7 Being afraid to say no to her because you fear her disapproval or anger is definitely not a good sign. 6. She thinks she is above the rules. Narcissists prefer not to have to follow the rules that apply to us lesser mortals. The sense of entitlement that accompanies narcissism can manifest in expectations of special treatment. She might try to get out of a parking ticket through manipulation or flirtatious behavior, then she gets angry. She can embarrass you in the takeaway line at your favorite coffee shop. If she is not allowed to jump the coffee queue or secure her favorite table at a popular restaurant, she may become disproportionately angry. 7. She is unpredictable. Narcissists often wax and wane in terms of their attention and availability. She may shower you with affection and attention (love-bombing) when she wants something from you and ignore you when she is going OK. Her ability to care about you is dependent on her own needs rather than any genuine commitment to you as a separate and autonomous being. 8. It’s all about how things look. 
Because they are largely dependent on social cues to manage their self-image, narcissists will be focussed on how things appear, and most importantly, how they appear to those whose opinion matters to them. Narcissistic mothers will generally like to appear socially successful, keeping a nice-looking home, wearing expensive clothes and hobnobbing with the rich and famous. Your mother might spend a lot of time trying to impress the neighbors, her employers and others whom she considers worth her time. 9. She cannot see your point of view. In general, narcissistic mothers will be unwilling to understand or even acknowledge your point of view. She may ignore, belittle or undermine you, often using manipulation or guilt-tripping to get her way. 10. She is emotionally volatile Narcissists are often emotionally unstable, swinging between cold rage and collapsed fragility depending on environmental cues. Mothers with these characteristics have very low self-esteem underneath their bluster and will become teary or desperate if they meet ongoing resistance. I THINK MY MOTHER IS A NARCISSIST, WHAT CAN I DO? When you have grown up in a narcissistic environment it can be hard to have any perspective. Often children of narcissists will adapt to the parenting they receive, losing contact with their authentic self. They are so used to being exploited and dominated they don’t know how healthy relationships work. If you have come to the conclusion that your mother is a narcissist, then the best option is to talk it through with someone you trust. She probably won’t change unless she sees it as being in her interest. Confronting her may be cathartic, but it generally won’t change anything and it may make things worse. The fundamental problems which cause narcissism are not something that can be fixed through self-reflection, although that would be a good start. People suffering from narcissism tend not to seek therapy, unless they fear that they will lose something important to them or reach a crisis point. Staying in contact with a narcissistic parent is a choice. If you decide that you want to stay in contact with your mother, you will need to accept that you may never receive the acknowledgement you long for in your relationship with her. You will need to validate your own feelings and accept the grieving process that accompanies a realization of her profound limitations. For daughters of narcissistic mothers it can be a long road to recovery. Because they have grown up under the tyrannical rule of a woman with severe character flaws, they will often have a depleted sense of self. It can take a lot of work in therapy to gain the self-awareness and compassion that will help heal your neglected inner child. But there is hope. Therapy can help you start the journey to a more joyful life. Sign up for your free ebook here.
https://medium.com/family-kids/10-signs-your-mother-is-a-narcissist-71b1d9c6cf1f
['Amanda Robins']
2020-07-28 07:00:33.515000+00:00
['Narcissism', 'Mental Health', 'Narcissistic Mother', 'Family', 'Trauma Recovery']
Remembering a Night Sky
I have two childhood memories about the stars. The first was a warm summer evening when my sister, brother and I talked our parents into letting us sleep outside. We spread our blankets and pillows on the balcony and lay down under a night sky filled with stars. In hushed voices we searched for meteors and counted satellites until, slowly and one by one, we drifted off to sleep. The second memory was with my Dad. If we were outside at night he had a habit of looking up at the sky — to check the phase of the moon or find a planet. One night, we were returning home from soccer practice (or maybe we were walking our dog — details have faded with time) and he did what he usually did. He stopped to look up at the night sky. But this time he had a question for me. Did I know how far away the stars were? I remember it was a bitterly cold evening. Our feet crunched on frosted ground and his frozen breath hung in the air. When I stood next to him and also looked up at all those stars, I was thinking, ‘this is a question that might have a long answer,’ which meant staying out here in the cold instead of going on into the house where we could sit near the fireplace and drink hot chocolate. And then he explained. The light from the stars that we see now has been travelling for thousands and sometimes millions of years to reach us. “If,” he said, pointing at one part of the sky, “that star is a thousand light years away and it exploded today, we wouldn’t see the explosion for another thousand years.” I was staggered. I forgot I was cold. This truth about the size of our universe unfolded in my mind like a flower before the sun.
https://medium.com/descripter/remembering-a-night-sky-e47b85eab9ed
['Craig Brett']
2020-08-26 08:14:18.779000+00:00
['Life Lessons', 'Space', 'Life', 'Science', 'Parenting']
How to Work from Home Effectively on Quarantine
These days we’ll be spending a lot of time online. Some of us are stuck home by choice and others because of the law. Whatever your situation might be, one thing is for certain, we have to try to make the best out of it. Working from home for such a long period of time is a first for many people, and some are getting easily bored, unmotivated or too distracted to work as effectively as they might’ve hoped for. That’s why I’m compiling a list of tips and recommendations to improve your home office routine and stay safe online as much as in real life. Take it from me, I’ve been working from home for more than a year and I still remember the unexpected hardships I faced at first. Let’s get into it! Conditionate Your Home Office Here’s the thing, when you’re working from an office you have a strict schedule and a boss who’s constantly checking on you, but when you’re doing it from home you’re the only boss of your time. It sounds like a good thing, and it can be, you have more freedom after all, but it can also lead to a loss of your usual performance on the job. Take care of your time. You can wake up whenever you want, work on your pajamas all day and take a nap if you’re tired. If there’s no one watching it’s also easier to procrastinate with social media, YouTube and Netflix. Try to do this for a week or just a few days and you’ll lose a lot of your momentum, leaving everything for the last minute and rushing your work to reach a deadline. Those small distractions can amount to a whole lot of time when you combine them all together, and they’ll leave you feeling unproductive in the long run. What can you do online and offline to improve your home office routine? Improve your Offline Habits Let’s start with some offline habits. First, you need a self-imposed schedule. Without a schedule, you’ll be wandering around through your day trying to get things done without clear goals. Make a list of all the things that you need to accomplish this week, then distribute them day by day. Only set two or three big important things at most for a single day. If you try to put too much on your plate you’ll end up doing nothing right. Once you’ve done this set a time for each task and divide those big goals into smaller goals that are achievable within a few hours. Using a physical or online calendar is a good way to keep track of your progress. For example, if you need to write an article or a marketing campaign, you can divide those tasks into pages. Page 1 from 9:00–10:00, Page 2 from 10:00–11:00 and so on. Make sure to write this down on paper or digitally and check every task you complete, that way you’ll feel the progress as you go through it. You also need to establish some breaks within every few hours so you can take a breath, relax and go back to work with a fresh mind. Remember that your schedule should also limit your working hours as if you were still in the office. Many people tend to work until the late hours of the night just because they can or they feel they haven’t made enough progress. I recommend you respect your own time and treat yourself just like a good boss would treat his employees. Besides working on a daily schedule you also need to work on your habits around the house. Take a shower and get out of your pajamas before you start your working hours. Why? Because you need to tell your brain that you’re not here to sleep. If you can, try to separate a room for work and another for pleasure. 
It’s not a good idea to work a few feet away from your bed as it will be a tempting distraction whenever you get stressed. If you need a break, by all means, take a nap to recover your energy. However, you should have a specified time to do so on your schedule, otherwise, it will be counterproductive. Productive Solutions for your Online Habits Okay, so you’ve got your home workspace covered, now let’s get into your actual working tool, your computer. The internet is a vast place where you can have one tab with your work open and five other tabs with funny memes that your friends sent you. How do you avoid procrastination then? This will not be easy, of course. We tend to do it even when we’re working from the office, so it will take some will power. Here’s my advice, just like you need to separate one room for work and one for pleasure, you should do the same with your browsers. For example, if you tend to use Google Chrome to consume all your entertainment, then download another browser like Firefox or Safari and use that one specifically for work. Train your brain to separate work from pleasure and apply that to your browsing experience. Stay away from distractions. Here comes the hard part, your phone. Yeah, you need to get rid of that. I know that you probably have to answer some messages or emails from work, but you can do that from your computer anyway. The thing with your phone is that maybe you answer a message from work, but then it leads you to see a notification for that new game you were expecting or that comment on your Facebook post, which will finally lead you to a downwards spiral into procrastination. Leave your phone outside the room you work in or at least outside your immediate reach. Have it with the sound on in case you’re expecting a call, but otherwise don’t pay attention to it until you’re on your break. Trust me, this will increase your productivity like nothing else. Another thing, when it comes to emails you should be extra careful during this time. The COVID-19 pandemic has increased the concerns of most users and any news about the topic are easy bait for malware. There’s no magical cure for the coronavirus, don’t fall for it. Hackers are exploiting the fear of the virus to spread misinformation and malicious software that will infect the computers of their victims, so you should protect your devices from these threats. If you would like to know more about this topic, you can read our previous article “Hackers are Using the Coronavirus to Spread their Own Malware”. Another distraction, while we’re trying to focus on our work, are the annoying ads we tend to encounter while doing research on the web. If you want to get rid of ads and malware for free, you can try our own solution Online.io. We know a thing or two about blocking nasty stuff, so take it from us. You can stay home and take care of your health while we take care of your online security. 👉 Try it out for Chrome, Brave or Firefox. Hopefully, this advice was helpful to get some motivation and work done from home effectively. And remember, it’s not all work. Try to separate some time to relax, spend time with your loved ones and make the best of this situation. Stay healthy and help to flatten the curve! #StayHome Want to know more about us? 🔥 Check out our Website for updates! 🐦 Follow us on Twitter. 🗨️ Join our Telegram Group. 📢 Give us a shout-out on Facebook.
https://medium.com/online-io-blockchain-technologies/how-to-work-from-home-effectively-on-quarantine-69b3d48481ae
['Tyler B.']
2020-03-23 16:33:34.638000+00:00
['Work From Home', 'Quarantine', 'Work', 'Productivity', 'Covid 19']
Sarah Fuller, Owned the Football Kick
Sarah Fuller, Owned the Football Kick A poem on the first female kicker who broke the male phenomena Photo by Dave Adamson on Unsplash Photo by CNN Sarah Fuller, the Vanderbilt kicker awoke one morning as a football player by the day, her kicking changed her life became the first female, to score a Power Five college football game with a new name, the Vanderbilt Kicker one kick was all it took, but a life of training paved the way, as she kicked her way into sports’ history when opportunity knocked, her foot answered the door and by the end of the play, Vanderbilt Kicker, she now reigns. For additional reads:
https://medium.com/illumination/sarah-fuller-owned-the-football-kick-e929ca9d068e
['Ep Mcknight']
2020-12-15 14:04:21.381000+00:00
['Self Improvement', 'Startup', 'Life Lessons', 'Sports', 'Education']
Instant Gratification & Data Science
Instant Gratification & Data Science When we want it? NOW A Change in Habit We live in a world of instant gratification where old habits are slowly but surely fading away. In a world of Netflix, we abandoned the habit of going out and actually buying/renting a movie. Why all the hassle when we can start watching at the click of a button from the comfort of our home? When we have platforms like Spotify and SoundCloud, why go out and buy records? Even food home delivery has gotten faster than ever. This raises our expectation of instant gratification from other services and aspects of life as well. The instant gratification effect Don’t get me wrong, it’s not necessarily a bad thing. It pushes all the other industries to find a relatively innovative way to attract attention and an audience. Take the gaming industry for instance; it’s slowly moving to cloud game streaming, meaning you just log in and start playing, no installation, no download whatsoever; again, instant gratification. It’s time to find out how data science has coped with today’s fast-moving world. An Unaddressed Issue in Data Science Data science has come a long way and we are dependent on it in more ways than ever. More tech giants, like Google and Microsoft, are investing in it and some have perfected it, while SaaS companies are stepping up with more innovation. However, one factor still hangs over it that no one has been able to address yet: accessibility. While these data-analytics platforms, like R, Python, SQL, SAS, SPSS, etc., are very versatile and robust, it’s very hard to get started unless you have very strong knowledge of either statistics or programming, and sometimes both. This can be very daunting and restricts accessibility. The Solution We Deserve This is where Enhencer comes in. It’s a predictive analytics SaaS platform that addresses the accessibility problem. Now let me explain how they innovatively answer that. Enhencer takes a different approach by reducing all these to 3 steps. That’s insane in a good way. This is where their innovative ways pay off. Once you upload the data to Enhencer, it cleans the data, then creates relevant features, trains many models, and chooses the best one based on the model performances automatically. Enhencer uses Machine Learning Algorithms to achieve this. All you need to do is connect the data and all these things are taken care of automatically. In about 3–5 minutes (depending on the data size) you are presented with all the predictive analytics results, like predictions, prediction errors, and model performances, as in the picture below. Then all you need to do is implement the model in your own system. Enhencer Predictive Analytics Output Destroying the Accessibility Barrier This answers the accessibility issue that others failed to address. Enhencer takes away all the traditional data-analytics steps and makes it as easy and fast as it can be. It’s the groundbreaking innovation that the data-science field has silently been craving for a long time. Enhencer: Requires no coding whatsoever. Everything is drag and drop and interface based. Requires no statistical or programming knowledge. Requires only a few minutes to take you from data to results. It can be accessed both through the cloud platform and through a local installation of the software. While everything is provided automatically by default, if something is not to your liking then by all means you can go ahead and tune the models yourself. 
Instant Gratification & Data Science Enhencer is here to change how we think about data science, how we analyze our data, how fast we can analyze it, and how accessible analytics is, so that anyone can get into it. Why spend hours, if not days, analyzing data when we can do the same with a few clicks in a few minutes? That is the instant gratification we deserve from data analytics and predictive analytics. Break the old habit and embrace the new, faster way. If you want more information: Visit their website: https://enhencer.com/ See a case study here: https://medium.com/@tayibgetup/churn-prediction-in-5-minutes-1c24602fd9f3 Read some interesting blogs about data science: https://enhencer.com/blog See how to upload data to the Enhencer platform:
https://medium.com/a-world-full-of-data-science-powered-by-enhencer/instant-gratification-data-science-7ba79cd9cdd
['M Ahmed Tayib']
2020-06-18 08:16:01.898000+00:00
['Machine Learning', 'Data Analytics Tools', 'Artificial Intelligence', 'Data Science', 'Predictive Analytics']
Your Time and Talent. What’s It Worth?
Your Time and Talent. What’s It Worth? I speak with consultant and coach Ted Leonhardt about smart marketing and negotiating strategies that will help you earn what you deserve Illustrations by Ted Leonhardt A true story: In my quest to find the right kind of clients — organizations with interesting projects and good budgets — I redesigned my website and posted about it. Instead of calls from prospective clients, the response was a barrage of emails from a well-known web provider offering “free consulting by top marketing experts.” After about the 200th email, I broke down and dialed the 800 number. What were they really offering? The friendly salesperson asked a few questions and then recommended a Facebook business profile page. Really? He assured me that clients for high-end design services are looking for designers on Facebook. Really? And that for only $199 a month (cancel any time, money-back guarantee), they would create an awesome Facebook business profile page for my firm. Not only that, included in the cost was an ad campaign that would put me directly in front of targeted decision-makers including CEOs and communications directors. I took the bait and sent them my credit card number and ten images. The results were so embarrassing that I immediately removed the cover photo they’d made from cheesy stock images of globes and atoms. I was totally embarrassed by the way-too-promotional posts and ads. The thing I like least about Facebook is getting ads in my news feed, and now my ads were in my friends’ news feeds! I apologized publicly, deleted the posts, and tried to make the page more relevant and useful. After 30 days — with not one inquiry or call from a CEO or anybody else — I cancelled the service. And got them to refund my $199. What is the RIGHT way to attract clients? The right person to ask is Ted Leonhardt. Photo of Ted by Mike Folden A former big-agency head based in Seattle, Ted is a top consultant to creatives. He helps people learn good negotiating skills so they can make more money. I mean, a lot more money. In Ted’s world, designers get $195-an-hour gigs and projects for which the client readily agrees to a $150,000 fee. He tells all in a sweet little book, Nail It! Stories for Designers on Negotiating with Confidence. It’s a book that’s relevant to anyone who needs to negotiate to get their fair share of money and respect. Not just print and web designers, but anyone looking for a full-time job or part-time gig. Topics include real-life stories that illustrate strategies for negotiating job offers, salaries, contracts, and raises. Ted writes: “Don’t be intimidated by negotiation. Nail It shows you how to stand your ground and ask for what you’re worth!” Ted and I recently enjoyed several Zoom calls, and I’d like to share his point of view with you: Q: Ted, everyone wants to find clients now. If the good clients aren’t looking at Facebook ads or posting their requirements on online contests, where are they to be found? A: Good clients do post their requirements on online contests. A freelance designer client of mine uses one of those sites himself. He posts the requirements of an assignment he’s working on, uses the results to broaden his own view of possible solutions, and shares the results, along with his own solution, with the client. Ah, but that’s a different use. A smart one. What is your definition of a good client? Good clients are a pleasure to know and work with. Good clients have assignments you love to be a part of and contribute to. 
And they have the money you need, both for income and for respect, so you know that you’re valued. How can we find clients like that? The best clients will find you. Good clients search for designers who have a reputation for doing the kind of work that they want and need. To be found by good clients, your community must be aware of your reputation. In my experience, the designers and small firms that don’t have enough work simply have not done a good job of getting the word out. I’ve encountered this situation many times. The firm, or freelancer, is kept busy for an extended period by a couple of clients, often for a few years or more. Then for one reason or another, the work dries up. While they were busy, they were too busy to do any self-promotion, so their community is simply unaware of them. In the short term, the solution is to reach out to the opportunities that are most likely to provide work: clients in the same category as your past clients, individuals you’ve met through your work who already appreciate your expertise, clients you meet through professional and industry associations, and so on. In the long term, you need to create a continuous chain of outbound messages that lets your community know how your expertise helps people and businesses succeed. Most creatives aren’t natural self-promoters, and that can make this task seem difficult. But I’ve found that if you think of self-promotion as your next creative project — a challenging problem to solve and one that’s every bit as interesting as any client assignment — the effort can be fun. And of course, the results — a few inbound calls — can be very motivating. You are a major advocate of storytelling. How does that work in the context of self-promotion? Here’s the formula: You do great work for your clients. From doing the work you gain insights and examples. Then you create stories about how the insights you gained from doing the work helped your clients achieve success. Post the stories in places where your community will see them. The stories can be rendered in any form that’s digital: videos, images, cartoons, narratives, or whatever tells the tale. They just have to display in a compelling manner how your expertise helps others. The result of this effort will be what I call ‘inbound opportunities’ — phone calls, emails, texts — from prospects that are somewhat qualified because they are responding to your messaging. This forms a virtuous cycle in which your work and messaging lead to opportunities for more work. Once a prospect contacts you, what’s the next step? Your priority is to differentiate yourself from your competitors. But first, spend some time qualifying the prospect and the assignment. You need to see if there’s a good fit. Do this in person, via Zoom or Google Hangout. By phone, if you have no other choice, but never through email. Why not through email? Because, to win the gig, they must like you. You must develop a personal relationship with them to win their trust. That’s the hook, and that won’t begin to happen if they can’t see your face or at least hear your voice. When you speak in person, ask: “I’d like to ask you a few questions to see if there is a fit. Is that all right with you?” “First, how did you hear about me? What was it that prompted you to call?” You want to know if someone referred you or if they are responding to your outbound messaging. Referrals can be extremely powerful. If the referral was to you only, that is very differentiating. Asking reminds the prospect of that.
If they included you in their search because of something they saw or read, your questioning needs to uncover what it was, precisely, that got their attention, because that, too, is differentiating. 3. Ask: “Tell me as much as you can about the assignment.” You need to know what their expectations are for the project. Asking questions helps them see your experience and expertise in action. Questions also show your genuine interest, and as a result are very flattering. Questions honor their knowledge and expertise. 4. Ask: “What do you expect to spend on the project?” “What is your schedule?” and “What do you need us to deliver?” The answers will give you new insights on their expectations that will further the conversation. Or it may be that they are not qualified and you will choose not to pursue the relationship further. Now they’re hooked. But we’ve gotten to the tough part. Clients almost always refuse to state a number. How can you get them to reveal their budget? There is nothing more effective than telling them what a project will cost to get them to reveal their budget! If their answer seems unreasonable, ask: 5. “How did you arrive at that number?” Remember, if the opportunity is a good one, before you even get to the budget, you need to inspire them. That is the most important step. Inspiration. That is where your creative skills will be most effective. From your questioning and the ensuing conversation, you’ll sense who they are. You’ll feel what they are feeling. You’ll intuit how they would like to see their future, and how the project you create for them will help bring that future to life. You can demonstrate all this when you describe the opportunity in a way that dramatizes the results they are seeking; when you explain what you can add to the effort that will make their project a success. Start your inspirational remarks with, “In my experience…” and keep it short. You need them to know that you and only you will approach this project in your unique way and get the results they need. Inspiration is how you show them your passion and win. Then you can talk budget, schedule and deliverables. And then, if necessary, you can successfully negotiate to get the time and the money you need to do the job right. Right? Yes! However, when designers ask about the budget, clients often say, “We don’t know. You tell us.” And then when you state a number you think is reasonable, they say that it’s way too much. Or, they might agree and then ask for a proposal. When they get the proposal, there are no more ‘inbound calls’ from them. I often suspect that they agreed to the number just to get the proposal and now they’ll take every idea you proposed and use it — or find someone else to do it for less. Sometimes they get proposals from several designers and cherry-pick everyone’s ideas. That’s why I advise qualifying the prospect! And I always advise summarizing in person before giving them anything in writing. Summarizing costs, schedule, and deliverables extends the conversation and gets an immediate response, allowing you to adjust as required or decline the assignment long before writing a proposal — saving you a lot of time and effort. If they push back on your summary, ask a few more questions, clarify, refine your approach, and summarize again until you both agree on what’s to be accomplished. Let’s say they do state a number. How can you get them to understand that the $300 they had budgeted is not enough, and that they need to spend more?
Ask again: 6: “Can you explain how you arrived at the $300 budget?” Maybe it’s an appropriate figure and you could actually do something good for $300. Or maybe not. In any case, you want to know if $300 is all there is before, not after, you put in more effort. What scope of project is worth $300 to both the designer and the client? I’m figuring two to four hours. Doesn’t a number that low start the relationship on the wrong foot? My Photoshop tech routinely helps me make a photo glorious and prints it out large for $300. We have a great relationship. As a matter of fact, I’m heading out to his office this afternoon to have him add his touch to some giant wall prints. His fees for a couple of hours will be in the low hundreds. When it comes to the larger projects that design firm principals need, are there general price guidelines that can be used as a reference? The Graphic Artists Guild Pricing & Ethical Guidelines is the best source. I’ve used it for many years. Also, just Google, “What should I pay for…” Or ask anyone you know in the industry. Seriously? I just googled, ‘What should I pay for a logo design?’ Site #1 said, “One should expect a simple logo design to cost approximately $200… a logo design with intricate patterns and fonts will cost twice as much as a simple design. Expect to pay around $400 for a design of this type.” Another site advises owners of startups to do it themselves by picking a nice font and a color. A third site has a chart with $200 at the low end and $1,000,000 for ‘world-famous designer.’ Yep, it’s all over the board. And yes, you can pick a color and a font all on your own. And if you have good taste you might do all right. It’s all a matter of context. If the client wants to design it themselves or pick an off-the-web solution, that’s okay. Because if they can’t see the difference between your work and the off-the-web design, they will never need you. Move on. Back to the original dilemma, do you think there’s value to having a business Facebook profile, and that clients might be looking for designers on Facebook? Yes, yes and yes! Just design your own page! FB, LinkedIn, Google+, Skype and Twitter are all excellent for getting the word out. I’m a fan and frequent user of all, and get a great response in return. Thanks to them, and to Amazon, Kindle, Apple and iTunes, I have connections all over the world and clients in South America, Europe, and all over the USA. And so do many of my consulting clients. Thank you, Ted. Let’s hope readers follow your advice, and that it works for them. Readers, let me know! I really want to get responses to this story, and I’ll keep talking to experts in order to offer more advice Medium readers can use.
https://ellenshapiro.medium.com/your-time-and-talent-whats-it-worth-24d92e1cc195
['Ellen M. Shapiro']
2020-02-27 17:45:45.437000+00:00
['Money', 'Marketing', 'Negotiation', 'Job Hunting', 'Freelancing']
Understanding the Confusion Matrix and How to Implement it in Python
Introduction Anyone can build a machine learning (ML) model with a few lines of code, but building a good machine learning model is a whole other story. What do I mean by a GOOD machine learning model? It depends, but generally, you’ll evaluate your machine learning model based on some predetermined metrics that you decide to use. When it comes to building classification models, you’ll most likely use a confusion matrix and related metrics to evaluate your model. Confusion matrices are useful not just in model evaluation but also in model monitoring and model management! Don’t worry, we’re not talking about linear algebra matrices here! In this article, we’ll cover what a confusion matrix is, some key terms and metrics, an example of a 2x2 matrix, and all of the related Python code! With that said, let’s dive into it!
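Since the full walkthrough is not reproduced here, the sketch below shows the core of it as one might expect it to look: fitting a simple classifier with scikit-learn and reading off the 2x2 confusion matrix. The dataset and model choice are assumptions for illustration, not necessarily the ones used in the original article.

```python
# Minimal 2x2 confusion matrix example with scikit-learn (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall:", recall_score(y_test, y_pred))
```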
https://towardsdatascience.com/understanding-the-confusion-matrix-and-how-to-implement-it-in-python-319202e0fe4d
['Terence Shin']
2020-05-13 03:15:50.164000+00:00
['Machine Learning', 'Statistics', 'Artificial Intelligence', 'Data Science', 'Programming']
Build a Login System in Node.js
Save Users to Database The user schema and model As usual, you have to define a schema before creating documents and saving them to the database. First, create the models directory and, inside it, a file called user.js, so that its path is models/user.js. First, let’s create a schema in user.js. Next, we’ll create a model from that schema. In models/user.js: This means that your model will have a name, an email, its associated password, and the date of creation. On line 20, we’re using that schema in your User model. All of your users will have their data in this format. Handling the POST request to the register directory: validation checks In this step, you will create a document in the MongoDB database with the client-provided data. At the beginning of the routes/users.js file, import the Mongoose library and the User model that you just created: const User = require("../models/user.js") Now, in the routes/users.js file, find the following piece of code: Handling the register POST request Within this handler, write the following code: Line 1: Extract the values from the form elements. You are taking out the email, the user’s name, and their password. The last field (password2) is a validation check to ensure that the user has confirmed that the password they typed is correct. Lines 4–6: If any of the fields has not been filled in, add an appropriate message to the errors array. Lines 8–10: If the passwords don’t match, add the appropriate message to the errors array. Lines 13–15: Check whether the password has a minimum of six characters. Lines 16–23: If the errors array has any content, re-render the register.ejs page and send the appropriate data along with the errors array. All of the contents of this array will be displayed on the page. Lines 24–36: You have successfully passed the validation! Now you will first check, via the email, whether a user already exists in the database. If there is one, re-render the register.ejs page and display the relevant error. Otherwise, create a new user with the provided email, name, and password. Bear in mind that you haven’t saved it yet. You will save it after encrypting the password. As an example, let’s run the code and enter a single-character password so that it re-renders the page: Output of the code after you deliberately enter the wrong options You now need to display the errors, that is, display the contents of the errors array. Display the errors Now go to views/register.ejs and find the following piece of code (it will be near the beginning of the file): Code to find at views/register.ejs When you’ve found it, write the following line just under this h1 tag: <%- include ('./partials/messages') %> This line imports a new file that will help display the messages.
Go to your views directory, create a new directory called partials, and within partials, create a file called messages.ejs. There, write the following code. views/partials/messages.ejs <% if(typeof errors != 'undefined') { %> <% errors.forEach(function(error){ %> <p><%= error.msg %></p> <% }) %> <% } %> This code says that if there are any errors present, then display each element of the array in its own paragraph (p) element. Re-run the code and enter a single-character password. This will be the output: Notice that you have now displayed the error message If you leave any of the fields empty, then you will get this error message: Two error messages displayed in this form Make both password fields contain different values: Three error messages displayed in this form Handling POST requests to the register: save the users to the database In this step, you will save the registered users to the MongoDB database. You will first encrypt their passwords and then save them. In routes/users.js, find the following piece of code: Code in routes/users.js Within this else block, write the following piece of code: And on that note, end the else statement here. Lines 2–7: Generate a salt and hash the user’s password. Assign this encrypted value to the user’s password and then save the user to the database. Line 10: When the document is saved without errors, redirect the user to the login directory, where they can now log in with their credentials. This will be the output of the code when you register yourself as a user: Code output Since you have logged the contents of the document to your console, here’s the output in the console: Contents of the newUser document Enter an already registered email. As expected, you get an error telling you that the email has already been stored: Code output showing that you have an error With this step, you’ve finally saved users to the database. Let’s move on to creating flash messages with Express. At the end of this section, this is what your code looks like:
https://medium.com/better-programming/build-a-login-system-in-node-js-f1ba2abd19a
['Hussain Arif']
2020-07-21 16:42:32.821000+00:00
['Programming', 'JavaScript', 'Startup', 'Nodejs', 'Cybersecurity']
The Most Important Number in Your Medium Stats Isn’t What You Think
Stats are funny animals. Don’t look at them, most people will tell you. And for sure, don’t obsess on them. You know why they say that, right? Because the ones you’re looking at don’t matter. And the ones you aren’t looking at do. For example, never mind the green bars… they’re the flying monkeys from the Wizard of Oz. All they do is distract you so you stumble around lost and confused. Also, they’re spastic. Hard to make sense of them because they’re all over the place. One weird day with 2500 views can make all the other days look pathetic by comparison. Like this… Never mind the fans, too. You know what happens if you pay attention to fans? You end up writing what you think people want to read. Oh, that article did so well, I should write about that topic more. Sound familiar? I’ve been down that road. It sucks. Because “what people like” is all over the darn place. It’s a crap shoot. Just look at the popular reads. You’ll see. Ignore the number of fans. Views Don’t Cut It, Either… Wow… that got a LOT of views. I should write about that more!!!!! Right? No. Views are another crap shoot. Maybe you wrote for a big publication and whammo — lots of views. And then you write for that same publication again, but someone with twice your audience publishes at the same time and pffft. Views are affected by a lot of hidden factors. Like who else published at the same time. Like whether you got curated, and in which tags. Where you got placed in the publication or whether someone with a big audience shared. Drumroll… The Most Important Number in Your Medium Stats is… Read Rate Here’s what read rate means. You had them. They saw the title. They clicked. And then what? Read rate is kind of like retention rate on a website. It astounds me to see the number of people who bust their buns trying to “drive traffic” to a website that has a 69% bounce rate. Why would you do that? Bounce means people who “look and leave.” Or, as a Google Analyst put it, Look, puke, leave. Know what happens when your read rate is consistently low? People get to remembering. Because they’ve clicked your stuff before and now they know. Oh. Her. Yeah — her. Her writing is always rambling and boring. Nah, I won’t click. Fool me once, shame on you. Fool me thrice, shame on me. People get to know what to expect. If you hope to improve as a writer, read ratio is how you learn… Now the sleuthing begins. Because you had them. They clicked. Why did they lose interest? Was the writing boring? Did you start slow and they didn’t have the patience to slog through? Was it rambling and pointless? Was the title misleading? There’s only 3 ways to grow as a writer… We all want to grow as writers. If we didn’t, we wouldn’t be publishing in a public space. We’d write in a notebook and be done with it. Right? And there’s only 3 ways to grow as a writer. 1. Persistence & Dogged Determination If you show up to the keyboard often enough, you will improve. You cannot help it. That’s the theory behind that 10,000 hours thing we love to talk about. But it was never about the number of hours. It’s simply that when you do a thing over and over, you get better at it. 2. Luck We all love when luck shines on us. A big publication features your work at the top of the publication page. Or you get curated into the best and most popular tags, and not just one of them. Someone with 50K followers shares your work or gives you a shout out. Luck is a lovely lady, but she can also be fickle. Luck teams up with persistence really well.
You know the old saying, luck often shows up wearing overalls and looks a lot like hard work. Yes, some people get lucky on their first post. But the more you show up, the more likely luck is to become a factor, too. 3. Self Analysis Taking a step back to look at your work is perhaps the surest way to accelerate growth. Because there are millions of reads happening here every day. Can you ask for more generous feedback? Because, imagine you hit that sweet spot where 10,000 people click your title. There’s a world of difference between a 75% read rate and a 40% read rate. You don’t even need to pay attention to haters and trolls to get feedback to learn how to be a more engaging writer. It’s right there in your stats. Two simple steps. 1) Go look at the pieces with the best read rate. 2) And then look at the pieces with the lowest read rate. Can you learn anything? I bet you can!
https://medium.com/linda-caroll/the-most-important-number-in-your-medium-stats-isnt-what-you-think-872455b98b70
['Linda Caroll']
2019-08-19 16:32:54.531000+00:00
['Statistics', 'Writing', 'Advice', 'Writing Tips', 'Medium']
Write for An Idea
Write for An Idea Let your idea touch many hearts! Photo by Aaron Burden on Unsplash An Idea (by Ingenious Piece) is an online publication that doesn’t limit itself to a specific subject. It is interested in anything that is genuine, comes from the heart, and can benefit others, from writers all over the world. We encourage writers to come up with articles reflecting their own thoughts and opinions. Each writer is free to work at her/his own pace; there are no commitments on how frequently articles are to be submitted, but please note that we accept only drafts. We help authors publish their work in a simple and splendid way by following the guidelines below: Short Captivating Title with Sub-Title (Guideline No. 1) The title should contain informative words about the central topic to catch readers, and the subtitle helps connect with them more quickly. As an example, check the title and subtitle of this article (as shown in the screenshot below): Screenshot by Author Make sure to use title case for the title and sub-title. Images (Guideline No. 2) Images help articulate thoughts, so using high-definition photos will make the article more gripping. Please include at least one high-definition image and use it as the featured image of your article. Photo by Mark Harpur on Unsplash Below are some guidelines related to using images (A) Always include the source of the image It is always good to include the source of the images you use, to make their licensing requirements immediately clear. In general, this info will look like: [photo by (Name of source)] For example, (B) Using images which are licensed as free for personal and commercial use Please go ahead and use such images, but make sure to provide their source information (as suggested previously). You can use the next section to find sources from which you can obtain such images. (C) Using an image with a purchased license Please include the details below along with the image: Licensed Provider The Name of User (D) Using your own images That’s awesome! Please go ahead and give credit to yourself (mentioning your name or using the term “author”), such as: Photo by Author Photo by Author <name> Photograph Copyright <name> (E) Using images which are NOT free for personal and commercial purposes, or whose license you haven’t bought, or which are not your own Do not use such an image. Below are some of the sources from which you can get free-to-use images (A) The https://unsplash.com/: Unsplash provides freely usable images. (B) Creative Commons by Google: Google lets you filter for images with Creative Commons licenses. Here is how you can filter them: Screenshot by Author Note: When using Creative Commons images from Google, please say so in the caption (along with a link to the image).
(C) The https://www.pexels.com/: Pexels makes most of its content available for free for personal and/or commercial purposes, subject to some limitations, as the screenshot below describes: Screenshot from https://www.pexels.com/terms-of-service/ (D) The https://pixabay.com/: Pixabay also makes most of its content available for free for personal and/or commercial purposes, subject to some limitations, as the screenshot below describes: Screenshot from https://pixabay.com/service/terms/ (E) https://www.freepik.com/: Most images from Freepik are free for personal and commercial purposes with attribution: Screenshot from Freepik (F) Other sources for open-source images: Other open-source image sources include StockSnap, Flickr, Burst, The Stocks Other Image Sources: (A) https://www.istockphoto.com/: iStock offers two types of licenses: standard and extended. Please go through istockphoto.com’s license agreement to make sure you have enough rights to use the image the way it is being used. Screenshot from https://www.istockphoto.com/legal/license-agreement (B) https://www.shutterstock.com/: shutterstock.com offers two types of licenses: standard and enhanced. Please go through shutterstock.com’s license agreement to make sure you have enough rights to use the image the way it is being used. (C) https://www.canva.com/: Make sure to adhere to Canva’s terms before using any image downloaded or designed from Canva. The Textual Content (Guideline No. 3) Please follow the guidelines below regarding textual content. They also help you catch the reader’s attention by focusing on some key factors: What is the main idea behind your writing, and what do you want to convey? Keep your piece at least 2 minutes long (except for poetry). Make sure to check spelling and grammar. You may like to use the Grammarly browser plugin. Try to use constructive vocabulary. Try to include your personal experiences as examples (if possible) to make it easier to convey your message. Use GitHub Gist to share any software code. This will look like the screenshot below: Make sure to avoid plagiarism. You may reference text from other sources, but it must be cited ([1][2] and so on). In line with Medium’s ad-free philosophy, please don’t send articles endorsing a particular product. It is fine, however, to include links to your other relevant articles. Follow Medium’s Distribution Guidelines. Don’t accuse anybody without referencing sources proving the facts. ...and finally, it’s always good to re-read your piece before submission. Graphs Graphs give readers a better feel for the data and hence make it easier to understand. Photo by Isaac Smith on Unsplash You have full rights to your work An Idea gives writers and readers a platform they can join for free. They can add and remove their work at their own pace. Your earnings are yours; we don’t take a share for being associated with us. By publishing new articles we attract more viewers, and thereby also provide more views to existing articles. As part of the review process (or later), we may edit the text, images, and tags of your articles to improve readability (grammar, formatting, etc.) and categorization, to make them easily reachable to the target audience. In submitting your article to An Idea, you declare that it is an original piece (that is, it is not plagiarised) and that you have all the permissions needed to submit it to An Idea for publishing. Keep writing! Keep sharing!
What can you do to enhance the reachability of your work? We pay special attention to the below tags: So, if possible, tag your articles/stories/poems with at least one of the relevant tags from above. For example, Screenshot by Author Submission (Guideline No. 4) To join us as a writer, send your interest through the Contribution to An Idea form, so that our editors can review your profile. If you have already joined us as a writer, you can follow the steps on https://help.medium.com/hc/en-us/articles/213904978-Add-a-draft-or-post-to-publication and submit your drafts. We review all submissions within 3 business days. If your article is a good fit for our publications and follows all guidelines then it will be accepted and added to the queue to be published soon. We rarely take longer than that to respond. But if we do, please forgive us — we’re having a hectic week.
https://medium.com/an-idea/how-to-write-for-an-idea-53b502cbb4f5
['Ritika Mittal']
2020-11-26 14:46:03.735000+00:00
['Guidelines', 'Publishing', 'Ideas', 'Submission', 'Writing']
The welcome party recap Pt 2
If you missed Part 1, you can check here for it so that you get a full grasp of the discussion going forward. We had an awesome session during our first virtual meetup. Participants asked questions and got answers to all of them, and the facilitators shared some useful facts with the participants. This is a breakdown of the whole event. Check it out. Since it was our first meetup, we expected participants to ask questions related to the subject matter, the kind of questions any inquisitive person would want answered before embarking on this data science journey. The first question came from Bayo, one of the community members. Bayo: Is there a special tool (software) needed for data science? Adeniyi Temidayo: Taking care of data, such as arranging or filtering it, can be done with Excel, while studying the data, checking its validity, or spotting patterns is better achieved with code, via languages like Python and R, for efficiency and portability. He then asked another facilitator to expand on this question as well. Find Bunmi’s response below: Jolayemi Olubunmi: “Old but still relevant tools: SPSS, Excel, etc. Modern tools: data analysis using Python, the R programming language, etc. They have useful libraries embedded that aid data analysis processes and help achieve results faster with less stress. But of course, you need to learn and understand these programming languages to use them and get the expected results.” Bayo confirmed that he was satisfied with the response given. In the absence of more questions from the community, Adeniyi Temidayo chose to ask a few questions himself. One of the questions asked was: “What readily available data can be extracted or obtained from a hospital, would also be relevant in other hospitals, and could help in the quick diagnosis of patients?” A few of the supplied responses are listed below: Patient Name Patient Address Patient Age Patient Health Conditions History (like BP, weight, height, sugar level, scan, etc.) Adeniyi Temidayo buttressed his point. Find his comments below on the questions and the valid points that can be drawn from the participants’ responses: “Looking into this provided data, we need to split it into chunks to study each part individually. Age is necessary if we need to check whether a particular disease is age-related or specific to a particular age range. Blood pressure is another one: what are the no-go areas for some BP values, and what is the safe range for patients who exercise daily, or for sportsmen and sportswomen? When is weight an issue with regard to age for a patient? Is a patient’s height a function of their hormones? When is the sugar level normal for an average person, what is the safe sugar level for old people, and what is the range of sugar levels for a person who walks or is on the move most of the time, burning sugar whether it is available or not? These are the questions that will help us examine each piece of data in its own chunk and later look into which of them are related and can be used together, so that we can identify trends and patterns. (Further study might require the address, but geographical location will be the right term to use in that case.)” This discussion wrapped up the session. It was an awesome session, and we are looking forward to the next. Do not forget to share this with colleagues and friends who will benefit from it, clap and share, and drop a comment in the comment section below.
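For readers who want to try the chunk-by-chunk approach Temidayo describes, here is a minimal, hypothetical sketch in pandas. The column names and values are invented purely for illustration; they are not data from the meetup.

```python
# Illustrative sketch of splitting patient records into chunks and checking
# for patterns with pandas. The column names here are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "age": [25, 62, 47, 71, 33, 58],
    "systolic_bp": [118, 150, 135, 160, 122, 145],
    "sugar_level": [90, 140, 110, 155, 95, 130],
    "diagnosed": [0, 1, 0, 1, 0, 1],
})

# Is the condition age-related? Compare diagnosis rates across age bands.
df["age_band"] = pd.cut(df["age"], bins=[0, 40, 60, 120], labels=["<40", "40-60", "60+"])
print(df.groupby("age_band", observed=True)["diagnosed"].mean())

# Which measurements move together? A correlation matrix is a quick first look.
print(df[["age", "systolic_bp", "sugar_level", "diagnosed"]].corr())
```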
https://medium.com/ds4africa/the-welcome-party-recap-pt-2-1c58d03e7342
['Temidayo Adeniyi']
2019-06-08 18:47:56.071000+00:00
['Analysis', 'Data', 'Data Science', 'Data Visualization', 'Analytics']
Have You Tried to Pass the Time Today?
Have You Tried to Pass the Time Today? What about yesterday? Here’s what that means. Photo by RODOLFO BARRETO on Unsplash Imagine you’re watching a movie. Comfortably seated on your couch, perhaps with ice cream or a huge cup of iced tea. You press play and start to relax as the first images appear on the screen. After a few minutes, you suddenly press the fast-forward button. Instead of 2 hours, the movie ends after 20 minutes. You haven’t understood the story, and you haven’t enjoyed your moment. You have no reason to do this, right? Looks like we’re doing it anyway, not with a movie, but with our lives. I see you looking at the clock When was the last time you did something and were amazed at what time it was once you were finished? When was the last time you didn’t see time go by? At some point in my life, I realized that I couldn’t remember the last time it happened. When, on top of that, I found out that everything I was doing was almost solely to pass the time, I thought that there was a problem somewhere. I kept working because I didn’t know what else to do. I was scrolling through social media in an endless quest for nothing because I needed a break from work and didn’t know what else to do. I was constantly checking my e-mails waiting for a distraction to come. I waited for meal times not because I was very hungry, but because it was a break from having to fill my time. In short, I would look at the clock and think about how to fill the time gaps before one event or another. All this unconsciously. It is only now that I am no longer doing all of this that I realize the magnitude of the problem. And that I realize how much my life has changed, and how much I prefer it now. Now that I’m not bored anymore. Do you recognize yourself in some of the patterns I mentioned earlier? If so, here’s what it could mean. Enjoy your time instead of trying to get rid of it If you are constantly or often trying to pass the time, this implies that you don’t want to be in the moment you’re in right now. That this moment does not suit you. You are not interested in it. Source: waitbutwhy.com This is an average human life. Each dot represents a month. Remember how fast a month can go by? This image slaps me in the face every time I see it. Because it makes something visible. Something that everyone knows but few people grasp. Our time is limited. So, what will you do with your tiny handful of months? Take them and throw them out the window? Constantly trying to pass the time is a strong sign that something is wrong with the way you spend your time. It’s annoying, but it’s not a big deal. All you need to do is refocus your life. Refocus your life around the things that put you in a state of flow. Now that you’ve found the elephant in the room, all you have to do is analyze the situation. Identify the things you don’t like about your life; Describe in writing a perfect ordinary day; Get out of your box, and find new things to do. Start a project, learn something new, get out more, meet more people, hang out more with your friends. Get out of your routine, so much so that you can’t remember what it was like before. By doing so, you’re going to turn the whole thing upside down. It will allow you to break the routine, break your habits. You will then be free to choose only the things you like and put them together, leaving aside those that make you look at the clock. One last thought… Our time is limited. Not taking it and, worse, trying to make it go by faster, is a huge waste.
The opposite of trying to make time go by is not seeing it go by. And there is only one moment when you can no longer see time passing: when you do something in alignment with yourself. Break that pattern that doesn’t suit you, and rebuild a life you love. You’ll know when you’ve reached your goal: you’ll feel like you can no longer touch the ground.
https://medium.com/live-your-life-on-purpose/have-you-tried-to-pass-the-time-today-326cf6d8e761
['Auriane Alix']
2020-08-17 22:01:01.393000+00:00
['Self-awareness', 'Self Improvement', 'Self', 'Time', 'Life']
Hear to Heart We Will Win
Photo by Justin Luebke on Unsplash You will recognize your path when you come upon it because you will suddenly have all the energy and imagination you will ever need. Sara Teasdale This quote is a remarkable testimony of faith and understanding of one’s destiny. Imagine the feeling of renewed energy at the dawning of awareness, the uplifting surge of serotonin, endorphins, and the body’s other youthful chemical reactions. Enjoy the surge, and live with a heart filled with joy. Our Heavenly Father shows love for us in many ways. Jesus Christ is the first and most powerful sign of his love, and it makes sense that Jesus is the one to go through so we can reunite with our Heavenly Father. Heavenly Father blesses us with the strength to endure our trials. Do not look upon the less-mastered lessons of life as failures; when you learn something of yourself and of Jesus’ atoning sacrifice, and understand it, you are a winner. When you consider that the world is not perfect, that humans are not always on solid ground, and that there is compassion for them still, then the outcome is knowledge of Jesus Christ, insight into the world’s beliefs and the way different peoples handle them, and compassion rises like a boat of hope; you win again. God shows us love with this mercy. He never puts on us more than we can handle. Our lack of belief is what makes us weak, so pray, take a deep breath, and listen with your heart; He will tell you what to do. God shows more of his love by blessing us with wealth, no, not necessarily in the financial sense, but richness in forgiveness and a desire to know more of him. The subtle and gentle words of encouragement that live within our souls motivate us to action and assure us that we can, and do, make a difference. Those random acts of kindness bring recognizable evidence of the truth that Jesus does live, as does our Heavenly Father. Do not ignore that small nagging voice from inside whispering in the night, “Fear not, for I am with thee.” This same inner voice will lead us in the right direction to the right choice for our spiritual growth. However, there are many times in which we come to a crossroad in our lives. What do we do? Pray. Ask for guidance and then listen. We must choose our path; Jesus will fulfill the greatest need in your heart, as well as in your life, as you proceed on this journey. Words of comfort and encouragement will come to you in a whisper, “Go, for you will be alright.” This article was four days in the making. Quite honestly, I have been struggling with a bout of depression for the last four days, and only parts of this article came out each day. I wanted to give up, go to bed, and bury my head in the covers. My usual routine of scripture study in the Holy Bible and my beloved Book of Mormon wasn’t fulfilling me, and this was devastating to me. In my heart, I find comfort during bible study time, but nothing reached out to me. Yesterday was a difficult day for me; everything hit me at once: all the anger, confusion, and boredom almost drowned me in sorrow. All day long, in spurts of gleaming hope, a nagging voice prompted and instructed me to get up. “There are things to be done, dogs to feed, a tree to decorate…” Lingering items on my task list that needed doing and that weighed on my mind. Giving in to the voice, I got up and worked the plan for the day. Within a few moments of starting my accomplishments (feeding the dogs, unloading the dishwasher, cleaning the kitchen), my spirits lifted and I succeeded. I am a Warrior!
I turned on some holiday music, sang Christmas carols on Stingray Music, and worked on the tree. (It is decorated, but I could not find an extension cord. URG.) When does this rollercoaster ride end? A lack of motivation or depression is one of the rollercoaster rides that we will live through from time to time. All is going well, and then there is a SNAFU: Situation Normal, All Fouled Up. Don’t give in to the frustration. Look at what was accomplished. What can you control at this time? In my case, I got out of bed, cleaned up the living room and kitchen, and decorated the tree. When I could do nothing more on the plan, I mopped the floor and danced around like a crazy loon with my mop (I like to lead). That reminds me of one of my favorite bible verses, Ecclesiastes Chapter 3:4 There is a time and a season for all things; a time to weep, and a time to laugh; a time to mourn, and a time to dance “Work the plan” has so many meanings. Is it a to-do list? Can we call it a commotion priority list? Maybe some people need this tangible list to see which tasks need immediate attention and which ones can wait a while. Prioritizing is helpful when in a mental crisis. So, what is your plan? Mine includes Jesus Christ, a few moments of love and glory in his name to calm my inner storm. It gives me a small window of time to listen to those voices in the night echoing back into my heart. Here is a plan template that you can use to focus your energy on your life choices. When necessary, please feel free to add item categories or subtract them; adjust to your needs. 1. What is God telling me? (What is that little voice inside telling you to do? Is the message, feeling, or aura mean and cruel, deviant, or demented? If you answer yes, then it’s not God. It is the adversary talking to you, forcing his will on you. Keep in mind, messages and actions of evil are his job. If it causes you to stress out, may I suggest you check in with your soul? What forces (I use this term lightly) are causing the stress? Write them down and pray. If you feel confused, sit down and work out the confusing parts of the problem. Confusion, Sadness, and Pain are signs of inner turmoil. These causes of conflict come from the choices made in your journey or the actions of others. Find the answer in the quiet of the storm inside you. Listen to the foghorn of God reaching out for you. 2. What act of kindness is most important to me? Talk with an elderly family member; this action choice can be educational and entertaining. If you are a history buff or are doing genealogy research, you will love this task. Spend one day sitting with your grandmother or grandfather. These relatives have first-hand experience and history of the specific events they witnessed. Learn how they lived and dealt with the changes that came along with each incident. Get their opinion, insight, and creative ideas. 3. Keep home life basic. Food, clothing, and shelter are the top priorities of human life. Yes, and if you add water and wind, you have the five elements of life. (I love that show.) 4. Always keep God first. Pray day and night for guidance. Pray at the end of the day for forgiveness for not following through on those missed opportunities. Show compassion for others’ plight and worry. Do not assume that homeless men or women are on the street due to addiction, laziness, or stupidity. These social thoughts or theories are not always facts. In some cases, their situation is due to the choices of others.
These actions’ could be anything from a job loss for the breadwinner, to drug addiction. The cause of this unfortunate life change does not matter; the facts are that they do try to make the best out of a terrible situation. This is a tie for listening to your heart and help when possible. Sadly, jobs are not on the horizon for many at this time. A faltering economy and a partial hiring freeze leave many industries at a standstill or out of reach for the unemployed. This pandemic has changed many things in our society adaptations from the home front to the world impact. One day the future youth will come to us, the elderly relative to share our experiences. What will we say? On the home front, people in the hundreds of thousands had to go to a local food bank to get groceries. Doctor’s appointments were no longer face-to-face visits. Instead, they were virtual visits, which meant there was no physical examination. Townspeople had to wear masks to shop, eat at restaurants when they were not closed down for a positive contact of the Corona Virus. How did we get through it? We showed more compassion and respect for our brothers and sisters in not only our lives but in the community. We supported the nation’s advice to do what was necessary to survive. I hope that some if not all of us learned something from this hardship. One thing I will expand on, we did not have to go it alone. Back in the day, we wore creative masks during the seasons: mix-matched, color-coordinated, cloth masks-a proper lady does not go out without her mask. High school principals and administrations all around the nation supported virtual proms and class graduations. We did not lie down and die. We turned to God, prayed, and lived on the best we could. However, one of the downsides to our fight to survive was the selfish egomaniac who chose to bitch and grope about wearing masks. Their tantrum turned into a compromise of the health environment when they stopped wearing masks. One person, Bless his heart, insisted that he was an American citizen and did not have to wear a mask. Well, find Nathan Hale, do not wear one, stay at home, and shut your mouth. I prefer that keep your gems to yourselves. You see, good and bad collide. There are pitfalls in the world that affect a family; their beliefs in God are tested. Keep the faith; he is keeping him in you. Human beings within the community refuse to follow instructions or lend help. This is a degree of evil working against the good. Take what is given and do the best that is possible with it. A person’s respect may be tested, this is okay, and this is a test of your strength, build upon it. Benefits are abundant to showing compassion and respect to our fellow man and woman, but as with that straight and narrow road, it will have testing points, and resting points, keep the faith, listen to your heart, and press on. The righteous choices will come back to us many times over. How? Keep the faith and believe in Jesus Christ for a stronger knowledge. Hold on to it as you endure the lessons and grow in their meaning. Suffering the bigger consequences of our actions is hard; hang in there because God will send help your way. Many resources are here for you, people who can assist you, guide you, teach you. Work and live out the incidents until the end. I want to add to the subject of helping the homeless. There is a crucial need for tents, bedding, air mattress, and air pumps. Old camping gear can help give them a sense of security on the streets — a tool for their survival. 
Oh, I want to mention that swimming pool floats can have a second purpose in the homeless community. They can substitute as sleeping mats for babies and young children, and sometimes for a stray dog adopted by the family. Consider adding socks, toiletries, and used clothes in a box to give out. If there are enough of you in a group, this can make up several totes. Your old tents and camping gear give them a sense of normalcy in a world gone nuts. Cooking for the family is a tangible task of normalcy. Having a sheltering place to sleep for the night gives comfort and a potential place to rest and heal. Having a sense of normalcy or a healthy daily routine is one of the keys to survival. These families are trying to find their way home; a hand up and out of the pit is more useful and needed than a handout. The devil will intrude as much as possible to make a SNAFU of our lives. In a demented way, this is his mission, his job. He places temptations in our path so that we make the wrong choices that take us away from Heavenly Father and Jesus Christ. View this temptation as an obstacle to overcome, to make us successful in finding our way to Jesus Christ, not away from him. One of my favorite old sayings, “That which doesn’t kill us will make us stronger,” is so true. I choose to stand strong and give help when needed, not roll over and die, nor give in to my depressive episodes. I am tougher than that and I will win. In my winning, I will see you in Heaven. At the end of the day, when all is said and all is done, the choice to go the long and curvy road, or the Scenic Route, will get you to God and Heaven, but you may not like the adventure. Ask yourself, why take the hard road? A detour in life will give you insight — maybe a quick trip through hell, and the dangers can be overwhelming. Listen to your heart. Why follow the temptations of the adversary? The devil will screw up your plans. He is like a fly in the ointment. Although the straight and narrow may sound boring to some, it is not so easy. When one chooses his direction, we all pray for him or her to have a safe journey. When the opportunity arises, we will give advice, words of wisdom, and the comfort of knowing they are loved. Actions done in love and in the name of Jesus will help us out of the darkness of temptation and into the light of knowledge. This road allows us to be heroic and compassionate. No matter the crossway you take, for it is of your own free will and choosing, you will not be lost. God is watching you, Jesus will stand beside you, and the Holy Spirit is talking to you. Listen, step loudly, and be proud: you are a warrior.
https://medium.com/writers-blokke/hear-to-heart-we-will-win-30c7ee712178
['Sandi Sipe']
2020-12-22 12:17:29.818000+00:00
['Depresion', 'Sandi Sipe', 'Charity', 'Medium', 'Psychology']
Ensemble methods: bagging, boosting and stacking
What are ensemble methods? Ensemble learning is a machine learning paradigm where multiple models (often called “weak learners”) are trained to solve the same problem and combined to get better results. The main hypothesis is that when weak models are correctly combined we can obtain more accurate and/or robust models. Single weak learner In machine learning, no matter if we are facing a classification or a regression problem, the choice of the model is extremely important to have any chance to obtain good results. This choice can depend on many variables of the problem: quantity of data, dimensionality of the space, distribution hypothesis… A low bias and a low variance, although they most often vary in opposite directions, are the two most fundamental features expected for a model. Indeed, to be able to “solve” a problem, we want our model to have enough degrees of freedom to resolve the underlying complexity of the data we are working with, but we also want it not to have too many degrees of freedom, to avoid high variance and be more robust. This is the well known bias-variance tradeoff. Illustration of the bias-variance tradeoff. In ensemble learning theory, we call weak learners (or base models) models that can be used as building blocks for designing more complex models by combining several of them. Most of the time, these basic models do not perform so well by themselves, either because they have a high bias (low degree of freedom models, for example) or because they have too much variance to be robust (high degree of freedom models, for example). Then, the idea of ensemble methods is to try reducing the bias and/or variance of such weak learners by combining several of them together in order to create a strong learner (or ensemble model) that achieves better performances. Combine weak learners In order to set up an ensemble learning method, we first need to select our base models to be aggregated. Most of the time (including in the well known bagging and boosting methods) a single base learning algorithm is used so that we have homogeneous weak learners that are trained in different ways. The ensemble model we obtain is then said to be “homogeneous”. However, there also exist some methods that use different types of base learning algorithms: some heterogeneous weak learners are then combined into a “heterogeneous ensemble model”. One important point is that our choice of weak learners should be coherent with the way we aggregate these models. If we choose base models with low bias but high variance, it should be with an aggregating method that tends to reduce variance, whereas if we choose base models with low variance but high bias, it should be with an aggregating method that tends to reduce bias. This brings us to the question of how to combine these models.
We can mention three major kinds of meta-algorithms that aim at combining weak learners:
- bagging, which often considers homogeneous weak learners, learns them independently from each other in parallel and combines them following some kind of deterministic averaging process
- boosting, which often considers homogeneous weak learners, learns them sequentially in a very adaptive way (a base model depends on the previous ones) and combines them following a deterministic strategy
- stacking, which often considers heterogeneous weak learners, learns them in parallel and combines them by training a meta-model to output a prediction based on the different weak models’ predictions
Very roughly, we can say that bagging will mainly focus on getting an ensemble model with less variance than its components, whereas boosting and stacking will mainly try to produce strong models that are less biased than their components (even if variance can also be reduced). In the following sections, we will present bagging and boosting in detail (they are a bit more widely used than stacking and will allow us to discuss some key notions of ensemble learning) before giving a brief overview of stacking.
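As a concrete illustration of the three strategies (not part of the original article), here is a minimal scikit-learn sketch; the toy dataset, the base learners and the hyperparameters are arbitrary choices for demonstration only:
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, StackingClassifier

# Toy data standing in for any classification problem
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: homogeneous, high-variance deep trees fitted in parallel and averaged
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Boosting: homogeneous, high-bias stumps fitted sequentially, each correcting the previous ones
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=200, random_state=0)

# Stacking: heterogeneous weak learners combined by a meta-model trained on their predictions
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=3)), ("svm", SVC())],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting), ("stacking", stacking)]:
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
The point of the sketch is the mapping between the three bullets above and the three estimators, not the particular scores on this synthetic dataset.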
https://towardsdatascience.com/ensemble-methods-bagging-boosting-and-stacking-c9214a10a205
['Joseph Rocca']
2019-05-05 12:13:40.499000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Science', 'Deep Learning', 'Towards Data Science']
Architecture & Style
We build here upon a previous piece, where our emphasis revolved around the strict organization of floorplans and their generation, using Artificial Intelligence, and more specifically Generative Adversarial Neural Networks (GANs). As we refine our ability to generate floorplans, we raise the question of the bias intrinsic to our models and offer here to extend our study beyond the simple imperative of organization. We investigate architectural style learning, by training and tuning an array of models on specific styles: Baroque, Row House, Victorian Suburban House, & Manhattan Unit. Beyond the simple gimmick of each style, our study reveals the deeper meaning of style: more than its mere cultural significance, style carries a fundamental set of functional rules that defines a clear mechanic of space and controls the internal organization of the plan. In this new article, we will try to evidence the profound impact of architectural style on the composition of floorplans. Reminder: AI & Generative Adversarial Neural Networks While studying AI and its potential integration into architectural practice, we have built an entire generation methodology, using Generative Adversarial Neural Networks (GANs). This subfield of AI has proven to yield tremendous results when applied to the two-dimensional generation of information. Like any machine-learning model, GANs learn statistically significant phenomena among the data presented to them. Their structure, however, represents a breakthrough: made of two key models, the Generator and the Discriminator, GANs leverage a feedback loop between both models to refine their ability to generate relevant images. The Discriminator is trained to recognize images from a set of data. Properly trained, this model is able to distinguish a real example, taken from the dataset, from a “fake” image, foreign to the dataset. The Generator, however, is trained to create images resembling images from the same dataset. As the Generator creates images, the Discriminator provides it with some feedback about the quality of its output. In response, the Generator adapts to produce even more realistic images. Through this feedback loop, a GAN progressively builds up its ability to create relevant synthetic images, factoring in phenomena found among observed data. Generative Adversarial Neural Network’s Architecture | Image Source We specifically apply this technology to floorplan design, using image representations of plans as the data format for both our GAN models’ inputs and outputs. The framework employed across our work is Pix2Pix, a standard GAN model, geared towards image-to-image translation.
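To make the Generator/Discriminator feedback loop concrete, here is a minimal, generic GAN training loop in PyTorch. This is a toy sketch on random tensors, not the authors' Pix2Pix pipeline; the layer sizes, learning rates and batch size are all illustrative:
import torch
import torch.nn as nn

latent_dim = 100
# Tiny fully connected Generator and Discriminator for 64x64 "images"
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in for a batch of real dataset images, scaled to [-1, 1]
real_batch = torch.rand(32, 64 * 64) * 2 - 1

for step in range(3):
    # Discriminator step: label real images as 1 and generated images as 0
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the Discriminator label its output as real
    g_loss = bce(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    print(step, d_loss.item(), g_loss.item())
In broad strokes, Pix2Pix extends this loop by conditioning both networks on an input image, so the Generator learns an image-to-image mapping (for example, from a plan's footprint to its room layout) rather than generating from pure noise.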
https://towardsdatascience.com/architecture-style-ded3a2c3998f
['Stanislas Chaillou']
2019-06-30 21:27:10.033000+00:00
['Machine Learning', 'Artificial Intelligence', 'Neural Networks', 'Towards Data Science', 'Architecture']
A Simple Introduction to Read Simulators
A Simple Introduction to Read Simulators Introduction to read simulation tools with examples and usage Read simulators are widely used within the research community to create synthetic and mock datasets for analysis. In this article, I will introduce some recently proposed, commonly used read simulators. Screenshot from running InSilicoSeq DNA Sequencing and Reads If you have come across my previous article on DNA Sequence Data Analysis, you may have read about DNA sequencing. Sequencing is the process that determines the precise order of nucleotides of a given DNA molecule. We can determine the order of the four bases adenine, guanine, cytosine and thymine, in a strand of DNA. DNA sequencing is used to determine the sequence of individual genes, full chromosomes or entire genomes of an organism. Special machines known as sequencing machines are used to extract short random DNA sequences from a particular genome we wish to determine (target genome). Current DNA sequencing technologies cannot read one whole genome at once. They read small pieces of between 100 and 30,000 bases, depending on the technology used. These short pieces are called reads. Read Simulators Sequencing machines may not be available as we wish and we may not be able to get hold of real-world samples to sequence. This is where read simulators come in handy for research purposes. Read simulators can mimic sequencing machines to simulate reads. They have pre-defined statistical models to mimic the error rates relevant to the particular sequencing machines. Furthermore, we can provide our own error models as well (different rates of insertions, deletions and substitutions). Estimating sequencing coverage Sequencing coverage is defined as the average number of reads that covers each base of the reference genome. Estimating the sequencing coverage is very important when you are simulating datasets. The coverage equation is defined as follows: C = LN / G, where C is the sequencing coverage, G is the length of the genome, L is the read length and N is the number of reads. For example, if you have a genome of length 5Mbp and you simulate 1,000,000 HiSeq 2000 reads (read length is 100bp), then you will get a sequencing coverage of 20x as follows. C = LN / G = 100 * 1,000,000 / 5,000,000 = 20x Here, each position of the reference genome is covered by 20 reads on average. Estimating Abundance The abundance of a species in a dataset is the fraction of reads that belong to that species. For example, if there is a dataset with 10,000,000 reads and 1,000,000 of them belong to E. coli, then the abundance of E. coli will be 0.1. Note that coverage and abundance are not the same. Short Read Simulators With the popularity of next-generation sequencing (NGS) technologies, many NGS read simulators have been developed. Currently, many of the popular short read simulators are designed to simulate reads mimicking Illumina, 454 and SOLiD platforms. Listed below are some popular short read simulators. Links to their publications are provided as well. Long Read Simulators With the advancements in sequencing technologies, scientists have shown an increasing interest in using third-generation sequencing (TGS) technologies. Currently, many of the popular long read simulators are designed to simulate reads mimicking the two main TGS technologies: (1) Pacific Biosciences (PacBio) and (2) Oxford Nanopore (ONT). Listed below are some of the popular and recently introduced PacBio and ONT simulators. Links to their publications are provided as well.
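Before moving on to the individual tools, the coverage and abundance formulas above are easy to sanity-check in code. A small Python sketch (not from the article; the numbers come from the worked examples in the text):
def coverage(read_length, num_reads, genome_length):
    # C = L * N / G
    return read_length * num_reads / genome_length

def abundance(species_reads, total_reads):
    # fraction of reads in the dataset that belong to one species
    return species_reads / total_reads

print(coverage(100, 1_000_000, 5_000_000))   # 20.0 -> 20x coverage
print(abundance(1_000_000, 10_000_000))      # 0.1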
PacBio Simulators ONT Simulators InSilicoSeq I have been using InSilicoSeq in my work a lot and I find it very intuitive and easy to use. I will walk you through some sample commands to simulate reads. You can easily install InSilicoSeq using conda or pip. conda install -c bioconda insilicoseq OR pip install InSilicoSeq Simulate reads by providing the number of reads Assume that you have a single reference genome and you want to simulate 1 million Illumina MiSeq reads. Given below is a sample command you can run using InSilicoSeq. iss generate --model miseq --genomes ref.fasta --n_reads 1M --cpus 8 --output reads Simulate reads by providing the coverage Assume that you have two reference genome files ref1.fasta and ref2.fasta. You want to simulate 30x coverage from ref1 and 10x coverage from ref2. You will need to create a tab-separated file named coverages.tsv and add the coverage details as follows. ref1_id 30 ref2_id 10 ref1_id and ref2_id refer to the identifiers of the files ref1.fasta and ref2.fasta. If you download the reference genomes from NCBI, the identifiers will consist of letters and numbers and, for example, may look something like NC_007712.1 or CP001844.2. These identifiers are NCBI accession numbers provided for each reference genome. Now you can simulate the reads using the following command. iss generate --model miseq --genomes ref1.fasta ref2.fasta --coverage coverages.tsv --cpus 8 --output reads Simulate reads by providing the abundance Assume that you have two reference genome files ref1.fasta and ref2.fasta. You want to simulate 0.4 abundance from ref1 and 0.6 abundance from ref2. Note that the sum of all the abundance values should be 1.0. Similar to coverage, you will need to create a tab-separated file named abundance.tsv and add the abundance details as follows. ref1_id 0.4 ref2_id 0.6 Now you can simulate the reads using the following command. iss generate --model miseq --genomes ref1.fasta ref2.fasta --abundance abundance.tsv --cpus 8 --output reads You can read more details from the InSilicoSeq documentation. PBSIM PBSIM is a PacBio read simulator which provides both sampling-based and model-based simulations. I will walk you through some sample commands to simulate reads using PBSIM. Model-based simulation For model-based simulation, you can run the following command. pbsim --data-type CLR --depth 100 --length-min 10000 --length-max 20000 --prefix test --model_qc data/model_qc_clr ref.fasta The model can be found in the PBSIM folder PBSIM-PacBio-Simulator/data/model_qc_clr. The data type CLR refers to Continuous Long Read, which simulates long reads with high error rates. The other data type, CCS, refers to Circular Consensus Read, which simulates short reads with low error rates. Sampling-based simulation For sampling-based simulation, you can run the following command. pbsim --data-type CLR --depth 100 --sample-fastq sample/sample.fastq sample/sample.fasta The sample FASTQ file can be found in the PBSIM folder PBSIM-PacBio-Simulator/sample/sample.fastq. You can use your own FASTQ file as well. You can read more details from the PBSIM documentation. SimLoRD SimLoRD is a TGS read simulator based on the Pacific Biosciences SMRT error model. I have frequently used SimLoRD to simulate PacBio datasets for my work. I will walk you through some sample commands to simulate reads using SimLoRD. Simulate fixed-length reads by providing the coverage Assume that you have a reference genome and you want to simulate fixed-length reads with 60x coverage.
Given below is a sample command you can run using SimLoRD. simlord --read-reference ref.fasta --coverage 60 --fixed-readlength 5000 output_prefix Simulate fixed-length reads by providing the number of reads Assume that you have a reference genome and you want to simulate 2000 fixed-length reads. Given below is a sample command you can run using SimLoRD. simlord --read-reference ref.fasta --num-reads 2000 --fixed-readlength 5000 output_prefix You can also set a minimum length for the reads using the --min-readlength parameter during the simulation. You can read more from the SimLoRD documentation. Final Thoughts Read simulators have given us the opportunity to simulate reads ranging from zero errors to very high error rates. Also, they have allowed us to create synthetic and mock datasets mimicking different sequencing machines and different species compositions. Hope you found this article useful and informative as a starting point towards using read simulators. Feel free to use these tools for your projects and research work as they are freely available. Cheers, and stay safe!
https://medium.com/computational-biology/a-simple-introduction-to-read-simulators-bbeff4f0c0c6
['Vijini Mallawaarachchi']
2020-08-14 02:48:21.348000+00:00
['Simulation', 'Software', 'Bioinformatics', 'Data Science', 'Science']
Changing My Language, Changed My Outcome
It’s hard to imagine what iPhone I would have been using in the year 2012, but it was that year that I began writing yearly resolutions in the Notes app of my phone. I kept it up for a good few years until, according to my digital record, I apparently just stopped setting goals altogether. While I’m sure that isn’t entirely the case, it wasn’t until December 2017 that I picked this culturally-applauded habit back up — except that it looked a bit different when I did. I stopped writing run of the mill resolutions, and I started writing intentions. Lest you assume that the point of this article is to tell you that this singular shift in verbiage will lead you to miraculously achieve all of your goals, rest assured this is NOT the case. When faced with the task of writing “resolutions”, too often we head straight for what I’m calling, The List of the Obvious; spend more time with family, eat out less, be better about saving money, floss, and, of course, it wouldn’t be an American New Year without capping off the list with, lose weight. But it’s in calling out these banal activities that we miss something crucial to the process; reflection. Instead of writing yourself the same to do list year after year, get into the ritual of really confronting the year that has passed. How do you plan to grow beyond where you’ve been, if you haven’t taken time to assess where exactly that is? Where were your boundaries last year, and how can you push past them? Mentally, physically, financially? In your career, relationships or craft? In my experience over the last 3 years, it seems that in order to actually cross things off the list come December, we must commit time to thought and reflection, and then rethink the language we use when writing our resolutions. Since 2017, I‘ve started each year with a list of 8 intentions, all based around 3 core words, or themes. In 2019, those words were patience, fear and impermanence. The intentions that branched from them were both broad mantras and specific goals. Number 1 on my list was to, “Ride the Waves” — to welcome challenge and impermanence as a part of life, and to go towards things that make me fearful, rather than away from. This also represented a desire to be more spontaneous, so when a friend asked me to book a trip to Mexico on a whim, I did. This checked my box to travel more. A week ago when I sat down to reflect, I genuinely felt I had successfully invested in 3/4, if not more, of the intentions I’d set out for myself. Of course this required work and revisiting my intentions often (I hung them next to and above my bed where they couldn’t be missed), but I also attribute my mind’s ability to hold these intentions present throughout the year to having taken time for deep reflection, and careful selection of personally meaningful words and phrases. I want you to give it a try: Clear an open space, physically and mentally, where you feel comfortable and distraction free. Definitely put your phone away. Wear something that makes you feel good. Fill your favorite cup with your favorite beverage. Settle in. Take a few breaths. Light a candle if that speaks to you. On a piece of paper, challenge yourself to write down a minimum of 7 things you achieved, succeeded at, or overcame last year. You can write down as many highlights as you want. The only rule is that you genuinely spend time reflecting on the high’s (and low’s) of the last 365 days of your existence. Once you’ve finished, let it sink in. 
We tend to speed through everything these days; life comes and goes at an alarming pace. How many new shows came out on Netflix today? How many social media posts are you not caught up on? Moments of calm are crucial for clear reflection. Flip over your piece of paper. In thinking about the year past, you may have already started to formulate some ideas about what it is you would like to achieve in the year to come. Come up with three words or themes that seem to underline those advancements you want to make. Consistency. Trust. Awareness. Patience. Ecstatic joy. Gentleness. Accepting uncertainty. Confronting fear. Invest. Slow down. Choose any words or themes that speak to you. They can be literal or abstract — as long as they highlight the ways in which you would like to evolve. Underneath each of your three words, expound on what they mean to you. Under consistency you could write remembering to floss and getting to the gym more often, but challenge yourself to also go deeper than surface level. Maybe a desire to floss is really about wanting to spend more time care-taking yourself. The idea of picking up a new hobby may be a call to do more things that are unexpected and step outside of your own box. Hang it up! Ideally place your intentions somewhere you’ll easily see them every day. Write it again: write all your words and intentions down a second time by hand on a separate piece of paper. By writing two copies you give your brain a chance to commit your goals to memory. Hang the second copy in another space you inhabit often and you’ll have even more opportunity to be reminded of where it is you want to go. So, take your pick; it doesn’t matter what word you use to define your journey towards betterment. Light a candle, put on your most comfortable pajamas and spend some quality time setting those goals/resolutions/intentions/aims/ destinations/ambitions, etc. What matters is that you’ve taken quality time to reflect, and to carefully articulate for yourself broad themes and specific calls to action for your own personal growth. My words for 2020 are; trust, accepting uncertainty and ritual. You can read more about what they mean to me here.
https://medium.com/swlh/changing-my-language-changed-my-outcome-d69ebb9843f6
['Micole Rondinone']
2020-01-18 05:36:01.776000+00:00
['Reflections', 'Resolutions', 'Productivity', 'Intentions', 'Personal Growth']
Keycloak, OpenShift, and Emails: A Tale of Links With Wrong Base URLs
Links in Keycloak emails that are sent using its admin API can have some funky URLs if the actor who triggers the sending is a neighboring service and our play takes place inside OpenShift or behind a reverse proxy. Here’s how to give our story a happy ending. Consider the following ecosystem, in which we have Keycloak and a Node.js worker service deployed and running on OpenShift: Let’s assume that, for whatever reason, the Node.js service runs a poor man’s cron job that triggers the sending of one of the required actions emails to a certain subset of users; it could be any of the available required actions, but for this example we will use the “update password” one. This Node.js service will be making a call to Keycloak’s admin REST API — with or without the help of the keycloak-admin module — . The endpoint we are hitting will trigger the sending of an email with a link that the user has to click in order to perform one or more required actions (e.g. update your password because it’s getting old). Because both the Node.js service and Keycloak are running on OpenShift, a Keycloak with an out-of-the-box configuration will send an email that contains a link to update the user’s password and will be pointing to a faulty URL (e.g. http://keycloak:8080 ), which is obviously not accessible for the user. Before we dive into the solution of our problem, let’s try to have a better understanding of what’s happening behind the scenes. Understanding how Keycloak builds email links If we take a look at some of Keycloak’s source code, we can come across something like this: The builder in line 2 is created based on the context of the current session, originating in the incoming request from our Node.js service to Keycloak’s admin API. This request fired by our Node.js service is pointing to the internal Keycloak address in OpenShift — for example, http://keycloak:8080 . The link rendered in the resulting email will be generated with this base URL, which will lead to nowhere once the user clicks it. We need to tell Keycloak what the right base URL should be. If you’ve dug deep in Keycloak’s FreeMarker templates… <html> <body> ${kcSanitize(msg("emailVerificationBodyHtml",link, linkExpiration, realmName, linkExpirationFormatter(linkExpiration)))?no_esc} </body> </html> …you’ve probably found out that there’s no elegant way of change the base URL of the link. Luckily, there’s a proper way to tell Keycloak what base URL to use. Setting up the frontend URL in Keycloak The concept of a “frontend URL” was explicitly enabled in Keycloak version 8.0.0. This configuration property is used to set a fixed base URL for frontend requests; by default, its value will be derived from the (incoming) request. For such an ecosystem as the one described at the beginning of this article, that won’t be a good idea. It’s also good to consider that the official Keycloak documentation encourages setting a frontend URL in production environments. There are several ways to set a frontend URL: If you are using a Keycloak as a Docker image, you can globally set the frontend URL using the environment variable KEYCLOAK_FRONTEND_URL . Otherwise, you can add the following to the startup: -Dkeycloak.frontendUrl=https://my.keycloak.instance.com/auth . It can also be added to Keycloak’s standalone.xml configuration. If you are using the jboss-cli tool, the instruction you need to issue is /subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl,value=”https://my.keycloak.instance.com/auth") . 
Finally, you can override it for individual realms by setting it in the admin console: Keycloak’s admin console, where the frontend URL can be overridden per realm. If you want to add it in standalone.xml, the end result could look something like this: Client side configuration If by any chance your ecosystem includes any Node.js service that interacts with Keycloak in a more client-side way, and it uses the keycloak-connect module, then you have one more thing to configure!
https://medium.com/swlh/keycloak-openshift-and-emails-a-tale-of-links-with-wrong-base-urls-15f445d4b6a1
['Arturo Martínez']
2020-08-12 09:33:04.424000+00:00
['DevOps', 'Keycloak', 'Openshift', 'Programming', 'Kubernetes']
3 to read: Surviving tech | Improving subscriptions | Follow the Texas Trib’s money trail
By Matt Carroll <@MattCData> Dec. 1, 2018: Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. Originally published on 3toread.co How to survive the next era of tech (slow down and be mindful): Farhad Manjoo at the NYT is one of the more thoughtful commentators on the world of tech. In this, his last ‘State of the Art’ column, he has advice for consumers on how to swim when the sea of technology always seems stormy. (btw: It’s a big swing in advice from his first column five years ago.) Whether you agree or not, he is a reasoned voice and is always interesting. Logo: @Leighzaaah on Instagram How to improve subscription registration & payment forms: The devil is in the details, as the saying goes. And it is always an unpleasant surprise to me how often registration and payment forms for news sites are clunky, too long, and confusing. Hello! Newsrooms, wake up, please. Reader revenue is the future. Make it as easy as possible for those readers to subscribe. Some nice examples for API by Gwen Vargo. Where the Texas Tribune’s revenue comes from: As advertising-based revenue models for media collapse, it has become increasingly clear that newsrooms need to lean on a variety of different revenue streams. The Texas Tribune is a shining example of that. Here’s how they do it. Interesting story by Freia Nahser for the Global Editors Network.
https://medium.com/3-to-read/3-to-read-surviving-tech-improving-subscriptions-follow-the-texas-tribs-money-trail-9cb95906beba
['Matt Carroll']
2018-12-01 13:56:00.729000+00:00
['Farhad Manjoo', 'Matt Carroll', 'Media Criticism', 'Media Literacy', 'Journalism']
A Guide to Video Marketing
“To get a good spot, the best way is to do location scouting. If you have enough time, go to the event location a day before the event takes place so that you can have a clear image of what the location will look like and what angles are best.”
https://medium.com/better-marketing/a-guide-to-video-marketing-afc0b3302328
['Brittany Jezouit']
2020-12-03 20:36:12.855000+00:00
['Marketing', 'Tiktok', 'Video', 'Better Marketing Archive', 'YouTube']
A critical analysis of notification systems
What effect could the iOS notification system have? iOS notification system respects the user’s behavior of ignoring a notification. It does so by pushing notifications to the lock screen once and if the user ignores it (by unlocking the phone without interacting with any notification), it resets the lock screen to its clean state. This; in my view, is a form of balanced interruption — it interrupts the user once but doesn’t keep getting in the way every time the user interacts with the phone. However, it doesn’t do a good job at managing and organizing those notifications; especially important ones. Here’s a scenario – Let’s say Bob is on an iPhone. He has a dozen Instagram notifications above a text message notification from mom. To worsen things; in a few hours, there are so many notifications in the notification center that he cannot go through each of them. He clears them all. Badges are sprinkled across the springboard (image from https://images.homescreen.me/images/user5688/screens/screens_2x/564eb646544418ff.png) This creates an imbalance — the most important notifications are lost in the shuffle as it is overwhelming to look at a huge stream of alerts in a single place (on top of that, iOS treats every notification as its own cell and does not group multiple notifications into one). Now app badges are designed for this very purpose — so that each app can tell you that it needs your attention. However, too many badges could lead to divided attention, distraction and anxiety. Badges could have a similar effect to Android’s persistent notifications but only worse — they cannot be cleared easily. How Might We...? Create a notification system that has balanced interruption but also provides powerful techniques for notification management. BONUS How might we..? Create a notification system that does not make notifications hard to manage by stashing them all in a drawer. Triggers — what leads to a notification? Currently, notifications are organized by time and by app. Android goes a step ahead though — in my observation over a few months, notifications seem to be organized by priority. For example, a call or a reminder may stay on top but other items get shuffled to the bottom of the list as more items accumulate over time. In order to truly understand the different types of notifications, we must know the different types of triggers that cause a notification. A trigger (in this context) is any human or system that causes a notification on the end-user’s smartphone. When we look at a notification, we know the person or system behind it; and that is what makes triggers important. Our smartphones do not categorize our notifications based on triggers. Here are some examples of the different kinds of triggers (and this just scratches the surface)–
https://uxdesign.cc/a-critical-analysis-of-notification-systems-4956ed86a804
[]
2017-06-14 16:09:57.814000+00:00
['iOS', 'Design', 'User Experience', 'Android', 'UX']
How You Practice With Leetcode for Interviews Is Probably Bad
Conclusion To recap, Leetcode is not inherently bad. I think that Leetcode is kind of like riding your bike with training wheels, and in an interview, you won’t have those training wheels for support. It’s always great to practice in an environment that mirrors what the real setting will be like. More often than not, the people interviewing you will probably be the same people you’ll be working with if you were to get the offer. Your interviewers want to make sure you are smart, but also want to make sure you don’t have a big ego and are reasonably easy to work with. Additionally, it’s 100% completely okay if you can’t solve a LeetCode problem and have to look at the solution. At the end of the day, we do all of this practice for a real technical interview, and I want to showcase what you can expect. The data structures and coding questions being asked haven’t changed that much in the past couple of decades, nor will they change any time soon. Any effort you put into preparing for technical or behavioral interviews today will help you down the road when you interview again (or even at your job or when making side projects). The next time you are practicing Leetcode questions, try remembering some of the points above and start treating practice like an actual interview. I used to record myself when I did Leetcode to see how I sounded. I had a bad habit of rambling or making stuff up (saying “Ummm” or “ugh”) when I didn’t know the answer. Over time I’ve gotten better at it, but I would have never noticed it without recording myself. It’s probably going to be awkward watching yourself, but self-reflection is arguably one of the best ways to get better at these things.
https://medium.com/better-programming/how-you-practice-with-leetcode-for-interviews-is-probably-bad-d4ee2bd7b05f
['Nathan Patnam']
2020-08-28 11:32:23.762000+00:00
['Software Engineering', 'Interview', 'Programming', 'Leetcode', 'Technical Interview']
Part 2: Spin up virtual machine on AWS and use it to host your flask server
2.33 Open the necessary ports
- add SSH rule for port 22 with source ::/0
- add custom TCP rule for port 3333 and source 0.0.0.0/0
- add custom TCP rule for port 3333 and source ::/0
After you have added the rules, your security group should look like this. Before anything more, navigate back to the running instances, and find your Public DNS (IPv4) number. You might also want to copy down your IPv4 Public IP. 2.4 Set up the Config file in the .ssh folder on your local machine Navigate to the .ssh folder and open the config file (if you don’t have one, make one without an extension). Put the following text into the config file:
Host virtual
    HostName Your_Public_DNS_goes_here
    User ubuntu
    IdentityFile ~/.ssh/VirtualMachine.pem   <- path_to_pem_file
Save your config file and go to the terminal. By typing the following command, you should be able to access your virtual machine: ssh virtual 2.5 Adding your flask server to the new machine So now flashback to part 1 of the tutorial. We made a Flask server! Let’s get it onto the new machine and run our server remotely! On your local machine go to the directory above flask_website. Type these commands:
# Copy the files over
scp -r flask_website/ virtual:
# ssh into the virtual machine
ssh virtual
# change directories into the website folder
cd flask_website
# start the server
python server.py
To now access the website, take your IP and add :3333 to it, so in my case it would be: http://54.209.214.30:3333/ Or alternatively, you can use the following commands so that you can access the site at any time!
# ssh into the machine
ssh virtual
# go into the website directory
cd flask_website
# start a tmux session
tmux
# run the server
python server.py
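For reference, part 1's server.py is not reproduced in this part; a minimal sketch of a Flask app that would match the port 3333 setup above could look like this (the route and the message are placeholders, not the original code):
# server.py (minimal sketch, assuming a single route served on port 3333)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the EC2-hosted Flask server!"

if __name__ == "__main__":
    # Binding to 0.0.0.0 makes Flask listen on all interfaces, so traffic
    # allowed by the port 3333 security-group rule can actually reach it.
    app.run(host="0.0.0.0", port=3333)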
https://medium.com/future-vision/part-2-spin-up-virtual-machine-on-aws-and-use-it-to-host-your-flask-server-b06edb7e00ee
['Elliott Saslow']
2018-07-23 22:03:54.987000+00:00
['Flask', 'Amazon Web Services', 'Developer', 'Cloud Computing', 'Ec2']
Splitting a string column and getting unique values and frequency in Python
Splitting a string column and getting unique values and frequency in Python With two methods: For loop and Counter container Photo by Jon Tyson on Unsplash What the problem and target are: In order to simplify the problem, I take out two columns from my working file, which is a Stackoverflow yearly Survey file, as below: You can download the file from StackOverflow survey 2017, use the survey_results_public.csv file, and extract two columns: Respondent and CousinEducation to test the following codes. My target is to get the unique values for the column CousinEducation. The data in this column is of string type, separated by semicolons, but the number of items (or semicolons) in one row is not fixed. I will introduce two methods to do it. Let’s start the journey. Have a look at the data: Import the library, read the file, and check the size of the file as below: Have a look at what the data looks like, with the value_counts method, which is a very common method to deal with categorical data. Very good. It groups according to the answers of CousinEducation, but they are still far away from what I want. Let’s rename “index” into method and CousinEducation into count to make it much more meaningful. Method1: use for loop and list(set()) Split the strings in the column using split, and the result is as follows. Let’s check the type. Making sure of the data type helps me take the right actions, especially when I am not so sure. 2. Create a list including all of the items, which are separated by semicolons. Use the following code: Now this is how df1 looks: Great! We get much closer. Now search for the method of getting unique values. 3. Get the unique values As you know, df1 is a list. We use list(set()) to get the unique values from df1: It seems that there are some leading spaces for the same content; now delete the spaces. Now we need to get the unique values again, using the same method: list_3=list(set(list_2)). Great! We get unique values. Everything goes well, so let’s write it into a function to make it modular. Let’s have a test: Super, it gets the unique values! Now let’s try the second method. Method 2: use of the Counter container Counter is a container that keeps track of how many times equivalent values are added. The values can be accessed through a dictionary API. First, import the library: 2. Instead of using a for loop, we concatenate the strings in the Series/Index with a given separator at the beginning. 3. The next two steps: split the string and replace the spaces as before. 4. Now use the Counter container, which keeps track of how many times equivalent values are added. 5. Use keys() to get the unique values The result is: 6. The most exciting part is that we can get the unique list and the frequency through the method most_common(). Given i=13, the most_common() method will return a list as follows: 7. Now change the list into a DataFrame. 8. Let’s modularize it. The result is: Now you can see the evolution of the data. Lesson learned: In order to get the unique text from a DataFrame column that includes multiple texts separated by semicolons, two methods are introduced here:
Method 1:
- Use two for loops to get the list
- Use list(set()) to get the unique values from the list
- Use strip() to delete the leading or trailing spaces of the strings in the list
Method 2:
- Use Counter to get the container
- Use the keys() method to get unique values
- Use the most_common() method to get the unique values and frequency
Feel free to choose the one you prefer. Of course, you can replace the separator in your situation.
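The article's code appears as screenshots, so here is a compact reconstruction of both methods as a sketch; the column names follow the survey file mentioned above, while the variable names and the exact output format are my own:
import pandas as pd
from collections import Counter

df = pd.read_csv("survey_results_public.csv", usecols=["Respondent", "CousinEducation"])

# Method 1: two for loops + list(set()) + strip()
items = []
for row in df["CousinEducation"].dropna():
    for item in row.split(";"):
        items.append(item.strip())
unique_values = list(set(items))

# Method 2: concatenate the whole column once, split, then count with Counter
joined = df["CousinEducation"].dropna().str.cat(sep=";")
counts = Counter(item.strip() for item in joined.split(";"))
freq = pd.DataFrame(counts.most_common(), columns=["method", "count"])

print(len(unique_values))   # number of unique answers
print(freq.head())          # answers with their frequencies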
With the unique values, depending on your question, you can do a lot of further analysis. Thanks for your reading and happy coding.
https://towardsdatascience.com/splitting-the-text-column-and-getting-unique-values-in-python-367a9548d085
['Xue Wang']
2020-12-12 09:15:54.293000+00:00
['Data Wrangling', 'Python', 'Split Testing']
How to Solve Problems with Iteration and Good Design Research
Dave Breske is a UX designer who is passionate about simple, empathetic software. Through his work at Galaxy Digital, he makes software that helps connect volunteers with the causes they care about. How do you approach your work? I like to start by solely focusing on the problem, because without a really good idea of what your users actually need, it’s easy to solve the wrong problem. There’s a classic Einstein quote that goes something like “If I were given one hour to save the planet, I would spend 59 minutes defining the problem and one minute resolving it.” Especially in the corporate world, it’s not uncommon to get handed a project that already has some assumptions stapled onto it, and it can be tough to pump the brakes and spend some time really getting to the bottom of things. But it’s even harder to do that months later when devs are coding, and technical writers are documenting, and marketers are getting ready to sell. I specifically like to do personal interviews with our prospective users to get as close to the source as possible. Once I feel like I really understand the problem on an intimate level, then I start ideating. From there, you can go back to real users to test your ideas and see what has legs. What are the most important skills to develop in young designers? There are a bunch of good answers to this, but one that I don’t see talked about enough is to “think with a paper trail”. And by that I mean that you should do sketching, take notes, make diagrams and charts, and then keep all of it. And there are a few reasons why that’s a good practice. First, it helps justify that you’ve done the work. Sometimes you’ll get challenged on a solution, and you can say “here’s how we got here and why we think it’s the correct answer” with a lot more authority if you can show your work. Second, I’ve often found that there’s something valuable in an earlier draft of a project that I’d forgotten about when I go back and look at my notes. Like, you are on concept #10, but you look back on concept #2 and see something you were playing around with a few weeks ago that didn’t quite work. But now it fits in perfectly. I also think that you can get a lot further in less time with pen and paper than you can by going into Figma or Sketch right away, but that might just be because I’m a little old school. How do you quantify your impact? This is a really good question, but I don’t know that I have a great answer off-hand. My company is getting better here, but we haven’t figured out a really quantifiable system yet. Luckily, I’m in a situation where they recognize the importance of design and are willing to kind of trust the process, but it’s tough. Ideally, you’d be doing A/B testing and then seeing which designs perform better on key metrics. Like, if you’re working on a new check-out workflow for an eCommerce site, you can track how many clicks it takes for a user to complete the workflow, or how many customers are dropping out of the process before completion. How do you know when a design or a product is good? This is another really good question. For me, a good product is the result of identifying a real problem, understanding the people behind that need, and then solving that problem as simply and holistically as possible. Good design is thoughtful about who will use the product, and how the product will fit into their lives. And, importantly, good design is the result of a gauntlet of research, development, and testing. You ideally shouldn’t have to think a design is good.
You’ll know because you’ve tested the hell out of it. How does data inform design? This is a decent topic in and of itself. There’s a whole industry around tracking everything that your users do, anonymizing that data, and then using that “big data” to figure out where your product can be improved, what to develop next, etc. I’m not an expert on that, but there are a number of fascinating stories to be told there. For example, Netflix uses their algorithms to figure out which types of content their users would be most interested in. They analyze search results, what people watch, how long they watch, and what they skip over, and then go develop original programming to fill the gaps in the marketplace. Really cool stuff, but not my expertise. I use much smaller data in my day to day. We do user interviews, competitive research, and market research, which we use to figure out where there’s a need, and then we do more focused development and prototyping to fine-tune things. How do you think about product development and iteration? Iteration is everything in product development. The goal isn’t necessarily to hit a hole in one, but to win the long game by consistently driving toward the green and making your putts. You’ll get a lot further in a lot less time with a cycle of build-test-build-test-repeat than plan-plan-plan-build, and you’ll start getting crucial customer feedback sooner. Especially with software, the cost of being agile is pretty minimal — it’s not like you have to go spend hundreds of thousands of dollars retooling expensive manufacturing equipment every time you try something different. What is the role of research? What does good research look like? Good design research has to be empathetic. Raw data can tell you where an opportunity is or where there’s a problem, but you can’t rely on numbers to tell you how to fix a problem or how to build a product that your users love. That’s why there’s a human in the mix. It’s our job as designers to synthesize the data into something meaningful. And often, the problem that’s presented by your users is only a symptom of a larger need. Really good research challenges the conventional wisdom about a project, and it can help you find the solution that the user didn’t even know they wanted.
https://uxplanet.org/how-to-solve-problems-with-iteration-and-good-design-research-1d9d0c290d53
['Carbon Radio']
2020-10-15 13:02:54.887000+00:00
['UI', 'UI Design', 'Design', 'UX Design', 'UX']
Podcast Episode #4: HCI, Design, and Portfolios with Philip Guo
Philip Guo is an Assistant Professor of Cognitive Science at UCSD. His most recently taught classes are COGS 121: Human-Computer Interaction Portfolio Design Studio and COGS 127: Data-Driven UX/Product Design. In this podcast episode, Philip reflects on his teaching and research experiences in HCI and design thinking, and offers advice to students who are looking to build their portfolios and start side projects. Background Philip pursued a degree in Computer Science as an undergrad, but while his peers sought out industry positions after graduating, he was more inclined to follow a more research-centric route in academia. It was during his time in grad school when he became interested with how people interact with computers, rather than purely algorithmic work. This interest was expressed through the development of tools specifically for those doing computationally-based research, leading to the creation of Python tutor. Philip created Python Tutor to help students visualize code by creating diagrams that they can view as their code executes. It allows programmers to “step through their code one step at a time” online, much like how a personal tutor would walk us through our code, drawing diagrams and explaining along the way. Python tutor is widely used by beginner Python classes in universities and by individuals learning on their own. Other languages are supported as well, including Java, C, and C++. More on the differences between academia and industry, Philip talks about the trade-off between freedom and structure. With higher education, you are given more freedom to pursue your interests without necessarily needing a profitable outcome. Though of course, there is the aspect of obtaining funding for your research. It is not nearly as structured as industry where the profitable outcome is the general goal. Bigger companies have their own agendas and you are part of a team and have a specific role. Though, you also have more guidance and resources available to you. Like with all things, Philip says, deciding between academia and industry really depends on what your needs and goals are.
https://medium.com/ds3ucsd/podcast-episode-4-human-computer-interaction-design-thinking-and-portfolio-building-with-a20abf8c15cf
['Allison Chan']
2020-06-01 23:53:01.270000+00:00
['Design', 'Advice', 'Podcast', 'Interview', 'Data Science']
Paradox
Is it not paradoxical that the thing I was trying to escape, the thing that I dreaded the most, is exactly the same thing that got me into the right place, where I wanted to be, and by that, eventually taking me where I want to go next? People say that you never know where life is going to take you. I can tell you that they are right. For a long period of time, I was trying to escape from a job. Frustration, unhappiness, dissatisfaction, and at times even desperation. Feelings that, although they put me down more than once, also kept me going; they worked as a barometer of who I didn’t want to become, what I didn’t want to be and how I didn’t want to feel, and because of that, they pushed me to keep looking, to keep working, to keep creating, but more important, to keep acting. Now, one year later, I am here. I broke free. It didn’t happen by accident. It was hard work, commitment, dedication and sacrifice. It was not giving up despite the rejections. The “funny” thing, the paradox, is that, despite now being part of a different company, it is that past, the one that used to be my present, that is moving me forward. All those things that frustrated me are now a knowledge base that I can rely on, and that is also valuable. The moral of the story is simple. You never know where things are going to take you, yes, but also, never take things for granted. Keep showing up, do the work, keep the goal in mind. As hard as it can get at times, it is important to remember that time goes by. Whatever your present is, it will eventually become your past, but it can, or probably will, dictate your future. It is up to you how you play those cards. It is a matter of action, repetition and stamina. It is a long-term game, and you probably won’t get where you want to be on the first attempt, or the 10th, but doing nothing won’t help you either, and neither will giving up on trying and just quitting. You can look back and find yourself in the same place, or look back and contemplate how all the effort paid off. Right now I look back, feel surprised and smile. Who would ever have imagined?
https://medium.com/thoughts-on-the-go-journal/paradox-4c1fc64ea9b1
['Joseph Emmi']
2019-03-14 18:10:45.932000+00:00
['Motivation', 'Life Lessons', 'Goals', 'Life', 'Commitment']
Nazi-Normalizing Barf Journalism: A Brief History
by Dorothee Benz In the beginning was the profile of the Nazi next door, an inexplicable decision by the New York Times (11/25/17) to profile a right-wing extremist in the most sympathetic light possible. It was the most outrageous example of an outrageous genre of MSM — and particularly NYT — reporting: the never-ending effort to profile, study, explain, excuse and rationalize Trump voters. Without, of course, referring to them as racists. White men are always news that’s fit to print. The article was met with howls of protest across Twitter, but among the many apt responses, Bess Kalb’s description (11/25/17) captured my heart and gave me the single most useful phrase of the Trump era: “Nazi-normalizing barf journalism.” Again and again during Trump’s presidency, corporate media have fallen over themselves to find acceptable ways to describe utterly unacceptable behavior, policies and decisions, none more so than the New York Times. In every era, the Times’ center of gravity has been the legitimation of power, and the Trump era is no different. The paper’s obvious disdain for Donald Trump is continually cloaked in rationalizing headlines and descriptions. It’s as if they can’t help themselves: the stability of US institutions is more important than their integrity, and so they must normalize what should never be normalized. Just three examples: “Trump’s Embrace of Racially Charged Past Puts Republicans in Crisis” (8/16/17): This headline refers to Trump’s “very fine people” defense of neo-Nazis at the Charlottesville white supremacist rally where James Fields drove a car into a crowd of protesters, killing one (Heather Heyer) and injuring dozens, many seriously. “Racially charged past” = Confederate monuments celebrating the defense of chattel slavery. “Ocasio-Cortez Calls Migrant Detention Centers ‘Concentration Camps,’ Eliciting Backlash” (6/18/19): The headline suggests that the veracity of Alexandria Ocasio-Cortez’s description is up for debate, when, in fact, it is simply accurate terminology. Indeed, the subsequent rise of #JewsAgainstICE underscored that truth with the particular credibility that Jewish people bring to conversations about ethnic cleansing. The Times chose to cover the moment as a “she said, she said” debate between liberal Democrats like Ocasio-Cortez and Republicans like Liz Cheney. After a Green Bay rally (4/27/19) in which Trump called the media “sick people” and the officials he’s forced out of government “scum,” and accused Democrats of supporting infanticide (Vox, 4/29/19), the Times put out a tweet (4/28/19) saying that with the infanticide charge, “Trump revived an inaccurate refrain.” You get the idea. All of it is Nazi-normalizing barf journalism. In wrapping human rights abuses, lawbreaking, lies, corruption, cruelty, racism, misogyny and other abhorrent dimensions of the Trump administration in the familiar language and themes of Washington politics, the Times is actively helping stabilize the regime. We read these headlines and think “business as usual” rather than “this is intolerable, I must act.” In a recent example I find particularly troubling, the New York Times (10/13/19) reported on a video meme mashup, shown at a pro-Trump conference at one of Trump’s resorts in early October, showing Trump massacring members of the media and political opponents.
In an era where both hate crimes and domestic terrorism (including mass shootings) are rising at an alarming rate, the celebration of violence in the name of the Trump brand is a disturbing escalation in the normalization of political violence. Trump has long invited his followers to violence. On the campaign trail, he promised to pay legal costs if his supporters beat up protesters, and advocated for torture “ much stronger than waterboarding.” In September, he suggested that whistleblowers should be executed. He has pardoned war criminals and other human rights abusers (e.g., Michael Behenna, Joe Arpaio). Trump also admires and glorifies violent authoritarians, like Rodrigo Duterte, Recep Ergodan, Kim Jong-un and, of course, Vladimir Putin. All this in addition to the violence his policies are wreaking. There is not a direct line between Trump referring to immigrants as vermin who will “infest our country” and the massacre of immigrants at an El Paso Walmart. Neither the antisemitic conspiracy theory that George Soros was funding migrant “caravans” from Central America, nor Trump’s lie about Middle Easterners infiltrating the caravans, is solely responsible for the murderous attack on a Jewish synagogue in Pittsburgh. Yet if these bigoted tropes did not cause massacres, surely they are part of the environment that has fueled them. Speaking of antisemitism, Trump’s “fake news” has always been one shade shy of Hitler’s “Lügenpresse” (“lying press”). White supremacists have long referred to the paper of record as the “ Jew York Times.” Given Trump’s constant description of the media as “the enemy of the people,” the possibility is ever-present, in this age of mass shootings, that someone will walk into a newsroom and open fire. If I’m honest, it surprises me that this has happened only once since Trump took office. None of this found its way into the Times ‘ coverage of the video. Instead, there are denials by lots of people, saying they neither saw nor knew about the video; and then this at the end: Throughout his 2016 campaign and presidency, Mr. Trump has sought to demonize the news media, partly out of frustration about the coverage of his administration and partly because he likes to have an opponent to target. This is Nazi-normalizing barf journalism. Poor Trump; he’s just frustrated by the bad press. Responding by labeling journalists liars and enemies of the people is just what most of us would do in the same situation. Plus, he likes having a foil. A reasonable strategy. I want to know how Michael Schmidt and Maggie Haberman (the bylined reporters) know that these are Trump’s reasons for demonizing the fourth estate. It’s a pretty definitive sentence, assigning motivation without any source or documentation. (Unlike the following sentence, which has at least anonymous sources: “Mr. Trump has also sought to undermine confidence in the mainstream media, some of his advisers acknowledge privately, to make people doubt the accuracy of less favorable accounts of what goes on in his administration.”) But more importantly, no, it is not OK for the president of the United States to baldly claim that documented reporting is “fake news”; that media are the enemy; and that journalists are bad people. It is, in fact, extremely dangerous. It undermines one of the most important checks on government power, and, as the video itself attests, it invites violence against journalists. The New York Times should say so. 
You can send a message to the New York Times at [email protected] (Twitter:@NYTimes). Please remember that respectful communication is the most effective.
https://drbenz3.medium.com/nazi-normalizing-barf-journalism-a-brief-history-9e39b17f8121
['Dorothee Benz']
2019-11-04 15:30:23.227000+00:00
['Journalism', 'Media Criticism', 'Politics', 'Media']
I Need to Know — 40 Years of Narcissistic abuse have taken their Toll.
I Need to Know — 40 Years of Narcissistic abuse have taken their Toll. I see other people. Their world is different than mine. Child alone at parents’ house, from pixabay.com I need to know that I’m not a loser. I need to know that I’m not a burden on everyone I talk to. I need to know that people aren’t embarrassed to be seen with me. I need to know that you know what happened in my life.
https://medium.com/illumination/i-need-to-know-40-years-of-narcissistic-abuse-have-taken-their-toll-5ab449c80e70
['Markus Scorelius']
2020-12-25 14:46:11.005000+00:00
['Npd', 'Narcissism', 'Mental Health', 'This Happened To Me', 'Gods Presence']