Why journalism schools are teaching students artificial intelligence
“These technologies have much more in common with power tools than sci-fi superhuman robots — in the hands of a skilled user, they act as a force multiplier in the speed, breadth and scale of content that can be produced.”

We opened the session at Columbia Journalism School with an overview of best practices when deploying new technology, followed by hands-on training on two tools widely used by newsrooms around the world: Automated Insights and Wibbitz. Representatives from each platform gave students an overview of their software, answered questions and provided suggestions for how students could improve their automated content.

“Learning AI tools before hitting the newsroom will hopefully put me one step ahead in understanding how to save time on simple tasks and concentrate on more complex ones,” said Cecilia Butini, a Columbia Journalism School student who attended the workshop.

What the workshop facilitators found most surprising was how quickly every aspiring “automation editor” learned to use these tools.

Wibbitz CEO Zohar Dayan presents the principles of video automation

“It was amazing to see a room full of future journalists dive in and start using a brand-new tool that they were just introduced to minutes prior,” said Zohar Dayan, the co-founder and CEO of Wibbitz. “Not only did they create great videos, but we also saw that they really enjoyed the process.”

Text Automation

Wordsmith is Automated Insights’ text automation platform.

During the workshop, participants learned how to turn data into a text report using Automated Insights, an AI tool that enables journalists to develop dynamic templates that convert structured data into human-readable articles.
This approach works well for sports, finance and any other form of structured story.

Video Automation

Wibbitz’s Control Room lets journalists automatically turn story URLs into videos.

Students learned how to operate Wibbitz, a platform that uses image recognition to automatically create rough-cut videos composed of archived images and videos that match a given text. Here’s a great example of an automated video produced by two Columbia Journalism School students, Deanna Paul and Taryana Odayar:
https://medium.com/tow-center/why-journalism-schools-are-teaching-students-artificial-intelligence-5db423701dc7
['Francesco Marconi']
2017-10-18 18:28:52.533000+00:00
['Journalism', 'Artificial Intelligence', 'Innovation', 'Media', 'University']
Developing Your Philosophers Toolkit
Validity & Soundness. These are terms you might have heard before, but in philosophy they have a special type of meaning. To discover the truth of an argument, there are two things you’ll need to establish.

a. Validity

First and foremost, you need to know whether an argument is valid. Validity refers to the logical part of an argument. In short, it asks whether what has been presented logically follows. It does so without concern for the truth of the premises. Instead, it assumes, for the sake of argument, that what is being said is actually true. Therefore, an argument is valid where, if the premises are true, it logically follows that the conclusion is true.

A really simple example is Aquinas’ cosmological argument for the existence of God:

Premise 1: Everything that exists has a cause.
Premise 2: Infinite regress is impossible (that is, the chain of causes has to stop somewhere — it can’t go on for infinity).
Conclusion: Therefore, there has to be a first causer. We call that being “God.”

This argument is valid because if we assume, for the sake of argument, that Premises 1 and 2 are true, it logically follows that the conclusion is true. If the argument isn’t valid, it’s incoherent and doesn’t logically follow — so you should just reject it. But if it is valid, then to determine the argument’s truth, you must go on to assess whether it is sound.

b. Soundness

You’ve now established the argument is coherent — in effect, it’s logical and could be true. Now you need to ask yourself whether it really is. This is where soundness comes in: it focuses on each individual premise and evaluates its truth. Therefore, an argument is sound if and only if the argument is valid and the premises are in fact true. This is often difficult to establish, of course. In the cosmological case, to determine whether the argument is sound you have to run through each premise and ask: Does everything really require a cause? Is infinite regress really impossible?
As this is a discussion on the existence of God, philosophers are happy to admit the cosmological argument is valid — but whether the premises are true is widely debated. If you conclude that the argument is sound, then the conclusion is true. In this case, if the cosmological argument is sound — God really does exist.
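The two definitions can be summarized in a compact, schematic form (this gloss is mine, not the article’s; P1 and P2 stand for the premises and C for the conclusion):

```latex
% Validity: IF the premises were true, the conclusion would have to be true.
\text{valid:} \quad (P_1 \land P_2) \vDash C

% Soundness: the argument is valid AND its premises are actually true.
\text{sound:} \quad \big[(P_1 \land P_2) \vDash C\big] \;\land\; P_1 \land P_2
```

Note that soundness strictly implies validity, which is why the article checks validity first.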
https://medium.com/the-apeiron-blog/developing-your-philosophers-toolkit-9d0bca338319
['Jon Hawkins']
2020-05-31 17:21:33.469000+00:00
['Philosophy', 'Psychology', 'Politics', 'Books', 'Self']
What Brands Can Learn From Stans
What Brands Can Learn From Stans

A Lesson in Customer Loyalty from Eminem

WTF is a stan?

According to not Urban Dictionary, but the actual Oxford dictionary, a ‘stan’ is “an overzealous or obsessive fan.” A ‘stan’ is a fan who takes it to the next level. The origin of ‘stan’ can be traced to late 2000, after the release of Eminem’s song “Stan,” which depicts a crazed fan who drives his car off a bridge after not receiving a reply to any of his fan mail. While Stan was just a character’s name in Eminem’s song, many believe the term originates from the combination of “super/stalker” and “fan.” Over the last 20 years, the informal term has evolved into a verb — something that people now do for a particular celebrity. But just as celebrities have “super” or “stalker” fans, so do brands and organizations. They’re the 1% of the community. Those who submit 10s on Net Promoter Score surveys. Those that would die for the brand. Those that are “surefire.”

The Psychology of Stans

Fan culture is incredibly powerful because it strikes a noteworthy balance between collectivism and individualism. With intense fan culture, belonging effortlessly coexists with the self-perception of uniqueness. For example, fans of a sports team or artist have each other, which breeds community and safety with power in numbers, but a fan can also feel distinct enough in a sea of other fans of the same team or artist. In this sense, fandom is an effective strategy in identity construction. One can be unique but also comfortable in their uniqueness. From a brand POV, it’s worth noting that brand stans also help create identity. Perhaps one only wears Adidas stripes, or drives Jeeps, or reads The New Yorker. Their obsessive affinity helps denote a sense of self for themselves and others.

The Disconnect & Opportunity

But what many sports teams, artists or brands miss out on is fostering those obsessive fans. They’re taken for granted.
While these factions don’t need to be inspired to lean in, that doesn’t mean they don’t deserve any attention. In fact, the opposite could be argued: these fans deserve more attention than anyone else, to thank them and keep them motivated. There’s an opportunity for brands to see the parallels between celebrity stan culture and their own fervent customers. After all, every brand has its own “stans.” Especially for brands today who ask ad nauseam, “How can we get people to engage with us?” it’s worth noting that people out there are already doing it. You just don’t know where to look, or aren’t giving them attention. You don’t need to pay anyone to talk about you; people are already willing to do it for free. While these true influencers may not be as sexy, they’re at least organic. Connect with them, listen, and make them feel as special as they actually are. Here’s the lesson: Eminem had his overzealous fan, Stan, but by the time Eminem recognized Stan’s passion and finally found the time to write back, it was too late. Stan drove off a bridge. Ignore or take your zealots for granted, and you too may lose them. Brands must constantly be asking themselves, “What are we doing for our Stans, and can we ensure they feel recognized?” And if nothing, be ready to lose them.
https://medium.com/on-advertising/what-brands-can-learn-from-stans-d7fcd4466aa8
['Matt Klein']
2020-03-19 21:14:10.538000+00:00
['Branding', 'Advertising', 'Entrepreneurship', 'Marketing', 'Digital Marketing']
Java Development with Microsoft SQL Server
Java Development with Microsoft SQL Server

Calling Microsoft SQL Server Stored Procedures from Java Applications Using JDBC

Introduction

Enterprise software solutions often combine multiple technology platforms. Accessing an Oracle database from a Microsoft .NET application, or, conversely, accessing Microsoft SQL Server from a Java-based application, is common. In this post, we will explore the use of the JDBC (Java Database Connectivity) API to call stored procedures from a Microsoft SQL Server 2017 database and return data to a Java 11-based console application.

View of the post’s Java project from JetBrains’ IntelliJ IDE

The objectives of this post include:

- Demonstrate the differences between using static SQL statements and stored procedures to return data.
- Demonstrate three types of JDBC statements to return data: Statement, PreparedStatement, and CallableStatement.
- Demonstrate how to call stored procedures with input and output parameters.
- Demonstrate how to return single values and a result set from a database using stored procedures.

Why Stored Procedures?

To access data, many enterprise software organizations require their developers to call stored procedures within their code as opposed to executing static T-SQL (Transact-SQL) statements against the database. There are several reasons stored procedures are preferred:

- Optimization: Stored procedures are often written by database administrators (DBAs) or database developers who specialize in database development. They understand the best way to construct queries for optimal performance and minimal load on the database server. Think of it as a developer using an API to interact with the database.
- Safety and Security: Stored procedures are considered safer and more secure than static SQL statements. The stored procedure provides tight control over the content of the queries, preventing malicious or unintentionally destructive code from being executed against the database.
- Error Handling: Stored procedures can contain logic for handling errors before they bubble up to the application layer and possibly to the end-user.

AdventureWorks 2017 Database

For brevity, I will use an existing and well-known Microsoft SQL Server database, AdventureWorks. The AdventureWorks database was originally published by Microsoft for SQL Server 2008. Although a bit dated architecturally, the database comes prepopulated with plenty of data for demonstration purposes.

The HumanResources schema, one of five schemas within the AdventureWorks database

For the demonstration, I have created an Amazon RDS for SQL Server 2017 Express Edition instance on AWS. You have several options for deploying SQL Server, including AWS, Microsoft Azure, Google Cloud, or installed on your local workstation.

There are many methods to deploy the AdventureWorks database to Microsoft SQL Server. For this post’s demonstration, I used the AdventureWorks2017.bak backup file, which I copied to Amazon S3. I then enabled and configured the native backup and restore feature of Amazon RDS for SQL Server to import and install the backup.

    DROP DATABASE IF EXISTS AdventureWorks;
    GO

    EXECUTE msdb.dbo.rds_restore_database
        @restore_db_name='AdventureWorks',
        @s3_arn_to_restore_from='arn:aws:s3:::my-bucket/AdventureWorks2017.bak',
        @type='FULL',
        @with_norecovery=0;

    -- get task_id from output (e.g. 1)
    EXECUTE msdb.dbo.rds_task_status
        @db_name='AdventureWorks',
        @task_id=1;

Install Stored Procedures

For the demonstration, I have added four stored procedures to the AdventureWorks database to use in this post. To follow along, you will need to install these stored procedures, which are included in the GitHub project.
View of the new stored procedures from JetBrains’ IntelliJ IDE Database tab

Data Sources, Connections, and Properties

Using the latest Microsoft JDBC Driver 8.4 for SQL Server (version 8.4.1.jre11), we create a SQL Server data source, com.microsoft.sqlserver.jdbc.SQLServerDataSource, and database connection, java.sql.Connection. There are several patterns for creating and working with JDBC data sources and connections. This post does not necessarily focus on the best practices for creating or using either. In this example, the application instantiates a connection class, SqlConnection.java, which in turn instantiates the java.sql.Connection and com.microsoft.sqlserver.jdbc.SQLServerDataSource objects. The data source’s properties are supplied from an instance of a singleton class, ProjectProperties.java. This class instantiates the java.util.Properties class, which reads values from a configuration properties file, config.properties. On startup, the application creates the database connection, calls each of the example methods, and then closes the connection.

Examples

For each example, I will show the stored procedure, if applicable, followed by the Java method that calls the procedure or executes the static SQL statement. For brevity, I have left out the data source and connection code in the article. Again, a complete copy of all the code for this article is available on GitHub, including Java source code, SQL statements, helper SQL scripts, and a set of basic JUnit tests. Use the following command to git clone a local copy of the project:

    git clone --branch master --single-branch --depth 1 --no-tags \
        https://github.com/garystafford/mssql-sp-java-jdbc-2020.git

To run the JUnit unit tests, using Gradle, which the project is based on, use the ./gradlew cleanTest test --warning-mode none command.

A successful run of the JUnit tests

To build and run the application, using Gradle, use the ./gradlew run --warning-mode none command.
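As noted, the data source and connection code is omitted from the article. For readers who do not want to open the GitHub project, here is a rough, hypothetical sketch of what a singleton like the ProjectProperties.java class described above might look like. The structure and the property keys (e.g., "db.host") are my assumptions, not the article’s actual source:

```java
// Hypothetical sketch only: the real ProjectProperties.java is on GitHub and
// may differ. Property keys such as "db.host" are invented for illustration.
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

final class ProjectProperties {
    private static ProjectProperties instance;
    private final Properties props = new Properties();

    private ProjectProperties() {
        // config.properties is read from the classpath; if absent, the
        // Properties object simply stays empty and defaults are returned
        try (InputStream in =
                ProjectProperties.class.getResourceAsStream("/config.properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException ex) {
            throw new IllegalStateException("Unable to load config.properties", ex);
        }
    }

    // Lazily creates the single shared instance
    static synchronized ProjectProperties getInstance() {
        if (instance == null) {
            instance = new ProjectProperties();
        }
        return instance;
    }

    String get(String key, String defaultValue) {
        return props.getProperty(key, defaultValue);
    }
}
```

The connection class would then read values such as server name, port, and credentials from this singleton and set them on the SQLServerDataSource.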
The output of the Java console application

Example 1: SQL Statement

Before jumping into stored procedures, we will start with a simple static SQL statement. This example’s method, getAverageProductWeightST, uses the java.sql.Statement class. According to Oracle’s JDBC documentation, the Statement object is used for executing a static SQL statement and returning the results it produces. This SQL statement calculates the average weight of all products in the AdventureWorks database. It returns a solitary double numeric value. This example demonstrates one of the simplest methods for returning data from SQL Server.

    /**
     * Statement example, no parameters, returns Integer
     *
     * @return Average weight of all products
     */
    public double getAverageProductWeightST() {
        double averageWeight = 0;
        Statement stmt = null;
        ResultSet rs = null;
        try {
            stmt = connection.getConnection().createStatement();
            String sql = "WITH Weights_CTE(AverageWeight) AS" +
                    "(" +
                    " SELECT [Weight] AS [AverageWeight]" +
                    " FROM [Production].[Product]" +
                    " WHERE [Weight] > 0" +
                    " AND [WeightUnitMeasureCode] = 'LB'" +
                    " UNION" +
                    " SELECT [Weight] * 0.00220462262185 AS [AverageWeight]" +
                    " FROM [Production].[Product]" +
                    " WHERE [Weight] > 0" +
                    " AND [WeightUnitMeasureCode] = 'G')" +
                    "SELECT ROUND(AVG([AverageWeight]), 2)" +
                    "FROM [Weights_CTE];";
            rs = stmt.executeQuery(sql);
            if (rs.next()) {
                averageWeight = rs.getDouble(1);
            }
        } catch (Exception ex) {
            Logger.getLogger(RunExamples.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
        }
        return averageWeight;
    }

Example 2: Prepared Statement

Next, we will execute almost the same static SQL statement as in Example 1.
The only change is the addition of the column name, averageWeight. This allows us to parse the results by column name, making the code easier to understand as opposed to using the numeric index of the column as in Example 1. Also, instead of using the java.sql.Statement class, we use the java.sql.PreparedStatement class. According to Oracle’s documentation, a SQL statement is precompiled and stored in a PreparedStatement object. This object can then be used to execute this statement multiple times efficiently.

    /**
     * PreparedStatement example, no parameters, returns Integer
     *
     * @return Average weight of all products
     */
    public double getAverageProductWeightPS() {
        double averageWeight = 0;
        PreparedStatement pstmt = null;
        ResultSet rs = null;
        try {
            String sql = "WITH Weights_CTE(averageWeight) AS" +
                    "(" +
                    " SELECT [Weight] AS [AverageWeight]" +
                    " FROM [Production].[Product]" +
                    " WHERE [Weight] > 0" +
                    " AND [WeightUnitMeasureCode] = 'LB'" +
                    " UNION" +
                    " SELECT [Weight] * 0.00220462262185 AS [AverageWeight]" +
                    " FROM [Production].[Product]" +
                    " WHERE [Weight] > 0" +
                    " AND [WeightUnitMeasureCode] = 'G')" +
                    "SELECT ROUND(AVG([AverageWeight]), 2) AS [averageWeight]" +
                    "FROM [Weights_CTE];";
            pstmt = connection.getConnection().prepareStatement(sql);
            rs = pstmt.executeQuery();
            if (rs.next()) {
                averageWeight = rs.getDouble("averageWeight");
            }
        } catch (Exception ex) {
            Logger.getLogger(RunExamples.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
            if (pstmt != null) {
                try {
                    pstmt.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
        }
        return averageWeight;
    }

Example 3: Callable Statement

In this example, the average product weight query has been moved into a stored procedure.
The procedure is identical in functionality to the static statement in the first two examples. To call the stored procedure, we use the java.sql.CallableStatement class. According to Oracle’s documentation, the CallableStatement extends PreparedStatement. It is the interface used to execute SQL stored procedures. The CallableStatement accepts both input and output parameters; however, this simple example does not use either. Like the previous two examples, the procedure returns a double numeric value.

    CREATE OR ALTER PROCEDURE [Production].[uspGetAverageProductWeight]
    AS
    BEGIN
        SET NOCOUNT ON;
        WITH Weights_CTE(AverageWeight) AS (
            SELECT [Weight] AS [AverageWeight]
            FROM [Production].[Product]
            WHERE [Weight] > 0
                AND [WeightUnitMeasureCode] = 'LB'
            UNION
            SELECT [Weight] * 0.00220462262185 AS [AverageWeight]
            FROM [Production].[Product]
            WHERE [Weight] > 0
                AND [WeightUnitMeasureCode] = 'G'
        )
        SELECT ROUND(AVG([AverageWeight]), 2)
        FROM [Weights_CTE];
    END
    GO

The calling Java method is shown below.

    /**
     * CallableStatement, no parameters, returns Integer
     *
     * @return Average weight of all products
     */
    public double getAverageProductWeightCS() {
        CallableStatement cstmt = null;
        double averageWeight = 0;
        ResultSet rs = null;
        try {
            cstmt = connection.getConnection().prepareCall(
                    "{call [Production].[uspGetAverageProductWeight]}");
            cstmt.execute();
            rs = cstmt.getResultSet();
            if (rs.next()) {
                averageWeight = rs.getDouble(1);
            }
        } catch (Exception ex) {
            Logger.getLogger(RunExamples.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
            if (cstmt != null) {
                try {
                    cstmt.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
        }
        return averageWeight;
    }

Example 4: Calling a Stored Procedure with an Output Parameter

In this example, we use almost the same stored procedure as in Example 3. The only difference is the inclusion of an output parameter. This time, instead of returning a result set with a value in a single unnamed column, the column has a name, averageWeight. We can now call that column by name when retrieving the value. The stored procedure patterns found in Examples 3 and 4 are both commonly used. One procedure uses an output parameter, and one does not, but both return the same value. You can use the CallableStatement for either type.

    CREATE OR ALTER PROCEDURE [Production].[uspGetAverageProductWeightOUT]
        @averageWeight DECIMAL(8, 2) OUT
    AS
    BEGIN
        SET NOCOUNT ON;
        WITH Weights_CTE(AverageWeight) AS (
            SELECT [Weight] AS [AverageWeight]
            FROM [Production].[Product]
            WHERE [Weight] > 0
                AND [WeightUnitMeasureCode] = 'LB'
            UNION
            SELECT [Weight] * 0.00220462262185 AS [AverageWeight]
            FROM [Production].[Product]
            WHERE [Weight] > 0
                AND [WeightUnitMeasureCode] = 'G'
        )
        SELECT @averageWeight = ROUND(AVG([AverageWeight]), 2)
        FROM [Weights_CTE];
    END
    GO

The calling Java method is shown below.

    /**
     * CallableStatement example, (1) output parameter, returns Integer
     *
     * @return Average weight of all products
     */
    public double getAverageProductWeightOutCS() {
        CallableStatement cstmt = null;
        double averageWeight = 0;
        try {
            cstmt = connection.getConnection().prepareCall(
                    "{call [Production].[uspGetAverageProductWeightOUT](?)}");
            cstmt.registerOutParameter("averageWeight", Types.DECIMAL);
            cstmt.execute();
            averageWeight = cstmt.getDouble("averageWeight");
        } catch (Exception ex) {
            Logger.getLogger(RunExamples.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            if (cstmt != null) {
                try {
                    cstmt.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
        }
        return averageWeight;
    }

Example 5: Calling a Stored Procedure with an Input Parameter

In this example, the procedure returns a result set of type java.sql.ResultSet, of employees whose last name starts with a particular sequence of characters (e.g., ‘M’ or ‘Sa’). The sequence of characters is passed as an input parameter, lastNameStartsWith, to the stored procedure using the CallableStatement. The method making the call iterates through the rows of the result set returned by the stored procedure, concatenating multiple columns to form the employee’s full name as a string. Each full name string is then added to an ordered collection of strings, a List<String> object. The List instance is returned by the method. You will notice this procedure takes a little longer to run because of the use of the LIKE operator. The database server has to perform pattern matching on each last name value in the table to determine the result set.

    CREATE OR ALTER PROCEDURE [HumanResources].[uspGetEmployeesByLastName]
        @lastNameStartsWith VARCHAR(20) = 'A'
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT p.[FirstName], p.[MiddleName], p.[LastName], p.[Suffix],
               e.[JobTitle], m.[EmailAddress]
        FROM [HumanResources].[Employee] AS e
            LEFT JOIN [Person].[Person] p ON e.[BusinessEntityID] = p.[BusinessEntityID]
            LEFT JOIN [Person].[EmailAddress] m ON e.[BusinessEntityID] = m.[BusinessEntityID]
        WHERE e.[CurrentFlag] = 1
            AND p.[PersonType] = 'EM'
            AND p.[LastName] LIKE @lastNameStartsWith + '%'
        ORDER BY p.[LastName], p.[FirstName], p.[MiddleName]
    END
    GO

The calling Java method is shown below.
    /**
     * CallableStatement example, (1) input parameter, returns ResultSet
     *
     * @param lastNameStartsWith
     * @return List of employee names
     */
    public List<String> getEmployeesByLastNameCS(String lastNameStartsWith) {
        CallableStatement cstmt = null;
        ResultSet rs = null;
        List<String> employeeFullName = new ArrayList<>();
        try {
            cstmt = connection.getConnection().prepareCall(
                    "{call [HumanResources].[uspGetEmployeesByLastName](?)}",
                    ResultSet.TYPE_SCROLL_INSENSITIVE,
                    ResultSet.CONCUR_READ_ONLY);
            cstmt.setString("lastNameStartsWith", lastNameStartsWith);
            boolean results = cstmt.execute();
            int rowsAffected = 0;
            // Protects against lack of SET NOCOUNT in stored procedure
            while (results || rowsAffected != -1) {
                if (results) {
                    rs = cstmt.getResultSet();
                    break;
                } else {
                    rowsAffected = cstmt.getUpdateCount();
                }
                results = cstmt.getMoreResults();
            }
            while (rs.next()) {
                employeeFullName.add(
                        rs.getString("LastName") + ", " +
                        rs.getString("FirstName") + " " +
                        rs.getString("MiddleName"));
            }
        } catch (Exception ex) {
            Logger.getLogger(RunExamples.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
            if (cstmt != null) {
                try {
                    cstmt.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
        }
        return employeeFullName;
    }

Example 6: Converting a Result Set to an Ordered Collection of Objects

In this last example, we pass two input parameters, productColor and productSize, to a stored procedure. The stored procedure returns a result set containing several columns of product information. This time, the example’s method iterates through the result set returned by the procedure and constructs an ordered collection of products, a List<Product> object. The Product objects in the list are instances of the Product.java POJO class.
The method converts each result set row’s field values into Product properties (e.g., Product.Size, Product.Model). Using a collection is a common method for persisting data from a result set in an application.

    CREATE OR ALTER PROCEDURE [Production].[uspGetProductsByColorAndSize]
        @productColor VARCHAR(20),
        @productSize INTEGER
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT p.[ProductNumber], m.[Name] AS [Model], p.[Name] AS [Product],
               p.[Color], p.[Size]
        FROM [Production].[ProductModel] AS m
            INNER JOIN [Production].[Product] AS p ON m.[ProductModelID] = p.[ProductModelID]
        WHERE (p.[Color] = @productColor)
            AND (p.[Size] = @productSize)
        ORDER BY p.[ProductNumber], [Model], [Product]
    END
    GO

The calling Java method is shown below.

    /**
     * CallableStatement example, (2) input parameters, returns ResultSet
     *
     * @param color
     * @param size
     * @return List of Product objects
     */
    public List<Product> getProductsByColorAndSizeCS(String color, String size) {
        CallableStatement cstmt = null;
        ResultSet rs = null;
        List<Product> productList = new ArrayList<>();
        try {
            cstmt = connection.getConnection().prepareCall(
                    "{call [Production].[uspGetProductsByColorAndSize](?, ?)}",
                    ResultSet.TYPE_SCROLL_INSENSITIVE,
                    ResultSet.CONCUR_READ_ONLY);
            cstmt.setString("productColor", color);
            cstmt.setString("productSize", size);
            boolean results = cstmt.execute();
            int rowsAffected = 0;
            // Protects against lack of SET NOCOUNT in stored procedure
            while (results || rowsAffected != -1) {
                if (results) {
                    rs = cstmt.getResultSet();
                    break;
                } else {
                    rowsAffected = cstmt.getUpdateCount();
                }
                results = cstmt.getMoreResults();
            }
            while (rs.next()) {
                Product product = new Product(
                        rs.getString("Product"),
                        rs.getString("ProductNumber"),
                        rs.getString("Color"),
                        rs.getString("Size"),
                        rs.getString("Model"));
                productList.add(product);
            }
        } catch (Exception ex) {
            Logger.getLogger(RunExamples.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
            if (cstmt != null) {
                try {
                    cstmt.close();
                } catch (SQLException ex) {
                    Logger.getLogger(RunExamples.class.getName()).log(Level.WARNING, null, ex);
                }
            }
        }
        return productList;
    }

Proper T-SQL: Schema Reference and Brackets

You will notice in all T-SQL statements, I refer to the schema as well as the table or stored procedure name (e.g., {call [Production].[uspGetAverageProductWeightOUT](?)}). According to Microsoft, it is always good practice to refer to database objects by a schema name and the object name, separated by a period; that even includes the default schema (e.g., dbo). You will also notice I wrap the schema and object names in square brackets (e.g., SELECT [ProductNumber] FROM [Production].[ProductModel]). The square brackets indicate that the name represents an object and not a reserved word (e.g., CURRENT or NATIONAL). By default, SQL Server adds these to make sure the scripts it generates run correctly.

Running the Examples

The application will display the name of the method being called, a method description, the duration of time it took to retrieve the data, and any results returned by the method. Below, we see the results.

SQL Statement Performance

This post is certainly not about SQL performance, demonstrated by the fact I am only using Amazon RDS for SQL Server 2017 Express Edition on a single, very underpowered db.t2.micro Amazon RDS instance type. However, I have added a timer feature, the ProcessTimer.java class, to capture the duration of time each example takes to return data, measured in milliseconds. The ProcessTimer.java class is part of the project code. Using the timer, you should observe significant differences between the first run and subsequent runs of the application for several of the called methods.
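The ProcessTimer.java class itself lives in the GitHub project; a minimal sketch of such a millisecond timer (my assumption of its shape, not the actual source) could be as simple as:

```java
// Hypothetical sketch of a millisecond process timer; the real
// ProcessTimer.java in the GitHub project may differ.
final class ProcessTimer {
    private long start;

    // Records the starting instant using the JVM's monotonic clock,
    // which is not affected by wall-clock adjustments
    void start() {
        start = System.nanoTime();
    }

    // Elapsed time since start(), in whole milliseconds
    long elapsedMillis() {
        return (System.nanoTime() - start) / 1_000_000L;
    }
}
```

Each example method would then be bracketed by timer.start() and timer.elapsedMillis() to produce the durations shown in the output below.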
The time difference is a result of several factors, primarily pre-compilation of the SQL statements and SQL Server plan caching. The effects of these two factors are easily demonstrated by clearing the SQL Server plan cache (see SQL script below) using DBCC (Database Console Commands) statements, and then running the application twice in a row. The second time, pre-compilation and plan caching should result in significantly faster times for the prepared statements and callable statements, in Examples 2–6. In the two random runs shown below, we see up to a 5x improvement in query time.

    USE AdventureWorks;

    DBCC FREESYSTEMCACHE('SQL Plans');
    GO

    CHECKPOINT;
    GO

    -- Impossible to run with Amazon RDS for Microsoft SQL Server on AWS
    -- DBCC DROPCLEANBUFFERS;
    -- GO

The first run results are shown below.

    SQL SERVER STATEMENT EXAMPLES
    ======================================
    Method: GetAverageProductWeightST
    Description: Statement, no parameters, returns Integer
    Duration (ms): 122
    Results: Average product weight (lb): 12.43
    ---
    Method: GetAverageProductWeightPS
    Description: PreparedStatement, no parameters, returns Integer
    Duration (ms): 146
    Results: Average product weight (lb): 12.43
    ---
    Method: GetAverageProductWeightCS
    Description: CallableStatement, no parameters, returns Integer
    Duration (ms): 72
    Results: Average product weight (lb): 12.43
    ---
    Method: GetAverageProductWeightOutCS
    Description: CallableStatement, (1) output parameter, returns Integer
    Duration (ms): 623
    Results: Average product weight (lb): 12.43
    ---
    Method: GetEmployeesByLastNameCS
    Description: CallableStatement, (1) input parameter, returns ResultSet
    Duration (ms): 830
    Results: Last names starting with 'Sa': 7
    Last employee found: Sandberg, Mikael Q
    ---
    Method: GetProductsByColorAndSizeCS
    Description: CallableStatement, (2) input parameter, returns ResultSet
    Duration (ms): 427
    Results: Products found (color: 'Red', size: '44'): 7
    First product: Road-650 Red, 44 (BK-R50R-44)
    ---

The second run results are shown below.

    SQL SERVER STATEMENT EXAMPLES
    ======================================
    Method: GetAverageProductWeightST
    Description: Statement, no parameters, returns Integer
    Duration (ms): 116
    Results: Average product weight (lb): 12.43
    ---
    Method: GetAverageProductWeightPS
    Description: PreparedStatement, no parameters, returns Integer
    Duration (ms): 89
    Results: Average product weight (lb): 12.43
    ---
    Method: GetAverageProductWeightCS
    Description: CallableStatement, no parameters, returns Integer
    Duration (ms): 80
    Results: Average product weight (lb): 12.43
    ---
    Method: GetAverageProductWeightOutCS
    Description: CallableStatement, (1) output parameter, returns Integer
    Duration (ms): 340
    Results: Average product weight (lb): 12.43
    ---
    Method: GetEmployeesByLastNameCS
    Description: CallableStatement, (1) input parameter, returns ResultSet
    Duration (ms): 139
    Results: Last names starting with 'Sa': 7
    Last employee found: Sandberg, Mikael Q
    ---
    Method: GetProductsByColorAndSizeCS
    Description: CallableStatement, (2) input parameter, returns ResultSet
    Duration (ms): 208
    Results: Products found (color: 'Red', size: '44'): 7
    First product: Road-650 Red, 44 (BK-R50R-44)
    ---

Conclusion

This post has demonstrated several methods for querying and calling stored procedures from a SQL Server 2017 database using JDBC with the Microsoft JDBC Driver 8.4 for SQL Server. Although the examples are quite simple, the same patterns can be used with more complex stored procedures, with multiple input and output parameters, which not only select, but also insert, update, and delete data. There may be some limitations of the Microsoft JDBC Driver for SQL Server that you should be aware of; consult the documentation. However, for most tasks that require database interaction, the driver provides adequate functionality with SQL Server.
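One closing design note: the examples in this post close the ResultSet and statement objects with explicit try/catch/finally blocks. On Java 7 and later, the same cleanup can be expressed more compactly with try-with-resources, which closes each resource automatically, in reverse order, even when an exception is thrown. The sketch below reworks Example 1 in that style; it is my illustrative variant under that assumption, not the article's actual code:

```java
// Sketch: Example 1 reworked with try-with-resources. Requires a live
// SQL Server Connection with the AdventureWorks database; not runnable
// standalone.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

final class TryWithResourcesSketch {
    // Conversion factor used by the article's CTE: grams to pounds
    static final double GRAMS_TO_POUNDS = 0.00220462262185;

    // The Statement and ResultSet are closed automatically when the
    // try block exits, replacing the explicit finally blocks
    static double getAverageProductWeight(Connection conn) throws SQLException {
        String sql = "WITH Weights_CTE(AverageWeight) AS ("
                + " SELECT [Weight] AS [AverageWeight]"
                + " FROM [Production].[Product]"
                + " WHERE [Weight] > 0 AND [WeightUnitMeasureCode] = 'LB'"
                + " UNION"
                + " SELECT [Weight] * 0.00220462262185 AS [AverageWeight]"
                + " FROM [Production].[Product]"
                + " WHERE [Weight] > 0 AND [WeightUnitMeasureCode] = 'G')"
                + " SELECT ROUND(AVG([AverageWeight]), 2) FROM [Weights_CTE];";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next() ? rs.getDouble(1) : 0;
        }
    }
}
```

The trade-off is that try-with-resources swallows the per-resource close() error handling the article demonstrates; exceptions raised during close are attached to the primary exception as suppressed exceptions instead.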
https://towardsdatascience.com/java-development-with-microsoft-sql-server-ee6efd13f799
['Gary A. Stafford']
2020-12-03 13:51:33.509000+00:00
['Microsoft Sql Server', 'Java', 'Development', 'Programming', 'Microsoft']
3 Honorable Traits That Can Stand in the Way of Your Happiness
The people we admire often share similar traits. We tend to look up to them and try to mimic their behavior. We want to reach their social standing, live a life similar to theirs, and become more trustworthy and respected ourselves. Imitating them seems like a sure-fire way to get to where we want to be. They might hold the same values as us, but just seem better at going about life. But every human trait has its shortcomings, and no matter how much we’d like it to, not every trait will come naturally to us. Trying to be more likable and trustworthy is actually not that hard to do. But it can easily make us lose sight of our intrinsic needs. People might like and trust us more, yet we may still feel miserable. And the more we look at positive traits from just one side, the harder it gets to stay objective. 1. Humility We are drawn to humble people because they seem less judgmental than most and make us feel respected. A study found humility can even help strengthen relationships because it’s connected with more trust and acceptance. Being humble is often seen as being modest rather than self-righteous. But the Oxford Dictionary describes humility as having not only “a modest” but also a “low view of one’s importance”. Trying to be more humble means putting other people’s needs before your own. This can lead to putting yourself down in order to serve those needs, which means humility can be self-deprecating in nature as well. When you encounter narcissistic people, humility can easily lead to being treated as a push-over. And if humility isn’t an integral part of your personality, it might be hard to know where to draw the line. How to keep the balance Research from Oxford’s Handbook of Positive Psychology shows how to practice humility in a way that doesn’t have to be self-deprecating. The key elements are: Having an accurate view of your abilities. Acknowledging your own imperfections as well as other people’s. Appreciating and valuing people, including yourself.
Being humble doesn’t mean you shouldn’t speak up. If you feel you’re being taken advantage of or not appreciated, reconsider how you can make sure your needs are taken into account as well. 2. Harmony Whether at work, at home, or with friends, a harmonious person will try to keep the peace wherever possible. Usually, people like being around this calm energy. You’re known for being easy-going, or as someone people never have to argue with. But to keep the peace, you have to make and keep the people around you happy. This can lead to people-pleasing. Your own opinions or needs take the hit because you’re fearful of conflict or aren’t able to say no. Trying to keep the people around you peaceful can also be exhausting, as you need to keep up with everyone’s needs. And it adds the fear of rejection or abandonment to your reasons for putting their needs before your own. Trying to keep up with everyone except yourself will lead to losing your sense of self. You’re not in tune with yourself anymore and get easily distracted from what makes you happy. It can make you resentful towards the people around you, without really knowing why. How to keep the balance Putting yourself first is hard if you aren’t used to it. And it’s not necessarily the right thing to do every time. But it’s important to show self-compassion. Done right, it can be a powerful practice for maintaining lasting relationships. Research from Dr. Kristin Neff on self-compassion has shown it’s not about pitying yourself or having more self-esteem. It’s about treating yourself the same way you would treat other people. Not only is it healthy, but it also helps you better understand yourself and can make lasting changes in your life. Rather than fearing conflict or not being accepted as you are, needs-and-all, you can see it as an opportunity to grow. It’s possible to be flexible while still honoring your needs. As a result, you’re more in tune with what you need and can resolve issues in an authentic manner.
“We learnt to associate gloom with safety and joy with risk.” — The School Of Life 3. Reason Being reasonable is associated with a level of maturity. If you know when to resist and when it’s wise to act, you’re considered trustworthy. Seeing the people you admire make reasonable choices on their way to success can make you feel you’re right to do the same. But for some, being reasonable isn’t an act of rationality and intelligence but one of fear. Maybe you’re scared of failing, maybe scared of being judged. Either way, the reasonable choice is almost always the one with the least risk involved. Research conducted at the Victoria University of Wellington produced the “Fear of Happiness Scale”. In one study, participants from 14 different countries said they avoided happiness for one of the following reasons: They feared happy events would be followed by bad ones. Excessive joy wasn’t reasonable or had to be earned. For some, it had a cultural background. For most, this was caused by events of the past. They had either experienced a joyful event that was shortly followed by a bad one or had role models with similar beliefs. How to keep the balance Sometimes we are right to play it safe. But a lot of the time we do it out of fear. Getting out of your comfort zone can be a major stepping stone to a better sense of what you should avoid and what you don’t need to. It’s normal to fear the unknown. But it’s important to differentiate rational decisions from fear-based ones. There’s no guarantee you’ll avoid bad experiences. But rather than avoiding them at any cost, you should seek to learn from them. This takes commitment but has immense benefits. A study conducted at the University of Turku in Finland found that young adults prone to risk were curious and craved new experiences, which was found to be beneficial for brain development. They were able to make smart choices much quicker than the rest.
Risky behavior does not necessarily only put you in risky situations. You also get to learn a lot more in less time and, in turn, can make better decisions.
https://medium.com/be-unique/3-honorable-traits-that-can-stand-in-the-way-of-your-happiness-9fb1391e5e82
['Carine Ru']
2020-10-05 03:33:06.094000+00:00
['Self-awareness', 'Happiness', 'Personal Development', 'Psychology', 'Personality']
The Steps To Giving Yourself Permission
Humans are not famously good decision-makers, particularly when we are faced with too many options. The more choices we have and the more factors we try to consider, the ‘better informed’ we assume we are to make the ‘right’ choice. Right? Actually, we’re also less likely to make and follow through on a decision in this scenario. To quote psychologist Barry Schwartz: Something as dramatic as our identity has now become a matter of choice… We don’t inherit an identity; we get to invent it. And we get to reinvent ourselves as often as we like. And that means that every day, when you wake up in the morning, you have to decide what kind of person you want to be… We see the ability to make decisions listed on every management or workplace skills blog because making decisions effectively is considered a key skill. And clearly, not just at work. Many of the decisions we make are about how we ultimately want to live. And, as Schwartz says, the kind of person we want to be. No pressure then. Step One: Know Your Values When life is filled with so many choices and so many sources of input from our environment, fatigue is never far away. We can easily go along with a journey that seems acceptable or available — we can easily end up on the same current as many others do, potentially just because we needed to choose something. There’s nothing wrong with this. Sometimes people think of this as ‘settling’. But there’s no harm in that if it allows you freedom in other senses. The trouble comes if you turn up one day and realise that actually, you’re not where you feel you should be. What if you do eventually realise enough about yourself to know that you need a change? This is about values. When you’re so busy gathering options and information, values feel a little flimsy, hard to define. A bit woo-woo, maybe.
But deciding on a set of values can help cut away all the rest of the noise — all the rest of the options that keep interfering in the small decisions that lead up to the big one: how am I going to live my life? Step Two: Commit One of the outcomes of having so many potential options, so many sources of information to synthesise and points of view to consider, is that even if we do come to a decision about what we value, and what we want to be doing as a result, we can still feel the pang of having to commit. We always know that there might be something better, more perfect, if only we knew more or had another option come up. This is about commitment. What’s holding you back from stepping over the threshold into your decision? Is it fear? Is it imagined ideas of what might happen as a result? I’ve seen many an exercise that asks you to imagine the worst possible outcome of your decision. What’s the worst thing that could happen? But I want to imagine it another way: What will I think of my life, on my deathbed, if I don’t commit to the life I want? Will I look back and think, ‘Oh well.’ Or will I think, ‘Oh, crap.’ And if you try, and fail, will you think, ‘Well, I wish I’d never tried’? I don’t think so. Step Three: Only You Permit Yourself One thing that seems to hold many of us back is the notion that, even with the info, and the insight into our own values, we might just still fall short. We might not be good enough to do what we really want to do. We might not have the talent, the work ethic, the whatever, to see it through. We might try but just still not make it, so the safety of now is a compromise we should accept. This is about self-belief. Often, giving ourselves permission — to do what’s scary, or what’s right for us — is the hardest part of the decision-making process.
Because even if we see the logical value, the financial value, the social value, or whatever other value a decision holds for us, we can still struggle to acknowledge that we have to put ourselves on the line. It is a risk. It’s a risk not just because of any other practical factors in play (be it money, resources, time, etc.). It’s a risk because we’ll have to face the fact that we don’t entirely believe in ourselves. We’re not sure we are worth putting the decision into practice, and we aren’t convinced that we will be fine, whatever the outcome. We’re used to permission coming from other places — our parents, our teachers, our bosses — but not really from ourselves. When we are faced with the notion of really living our life the way we want to live it, though, who else is there who can really know what outcome we want to achieve? It’s you. It all comes back to you. Of course there will always be other factors to consider, but ultimately, it’s ourselves that we have to face every day with our choices. Only you can give yourself that final permission to do what really matters. Step Four: Practice Self-Compassion It’s simply not the case that we sit down, decide our values, act on them, and voila, end of story. Life is rarely that linear. Our values, our ideas, and how we express them are going to change — with age, with experience, with new factors or circumstances. Stuff changes. That doesn’t make you a non-committal fraud, or someone whose values were just too weak to withstand change. Change happens; it’s a certainty. And when it does, it’s up to us to be able to adapt our ideas and values too. This is about self-compassion. You’re not going to get everything right; that’s a certainty as well. Not every decision can be executed with precision. But knowing, for yourself, that you did your very best and that you still love and respect yourself as a result, is extremely powerful.
Make it a habit to reflect on how things actually went once you committed to an action — guaranteed, there will always be good things that came out of your choices, even if the result wasn’t as positive as you’d hoped. I use journaling for this purpose. Keeping track of my emotions and my thoughts allows me to figure out what I can really take away from my decisions, and how best to refine the actions I take towards living a life according to my values. Individual Permission Needs No Explanation I’ve sat in meetings with people, explaining my decisions, trying to justify myself… One of the biggest lessons I can reiterate here is that once you’ve decided what you need to do for yourself, don’t explain. Maybe you will want to talk through your decisions with people, particularly when a decision affects their life too. But don’t let it become a chance for erosion of your commitment, if you’re trying to give yourself permission to take a risk that might really mean living life the way you want. In my case, committing to spending more time writing was a decision I mistakenly tried to justify to friends and colleagues, who might not really understand why that is important to me — it’s not that they dissuaded me as such, but if you’re taking a risk and you have been struggling with self-belief, then you know how easily your resolve can dissolve. By looking for justification or permission from others, you set those around you up to fail. Often, no reaction they could give you will be satisfactory, if you’re already looking for reasons not to follow through. You know yourself best. If you know in your heart of hearts that your decision is important and right for you, don’t go looking for reassurance after the fact. Of course, you might want someone nearby to support you, and nobody is an island. But be wary of exposing your ideas to just anyone. Give yourself permission to believe in yourself first. Share only if you’re ready.
https://medium.com/swlh/the-steps-to-giving-yourself-permission-ab2793e8a544
['Christina Hope']
2019-11-26 08:05:55.373000+00:00
['Life Lessons', 'Mental Health', 'Self', 'Psychology', 'Decision Making']
I Wrote my Autobiography by Accident
I Wrote my Autobiography by Accident Right here on Medium Photo by Dan Dimmock on Unsplash For anyone who has struggled to find a narrative thread or any hint of consistency in my Medium posts — join the club. I started in October and have uploaded, or written from scratch, almost 100 articles. And to some extent I have deliberately tried to introduce variety, flipping between novel extracts, present-day vignettes and teenage songs and poems. If I still like them, I figure maybe someone else will — my approach has been akin to throwing spaghetti against the wall and seeing what sticks. It has only become clear to me over the past week that the slices of life I have presented are all pieces of the same jigsaw. This is compounded by the fact that all of my attempts at fiction end up being based on real life situations, with just the thinnest of fictional veneers applied on top. So, when correctly ordered, an ever more complete picture starts to emerge.
https://medium.com/grab-a-slice/i-wrote-my-autobiography-by-accident-452d115a81af
['Mark Kelly']
2020-03-11 11:06:07.813000+00:00
['Short Story', 'Memoir', 'Nonfiction', 'Writing', 'Autobiography']
How to Be Indistractable
After five years of research, writing, and experimentation, I’m proud to share my new book with you. Indistractable is a guide to learning “the skill of the century,” the power to control your attention and choose your life. Some of my readers have asked me about why I decided to research distraction after writing the best-selling book, Hooked: How to Build Habit-Forming Products. Three reasons: First — I wrote this book because I needed it. As someone who has always struggled with distraction, I wanted an answer to the age-old question of why we do things against our best interests. I’d say I’d hit the gym but wouldn’t. I’d promise myself I’d finish that big project but would procrastinate for another day. I’d want to be present with my friends and family yet check my phone instead. I needed to know why I did these things and desperately wanted strategies for how to do what I said I would do. Second — Given my unique experience in the industry as someone who has helped countless companies design habit-forming products, I understand distraction’s Achilles’ heel. To be clear, I didn’t write Hooked for the big tech companies. We wrote it for people building products and services that could truly improve people’s lives if they would only use the product! Hooked taught designers how to use the same psychological principles that make Instagram, YouTube, and online gaming so engaging to help people build healthy habits in their lives. Five years in, countless companies have used the Hooked Model for good. Unfortunately, there’s also a downside to products designed to be so engaging. Sometimes we overuse these products and we find ourselves distracted. However, throwing away our phones and swearing off social media aren’t practical solutions for most of us. “Digital detoxes” and 30-day plans didn’t work for me. I love my devices and wanted practical ways to get the best out of tech without letting tech get the best of me. 
Third — I think it’s important to fight the notion perpetuated by some tech critics that tech is addicting everyone and “hijacking our brains.” The truth is much more nuanced. The fact is, while some tech does addict some people, the vast majority of us are not pathologically addicted. Believing we’re powerless makes us less likely to do something about the problem. We have way more power than we think. There’s no doubt many products are designed to be engaging. But would we want it any other way? The price of progress is a world of products so good we want to use them! Of course, there are bad actors who use deceptive practices and those companies deserve greater scrutiny. However, if we hold our breath waiting for companies to make their products less engaging, we’re going to suffocate. So why wait? There’s so much we can do right now to become indistractable. In Indistractable, I describe a four-part research-backed model that will help you finally live out your values and do what you say you will do. You’ll also learn:
https://medium.com/the-mission/how-to-be-indistractable-32e01945710d
['Nir Eyal']
2019-09-10 13:01:01.422000+00:00
['Life Lessons', 'Startup', 'Self Improvement', 'Technology', 'Productivity']
Don’t Be Afraid to Be a Pain
Shhh. I’m gonna let you in on a secret. Ready? Doctors don’t know everything. I know, I know — they’re the experts! But it’s true. They don’t know what it’s like when you’re lying awake at 3 am, staring at the ceiling and wishing you could sleep. They don’t know that your new med is helping, but the whispers in the back of your mind saying awful things still won’t go away. They don’t know that, even though you’re bathing more often and you haven’t tried to hurt yourself, you still can’t hold a job or keep your home clean or get back into your favorite hobbies. They didn’t know, before my second hospitalization, that I thought my recovery had gone as far as it could. That it wasn’t good enough. That I didn’t want a future that looked like my current reality. They didn’t know because I hadn’t told them. Doctors can’t read your mind, so self-advocacy is critical. You have to speak out or they can’t help you. If you have a difficult time speaking up for yourself, there are a few things you can try. Practicing what you want to say might help. You could also explain everything to someone you trust and bring them along to your next appointment to back you up. If you can’t articulate what you need to say in the office, try journaling at home and then bringing your journal with you. You can read from it directly, or write a brief summary and have your doctor read it. Also? Your treatment team works for you. If this med helps but the side effects are too much, tell them. You can probably try a different medicine. If you’ve heard of a new treatment you’d like to try, ask them about it. If they can’t provide it themselves, they can probably refer you to someone who can. If they suggest a treatment and it scares the crap out of you, be honest. They might be able to explain how it works and reassure you, or they might know of an alternative that you feel better about. Communication is the key here. Talk to your team, and let them help you. It’s that simple, and that hard. 
Don’t be afraid to be a pain! I know how depression can convince you that you cause problems for everyone, or that you don’t deserve help, or whatever. Negative thoughts come in many flavors, but I’m here to tell you that they are wrong. You aren’t a problem. Your team wants you to open up so they can help you. You deserve to get help and you deserve to get better. I promise. Do it. Speak up. I’ll be here, cheering you on.
https://medium.com/write-well-be-well/dont-be-afraid-to-be-a-pain-f4aa0bb13f5
['Rianne Grace']
2019-09-23 12:46:01.171000+00:00
['Mental Health', 'Mental Illness', 'Life', 'Psychology', 'Depression']
It’s Okay to Cringe Bro
It’s Okay to Cringe Bro Why it might be time to embrace the cringe we feel inside of us. Photo by JESHOOTS.COM on Unsplash Watching cringe videos is a pastime that I never expected myself to have. But it just feels good to watch someone else mess up. In fact, I’m going to go as far as to say that it’s therapeutic. Whenever I have to think of a painful memory from my past, I am slightly comforted by the fact that there is an online archive full of minty videos to cringe to. Even better, if the cringe is too much for me, I can always watch someone else react to a video. I can still remember one of my earliest cringe moments. And we all have them, so I don’t feel particularly embarrassed by it anymore. I was in kindergarten and my best friend Matt had betrayed me on the playground. I can’t remember what he did now, but I do remember running up to my teacher — absolutely flustered. “Mom, Matt just did this stupid crazy thing,” I said. It took milliseconds for me to realize what I had just done. Thankfully, there was no one else in the room. But as soon as I saw my teacher’s grin, I knew I had messed up. I apologized and she laughed it off. The amount of self-awareness I experienced at that moment reached astronomical levels. I’m reminded of a saying I read somewhere on the internet: “If you’re cringing at something from the past, that means you’ve grown as a person.” And although that’s a really nice way of tucking the cringe into bed underneath a weighted blanket, it doesn’t scratch the itch entirely for me. Cringing Reminds You of Absurdity In an article titled The Unexpected Benefits of Cringing, Melissa Dahl explores the value behind this problematic sensation. After all, if I’m seeking out cringe videos late at night, there must be some reason behind it besides the misfiring of neurons in my sleep-deprived brain. Dahl points out that cringing at something forces us to take the perspective of someone besides ourselves.
It makes sure that we can clearly see what we’re doing from an outside perspective. By cringing, we’re kind of shedding dead skin. We realize, holy shit, that’s not how I or this person should be acting. That’s definitely something valuable for society as a whole if not for ourselves at the very least. I’ve come to think of cringe as a necessary reminder of the sheer absurdity of being human. If you can learn to appreciate this feeling — if you can learn to laugh at it, and at yourself — you’ll find more joy in your life. I hope I never stop cringing at myself. So far, so good. (M. Dahl) I sort of agree with Dahl to an extent. It’s important to cringe here and there. It reminds me that I make a much bigger deal over things than I actually have to. It lets me know that I’m not the only one on this planet, and it acknowledges that everything I do has a direct effect on others. It’s a gentle reminder that you aren’t really shit. Specifically, this talk about cringe also reminds me of Tom Robbins’s Fierce Invalids Home from Hot Climates. In it, Maestra (the protagonist’s grandmother) gives him a stern talk about the absurdity of self-importance and the value of being able to take everything a bit less seriously. And that’s why when you’ve exhibited the slightest tendency toward self-importance, I’ve reminded you that you and me — you and I: excuse me — may be every bit as important as the President or the pope or the biggest prime-time icon in Hollywood, but none of us is much more than a pimple on the ass-end of creation, so let’s not get carried away with ourselves. Preventive medicine, boy. It’s preventive medicine. When people take themselves too seriously, we tend to get some unsavory outcomes. Especially today, with the ability to record anything, no one wants to mess up. And no one certainly wants to be labeled as cringe. But, there is an opportunity to take back the cringe and reassign its purpose. 
That’s why it’s so important to step back and realize the absurd amount of thought you sometimes place into each action. Cringing can be a valuable tool for doing so. Cringing Makes You More Comfortable With the Uncomfortable Every inspirational person out there, from David Goggins to Warren Buffett, always has some sort of small piece of advice related to discomfort. They always say something along the lines of “You have to become comfortable with being uncomfortable.” And cringe-worthy situations are just that. They are very uncomfortable. So, by exposing ourselves to more cringe, are we becoming better people? I mean, when I’m watching cringe videos, I’m sort of experiencing a high. In a way, I feel like it’s kind of like watching a scary movie. But as with any scary movie, if you know when the jump scares occur, it takes a bit of the edge off. And after watching enough of them, you kind of know what to look out for. The ghosts and demons don’t seem as scary. I think cringing in and of itself is a healthy activity. Cringing is the realization that something embarrassing or awkward has occurred. It allows us to reflect and consider what was so terrible and why it should NEVER happen again. It’s also a nifty tool for picking out certain people that we probably shouldn’t hang out with anymore. I remember hanging out with this one guy in high school who liked to drive really loud Hondas, smoke cigarettes, and just basically participate in a bunch of degenerate behavior. After meeting a ton of his mutuals at different parties, I realized that I was reflecting on a lot of these gatherings and cringing. These kids were always edgy dirtbags with a general attitude that the world owed them everything. It was cringe and I was over it. By being able to go through an experience like that, I was able to navigate social groups a bit better in college. I could pick out the authentic people over the ones who claimed to have everything figured out. It was nice.
And I don’t think I could’ve come to understand really good social interactions without having gone through both first-hand and second-hand cringe. Cringing is an emotional experience primarily. It’s visceral. Sometimes it makes me want to disappear entirely. But, it’s also useful. It can show us that we’ve improved. And it can also show us that we still have a long way to go in becoming our ideal selves. Either way, it’s okay to cringe bro. Just let it out.
https://medium.com/casimirmura/its-okay-to-cringe-bro-6ebb52d79e23
['Casimir Mura']
2020-10-17 02:41:23.440000+00:00
['Mental Health', 'Culture', 'Self Improvement', 'Psychology', 'Social Media']
The hidden bias in iterative product development
In his book Thinking, Fast and Slow, Nobel Prize-winning economist Daniel Kahneman discusses the psychological phenomenon of loss aversion, which he, along with Amos Tversky, first identified back in 1979. At its core, loss aversion refers to the tendency of the human brain to react more strongly to losses than it does to gains. Or, as Wikipedia puts it, people “prefer avoiding losses to acquiring equivalent gains: it is better to not lose $5 than to find $5.” This phenomenon is so ingrained in our psyche that some studies suggest that losses are twice as powerful, psychologically, as gains. In his book, Kahneman describes a study of professional golfers. The goal of the study was to see if their concentration and focus were greater on par putts (where failure would mean losing a stroke) or on birdie putts (where success would mean gaining a stroke). In an analysis of 2.5 million putts, the study found that regardless of the putt difficulty, pro golfers were more successful on par putts, the putts that avoided a loss, than they were on birdie putts, where they had a potential gain. The subconscious aversion to loss pushed them to greater focus. If loss aversion is powerful enough to influence the outcome of a professional golfer’s putts, where else could it be shaping our focus and decisions? Loss, Gain, and Iterative Product Development Iterative product development is a process designed to help teams “ship” (get a product in front of customers) as quickly as possible by actively reducing the initial complexity of features and functionality. This is valuable because it gets the product into the hands of users sooner, allowing the team to quickly validate whether they’ve built the right thing or not. This makes it less risky to try something new. The alternative process, waterfall, asked teams to build in all the complexity upfront and only then put the product in front of customers. A much riskier and potentially costlier proposition.
Iterative product development achieves its speed through a Minimum Viable Product (MVP) approach. MVP means taking the possible feature set that could be included in a product, or the possible functionality a specific feature could deliver, and cutting it down to the minimum needed to bring value to the end user. As a simplified example, imagine you are designing the first music streaming app (like Spotify). It could have lots of potential features beyond just streaming music. Things like playlists, search, recommendations, following artists, sharing, offline mode, dark mode, user profiles and so on. Building all of that would take a lot of time and effort. So an MVP streaming app might just have music streaming and search. The goal is to build something quickly that can validate if users even want to stream music in the first place before you go invest in all those other features. Once the MVP of a product is live, a team can then quickly assess if it is successful or not, and with minimum time invested, can move rapidly to build on the initial functionality. It is this step of the process where things can start to go sideways. The problem starts with the concept of an MVP. We aren’t geared to think in terms of MVP. In fact, our mind takes the opposite approach. When we get excited about an idea our brain goes wild with all the possibilities (see our list of music streaming features above). We imagine all the possible value a product could deliver and then we have to lose a significant portion of that value by cutting it down to the bare minimum. It’s never easy. The unintended psychological consequence of this process is that we walk into the first version of our product with a feeling of loss. Even if our MVP is successful, that feeling sticks in our brain. Weakness-based Product Development The MVP process primes us to want to regain the value we believe we’ve lost. 
As soon as the product is live, we fall into a weakness-based, additive strategy, where we are compelled to add new functionality in order to win back our lost value (real or imagined). This weakness-based mindset gets further reinforced when we start analyzing data and feedback. Because loss aversion causes us to focus on losses more than gains, we are more likely to gloss over positive signals and areas of strength and focus instead on the areas of the product that “aren’t working.” Think about the amount of effort you put into understanding why something is not working versus the effort you apply to understanding why something is working. It is rare to hear someone say “how do we double down on this feature that’s working?” Instead, we strive to deliver value by fixing what we perceive to be broken or missing. In the worst case, we even [subconsciously] go looking for signals that corroborate our underlying feelings of loss. To go back to our music streaming app: if you believed that playlists were a critical feature, but they were cut from the MVP, you are primed to put a higher weight on any feedback where a user complains about not having playlists, because it validates your own sense of lost value. Even if that feedback goes against the other signals you are receiving. We focus on areas of weakness because they represent potentially lost value, but weakness-based product development is like swimming upstream. Areas of strength are signals from your users about where they see value in your product. By focusing instead on areas of weakness, we are effectively ignoring those signals, often working against existing behavior in an effort to “improve engagement” by forcing some new value. This is why many product updates only garner incremental improvement. Swimming upstream is hard. Strengths-based Product Development Strengths-based product development means leveraging the existing behavior of your users to maximize the value they get from your product.
It’s about capitalizing on momentum, instead of trying to create it. Instagram is a solid example of a strengths-based development approach. For starters, they have kept their feature set very limited for a long time. Especially early on, they did not focus on building new things but instead focused on embracing existing value. They prioritized things like new image filters and editing capabilities, faster image upload processing, and multi-photo posts. Instagram knows that the strength of its product is in sharing photos from your smartphone. They didn’t spend a ton of time enhancing comments or threads. They’ve made minimal changes to their “heart” functionality for liking posts. They never built out a meaningful web application. When they did create significant new functionality they often made it standalone, like Boomerang and Layout, as opposed to wedging it into the core experience. Arguably the biggest change they’ve made over the years was the addition of stories. However, even that feature, while copied from Snapchat, was still an extension of their core photo-sharing behavior. And, ultimately, stories increased the value of feed-based photo sharing on Instagram as well. Before stories, all your daily selfies, food shots, workout updates and so on went into your feed. Now, much of that lower-quality posting goes into stories, and feed posts are reserved for higher-quality photos, creating an enhanced feed experience. In contrast, take an example from my previous job. I was head of product for a streaming video service for almost seven years. As a subscription-based service, our bread and butter was premium video. However, many competitors in our space focused on written content, which we did not have. As an organization, we saw this weakness as a potential value loss and prioritized implementing an article strategy. Written content did not enhance our core user behavior, but we built up justifications for the ways that it could.
This is actually a key symptom of weakness-based product development. When something enhances your core strength, its value is obvious. If you find yourself needing to build a justification, it’s a sign you could be on the wrong track. Articles never gained significant traction with our paying subscribers. They did, however, drive a high level of traffic from prospective customers via platforms like Facebook. But the conversion rate for that traffic was extremely low. The gap between reading an article and paying a monthly subscription for premium videos was just too big a leap. We were swimming upstream in an attempt to fill in perceived holes, but never really enhancing our core value. On the flip side, we also developed a feature that allowed subscribers to share free views of premium videos with their friends, capitalizing on our core strength and an existing behavior (sharing). Like articles, this drove organic traffic, but it also had a significantly higher conversion rate: the effect of swimming with the current.
Shifting Your Mindset
The good news is that if you find yourself in a weakness-based mindset, there are a few straightforward things you can do to break out.
1. Analyze what works
When you see areas of strength, don’t just give yourself a pat on the back and move on; make those areas the key focus of your next iteration. Be the one to ask: why is this working, and how can we accelerate it? Stop chasing new value. You are already delivering value. Build on that.
2. Move from addition to subtraction
When you look at metrics, stop looking at weak performance as something to be improved. Instead, look at it first as an opportunity to simplify. Instead of immediately asking how you can make it better, make the first question: is this something we should get rid of completely? This is especially powerful in existing products.
If you’ve been practicing weakness-based development, you potentially have a bloated, underused feature set that’s dragging down your overall experience. What if every third or fourth development cycle you didn’t build anything new and instead focused on what you were going to get rid of? How quickly would that streamline your product and bring you back to your core strengths?
3. Understand your strengths
Do you know what is valuable in your product? You have to be able to answer that question if you want to step into a strengths-based mindset. If you’re not sure about the answer, that’s OK; you can start with this simple matrix.
Feature Value Matrix. Image by author. Modified from Intercom
Plot your features in the matrix. Features in the upper-right quadrant represent your core value. How many of your product cycles in the last three months have focused on the elements in the upper right? If the majority of your work is not happening there, then there is a good chance you are practicing weakness-based product development. If you are doing any work in the lower-left quadrant, you are wasting your time. Don’t waste cycles propping up weak features. Kill those features, move on and don’t fear the fallout. We get worried about upsetting users who have adopted features that aren’t actually driving our success (there’s that loss aversion again :)). It’s OK. Users will readjust, and yes, some might leave. But if you are clear on your product’s strengths and focus your efforts there, the value you gain will more than make up for anything you lose by cutting the things that are holding you back.
https://uxdesign.cc/the-hidden-bias-in-iterative-product-development-21e307a2c327
['Jesse Weaver']
2019-06-19 22:43:48.162000+00:00
['Design', 'Startup', 'Design Thinking', 'Tech', 'UX']
6 Things I Remind Myself to Prevent Anxiety-Driven Overthinking
6 Things I Remind Myself to Prevent Anxiety-Driven Overthinking
#2. Don’t feel sorry for yourself.
Photo by Nik Shuliahin on Unsplash
Introduction
If you’re an over-thinker or a perfectionist, it’s very easy to get lost in your own thoughts. I happen to be both. On top of that, I’m still trying to figure out who I am as I navigate through my twenties. I know the feeling of spiraling downwards so deep that you don’t see a way out. Sometimes I get so lost in my head that I question everything I do, which is extremely toxic. That being said, I’ve also learned a couple of things that I try to constantly remind myself of so that I don’t fall into the trap of self-deprecating introspection. While I’m not a psychologist, I have found that nailing these ideas into my head has helped keep me away from my negative thoughts. This article is as much for you as it is for me — I want it to serve as a reminder that anyone can come back to. With that said, let’s dive into it!
https://medium.com/the-ascent/6-things-i-remind-myself-to-prevent-anxiety-driven-overthinking-14d2ea70645c
['Terence Shin']
2020-12-27 17:03:20.785000+00:00
['Self Improvement', 'Mental Health', 'Life Lessons', 'Productivity', 'Philosophy']
Treating Transsexuality: To Help or To Harm?
A trans (or, transgender) person is anyone whose gender identity does not align with the gender expectations that are placed upon their assigned sex; a cis (or, cisgender) person, by contrast, is anyone whose gender identity aligns with these expectations. For example, if someone has a penis, and identifies with masculinity or as a man, then they are cisgender; if someone has a penis and identifies with or as a non-masculine gender, then they are trans. Transsexuality refers generally to trans identities and lived experiences, and cissexuality refers to cis identities and lived experiences. Gender itself is a behavioral achievement, while assigned sex is framed as a biological fact. “Be male!” is an unintelligible command, while “Be a man!” is used to motivate male-bodied individuals to “man up” by reaffirming their alignment with masculine gender expectations. Gender is thus performative, sex is assumptive, and there are consequences when the assumption of sex is violated by non-normative gender presentations. In this essay, I will describe arguments for and against the representation of transsexuality in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM), and answer whether its inclusion as a diagnosis is overall more helpful or harmful to trans people. I will first outline how transsexuality has been represented in the DSM to represent the viewpoint of psychiatric professionals on the topic of transsexuality. I will then present the opposing viewpoint by outlining criticisms of transsexuality in the DSM that have been made by trans activists. There are parallels that can be drawn between how homosexuality and transsexuality have been represented in the DSM, and how activists have responded to this representation, so I will begin by outlining a brief history of homosexuality’s presence in and removal from the DSM.
Homosexuality in the DSM
The APA considered homosexuality to be a mental illness from 1952 until 1973, when it was removed from DSM-II after gay rights activists lobbied for its removal (Drescher, 2015). Gay rights activists during this time “[believed] psychiatric theories to be a major contributor to antihomosexual social stigma” and so they “disrupted the 1970 and 1971 annual meetings of the APA” (Drescher, 2010, p. 434). A Nomenclature Committee was assembled and determined that subjective distress and impaired social functioning were recurring elements behind the diagnosis of all mental illnesses, and concluded with the APA that “homosexuality in itself does not meet the criteria for being considered a psychiatric disorder” (as cited in Drescher, 2010, p. 435). This led to its removal, but there were two attempts to keep it in the DSM through indirect means: the first was Sexual Orientation Disturbance (SOD)*, which was added to DSM-II in 1973 after the removal of homosexuality (Drescher, 2015). SOD referred to distress caused by same-sex sexual desire along with a desire to change one’s sexuality, and provided the rationale for conversion therapy (Drescher, 2010). Ego Dystonic Homosexuality (EDH) replaced SOD in DSM-III (1980), but this diagnosis was removed in DSM-III-R (1987) because the APA realized that the diagnosis implied that anyone who was distressed by an element of their identity (e.g., their race, weight, height, etc.) could qualify for this kind of diagnosis (Drescher, 2010). Homosexuality is no longer alluded to as a mental illness in the DSM, and gay rights activism has since resulted in the legalization of gay marriage and the integration of gay people into mainstream popular culture.
Transsexuality in the DSM
Transsexualism, gender identity disorder of childhood (GIDC), and atypical gender identity disorder first appeared in DSM-III in 1980.
All three diagnoses relied upon the persistent presence of perceived incongruence between assigned sex and gender identity, and varied depending on other criteria. Transsexualism emphasized “a persistent wish to be rid of one’s genitals and to live as a member of the opposite sex” (American Psychiatric Association, 1980, pp. 261–262). GIDC diagnosed children who insisted that they were a boy or a girl despite having genitals assumed to indicate otherwise, and atypical gender identity disorder referred to any gender-sex-related concerns not accounted for by the other criteria (Beek, Cohen-Kettenis, & Kreukels, 2016). Gender Identity Disorder of Adolescence or Adulthood, Nontranssexual Type (GIDAANT) was added to DSM-III-R in 1987, and referred to recurring distress about assigned sex without a desire to change sex characteristics (Drescher, 2010). In DSM-IV, GIDAANT was removed, and transsexualism and GIDC were collapsed into gender identity disorder (GID), which had different diagnostic criteria to differentiate symptomology between children, adolescents, and adults (Drescher, 2010). DSM-5 was released in 2013, and in it GID was “renamed ‘gender dysphoria’ in order to imply a diagnostic entity in its own right, not necessarily associated with severe comorbid psychiatric findings, and better characterized the gender incongruence-related experience and discomfort” (Costa & Colizzi, 2016, p. 1954). According to Kenneth Zucker, Chair of the Sexual and Gender Identity Disorders Work Group for DSM-5, GD was chosen as the name of the diagnosis in DSM-5 because it gets at the “core phenomenology” of the condition, which makes the “diagnostic criteria more precise” (Zucker & Duschinsky, 2016, p. 31).
Zucker admits that discrimination and social rejection contribute to the experience of subjective distress in trans populations, and “that the distress/impairment criterion has never been well studied when it comes to gender dysphoria or its predecessors” (Zucker & Duschinsky, 2016, p. 29). Symptoms of cross-gender identification and discomfort about assigned sex were also combined for GD, and gender identity was reframed from a dichotomous (male/female) identification to a multi-dimensional spectrum of identifications (Cohen-Kettenis & Pfäfflin, 2010). GD therefore acknowledges both binary (e.g., male-to-female [MtF], female-to-male [FtM]) and non-binary (e.g., genderqueer, transmasculine/feminine, etc.) trans identities, and thus allows for more access to medical treatment for trans populations.
This is how I feel about gender.
Treatment and Transsexuality
There is a 41% suicide attempt rate among trans and gender nonconforming populations, which stands in contrast to the 10–20% prevalence among cisgender lesbian, gay, and bisexual (LGB) adults, and the 4.6% national prevalence (Haas, Rodgers, & Herman, 2014). Trans children who have socially transitioned, or are supported in their trans gender identification by family and peers, show equal levels of depression and slightly higher levels of anxiety than national averages, which contrasts with children who have been diagnosed with gender dysphoria (GD), who show high levels of internalizing disorders (Olson, Durwood, DeMeules, & McLaughlin, 2015). Social context and support therefore play an important role in modulating the severity of symptoms associated with GD. The role of social context is a non-constitutive element of the diagnosis of GD, however, which is part of what makes GD a controversial diagnosis. Sex-reassignment surgery (SRS) and hormone-replacement therapy (HRT) are important medical treatments that help reduce the severity of symptoms associated with GD.
SRS and HRT are the means by which a trans person can transition, or begin to align their sexed bodies with their gender identities. SRS refers to any procedure that modifies primary or secondary sex characteristics (e.g., breast removal, gynoplasty, phalloplasty, tracheal shaving, etc.), and HRT refers to modification of the distribution of testosterone and estrogen in the body, often to induce the development of secondary sex characteristics (e.g., breasts, facial and body hair, vocal changes, etc.). SRS and HRT are medical procedures that have been shown to significantly improve self-esteem and overall well-being, and significantly reduce levels of anxiety, dissociation, perceived stress, and social distress in TGNC populations (Costa & Colizzi, 2016; Dhejne, Van Vlerken, Heylens, & Arcelus, 2016). I will focus on access to SRS and HRT as a primary point of concern in this review, as it is possible to get access to these treatments only through diagnosis of GD.
Criticisms of Transsexuality in the DSM
Trans activists have opposed transsexuality in the DSM in a number of different ways. Some critics argue that transsexualism and GIDC emerged in DSM-III in reaction to the elimination of homosexuality as a mental illness. Their reasoning is that gender-atypical behavior is common among children who later come out as gay or lesbian, and that GIDC was therefore specifically designed to prevent the development of homosexuality by pathologizing gender nonconformity in children (Drescher, 2010). Zucker and Spitzer (2005) argued against this by saying that GIDC was not a “backdoor maneuver”** to replace homosexuality, because ego-dystonic homosexuality was included in DSM-III and was removed in DSM-III-R after it was shown to be conceptually flawed and lacking empirical support, and because there were individuals on the subcommittee for psychosexual disorders who argued for both the removal of homosexuality in DSM-II and the inclusion of GIDC in DSM-III.
Activists challenged this counter-argument by highlighting how early clinical efforts to “treat” trans children who were diagnosed with GIDC “aimed at getting them to reject their felt gender identity and to accept their [assigned] sex” (Drescher, 2010, p. 428). GID was scrutinized because it implied that a trans person’s gender identity itself was disordered, which implies that trans gender identities are inherently pathological (Drescher, 2010). Reconceptualization stances toward GID took pragmatic positions by agreeing that GID needed to be renamed with less stigmatizing language, but that it should remain in the DSM so that trans people can have access to medical transition. Those in favor of reconceptualization argued that structural reform to the entire health care system was too lofty a goal, and that it was more pragmatic to lobby for a change to the diagnosis instead of a total elimination of it (Drescher, 2010). The fact that GID was renamed to GD in DSM-5 is a reflection of the success of these more moderate reconceptualization approaches to representations of transsexuality in the DSM. Those fully opposed to representations of transsexuality in the DSM have used a few different strategies to argue for its removal. One strategy involves citing evidence from neurological research to show that transsexuality is a naturally-occurring phenomenon. Zhou, Hofman, Gooren, and Swaab (1995), for example, found that the bed nucleus of the stria terminalis (BSTc), a brain area related to sexual behavior, was of a similar size between transgender and cisgender women, which points toward a neurological basis for gender identity. This research was used to argue against the stigmatization of transsexuality by showing it has a biological basis, which is similar to how research on neurological differences between homo- and heterosexual people was used to argue against the stigmatization of homosexuality (LeVay, 1993).
Another strategy involved reevaluating prevalence rates of transsexuality in the United States to show that it is more common than American society has been led to believe. Conway (2002) found that an estimated 1 in 2,500 people in the United States received MtF SRS between the early 1960s and early 2000s, which implies the prevalence of transsexuality is much higher than the 1 in 30,000 psychiatric professionals reported as the prevalence rate. This strategy attempts to normalize transsexuality, as activists believe that part of its stigmatization involves its representation as something that is uncommon. This is similar to how gay rights activists referred to Alfred Kinsey’s research to show that homosexuality was more common than mental health professionals initially claimed. The gatekeeping role that mental health professionals play in providing access to medical care through psychiatric diagnosis of GD is a primary point of contention for trans activists. They argue that a diagnosis of a mental illness should not be required in order to have access to SRS and HRT, and that this decision should be transferred out of the jurisdiction of mental health professionals and over to physicians who can provide direct access to these procedures (Drescher, 2010). Their arguments highlight how it is unethical that transgender individuals must choose between being diagnosed with a stigmatizing mental illness and receiving access to medical care, or avoiding stigmatization through diagnosis and not having access to medical care (Serano, 2007). My Position I take a critical-pragmatic approach to this issue by saying that GD does currently need to remain in the DSM, but this is only because removing it would prevent trans people from being able to transition through SRS and/or HRT. 
My approval is ambivalent, however, because I also believe that lobbying should continue so that GD is removed entirely from the next edition of the DSM, and that access to medical care is transferred out of the jurisdiction of mental health professionals. I resent the paternalistic gatekeeping role psychologists play in providing access to medical care for trans people, and think that requiring a diagnosis of GD still ultimately upholds cultural investments in the conflation of transsexuality with psychopathology. I believe this connotation leads to the legitimization of anti-trans stigma in the United States, and projects sociocultural problems with gender nonconformity onto trans bodies. As Zucker mentions, there needs to be more research on the factors that contribute to the maintenance of gender dysphoria in trans populations, especially with regard to the role of social support and location. There also needs to be critical attention placed on the cultural ideologies that make it the case that gender nonconformity is subject to discrimination in the first place. It seems easier for the APA to assume that someone who does not follow gender norms must be mentally ill than to consider that the mental illness actually belongs to American culture. While I agree with Zucker that GD better captures the phenomenology of distress behind the experience of transsexuality, it still fails to acknowledge how sociocultural investments in maintaining cissexuality as the norm are what make transsexuality appear pathological in the first place. Phenomenology is conceptually limited in its ability to account for the role of context in psychological disorder as it locates disorder within the confines of individual conscious experience, and ignores sociocultural factors that contribute to subjective distress. I, for example, am subjectively distressed whenever I wear lipstick in public because I notice how many people stare at me, and often fear for my safety.
Reactions from the public thus play an irreducible role in my experience of subjective distress while wearing lipstick, but according to the DSM, this subjective distress is caused by GD, and the role others play in the distress is tangential. This is a fundamental problem with the diagnosis: it locates the diagnosis within the confines of the trans person who dares violate gender norms, and allows for potentially violent social regulation and enforcement of gender norms to continue unchecked. Gender normativity and cissexism are the real problems underlying GD, and these problems are based in American cultural ideology. I believe that transitioning is something that one should be able to decide for themselves without requiring prior psychiatric evaluation and diagnosis of GD. If the APA’s treatment of transsexuality were ethical, then psychiatric evaluation and diagnosis should be consistently required for anyone who wants to undergo a medical procedure that drastically changes their physical appearance. This should include those who decide to get breast implants, and should have certainly included Erik “The Lizardman” Sprague who medically transitioned into resembling a lizard with green full-body tattoos and subdermal facial implants. It is not fair that trans people need to be psychiatrically evaluated and diagnosed with GD in order to transition while cis people can change their appearance as much as they want without formal psychiatric evaluation or diagnosis. This double-standard reflects how transsexuality is still assumed to be inherently pathological in American society, and how the APA’s treatment of transsexuality is overall more harmful than helpful for trans people in the United States. Activism should aim toward the complete removal of transsexuality in the DSM, just as the work of gay rights activists advocated for the total removal of homosexuality from the DSM. Trans activists should disrupt APA meetings just like gay activists did in the 1970s. 
Trans psychologists should come out to increase trans visibility within psychiatric institutions, just like gay, lesbian, and bisexual psychologists came out during gay rights activism. Presentations of gender nonconformity should be practiced in psychological research communities so that gender nonconformity can start to be normalized from within the institution itself. The scope of lived trans experience should be expanded to include anyone who experiences discomfort about the expectations placed upon their genitals, including those who are commanded to “Be a man!” when their phallic integrity is questioned. Research should focus on showing how sexism underlies the stigmatization of both homosexuality and transsexuality; it is, after all, through cultural projections of gender non-conformity that gay men are assumed to be effeminate, and lesbian women are assumed to be threateningly masculine. Naming sexism as a sociocultural factor that affects public health on a widely-subversive scale should be the ultimate goal of gender-sexual activism, especially within psychological institutions. Just as anti-gay sentiment was named in relation to SOD and EDH, so too does cissexism need to be named in relation to GD. GD continues to imply that there is something inherently pathological about transsexuality, and that the symptoms that underlie GD are based solely within the phenomenological experience of trans people. Psychological conceptualizations of mental illness should shift away from phenomenology and toward sociology, as American values are implicated by every diagnosis in the DSM — especially when the diagnosed are also stigmatized.
https://dlshultz.medium.com/treating-transsexuality-to-help-or-to-harm-4bcae0ea621a
['D. L. Shultz']
2019-01-24 05:49:13.599000+00:00
['Transgender', 'Psychology', 'Mental Health', 'History', 'Culture']
The Short Form Experiment
The Short Form Experiment It actually works! Photo by Vlada Karpovich from Pexels Too many Medium writers experience the same dilemma: We write something good only to have tumbleweed as our audience. And then months down the road when we manage to attract a few followers, that masterpiece has become ancient history. About a year ago, Medium declared there would be no republishing of old stories unless the writer added something of value to the old piece. And that makes sense to me. There’s no shortage of articles to read on the platform. Including reruns would only serve to bloat the glut. So how would we revive that story we think contains so much value for the reader? Enter Dr. Mehmet Yildiz of Illumination, who made a suggestion and offer I couldn’t refuse. Why not publish a short-form post as a teaser for our long stories we feel might have some value to the reader, and then include an embed at the end to bring new viewers to the old masterpiece? I have followed the marketing advice of several different advisers in what is all too often a vain attempt at attracting an audience to my work. And mostly, it hasn’t worked very well. A drib here — and a drab there is about the best I could do. Facebook groups? Twitter? LinkedIn? Pinterest? Many Stories? All a big “meh.” So anyway…I’m taking a crack at the short-form strategy. And while it’s not miraculous, I am seeing results. This morning, I found a few reads on 24-minute stories that were more or less moribund. I’m quite sure those viewers came from the short-form teasers. Illumination allows writers 3 short-forms per day in each of its 3 publications. And if you really need to exceed that quota, I see no reason not to simply self-publish. Your followers just might find that short form and be interested in pieces they’ve overlooked. I’ve been on Medium with a purpose for almost 3 months now. And with over 200 stories published, it would be really easy for even a diehard fan to miss something good. 
Why not lead the reader to what you think is your best work? It doesn’t really take all that long to write 100 words. As with all the suggestions on garnering an audience, it’s at least worth a shot. And this one actually is working for me. Just sayin’. Where there’s a will there’s a way and all that positive stuff. Give ’em hell. You’ve but one writing life to live. More writing suggestions:
https://medium.com/illumination/the-short-form-experiment-69fcbc18b1d9
['William', 'Dollar Bill']
2020-12-25 14:37:51.330000+00:00
['Short Form', 'Advice', 'Freelancing', 'Nonfiction', 'Writing']
Code that debugs itself: Fixing a deadlock with a watchdog
Rare, hard to reproduce bugs are the hardest. We recently fixed a deadlock that had plagued our code for nearly a year, but only showed up about once a month. We would notice something was stuck, debug it for a bit, then finally restart the process and the problem would disappear. Eventually something changed and it started happening two or three times a day, which meant we needed to fix it. The technique that found the cause was adding a thread to constantly check for a deadlock and then report one when found. It turns out it is fairly easy to have a process debug itself, which is useful when traditional tools aren’t working. The problem occurred in a Python web application running on Google App Engine. Occasionally, a request seemed to get “stuck”: it logged that it started, but never completed. Eventually, the requests would time out and get killed by App Engine. We suspected we had a deadlock, since locking bugs are a common cause of rare hangs. However, the stack traces printed when the requests were killed were from all over our application. We expected that if we had a deadlock, then the stack should be consistent and show the thread waiting on a lock somewhere. The only hint was that about half the traces were in the Python standard library logging module, but with no consistent pattern. We started by reading the Python logging source code. The module has a single lock that is held while formatting log entries, which seemed promising. To cause a deadlock, we need a cycle of threads waiting on each other. Our theory was that two threads were waiting on each other, as shown in the figure below. Deadlock theory: Thread A holds logging._lock and waits for other_lock, while Thread B holds other_lock and waits for logging._lock In the figure, Thread A calls the logging library and holds logging._lock. At the same time, Thread B locks a lock named other_lock. Next, Thread A tries to lock other_lock, maybe because it calls a custom log formatter or log writer. 
It has to wait. Then, Thread B tries to log something, which then needs to get the logging lock, which causes the cycle between Thread A and Thread B, so they wait forever. Deadlock! Unfortunately, we couldn’t find anything in the logging code that might grab another lock. This seemed like a dead end. The other hint was that many stack traces were in “startup” code, which executes on the first few requests to a new instance. Searching the Internet for things like “Python startup deadlock” revealed that Python 2 has a module import lock and there have been issues with deadlocks if code that runs at import time tries to grab a lock. At the time, this didn’t seem relevant, since we didn’t see imports in the stack traces. We were starting to get desperate. While brainstorming, someone joked that we should just dump all the thread stacks every second. We would “just” need to find the right one out of the millions of stacks it would record. That crazy, impossible idea made us wonder: could we detect when the requests are stuck and log the thread stacks at precisely the right moment? In general, detecting deadlocks correctly is challenging: You need to know what all threads are waiting for, which requires adding instrumentation to the lock implementation. However, we didn’t need to be 100% correct. If we detected requests that seemed to be stuck for a long time, say 10 seconds, it could be because of our bug. This is not perfect: If a thread is waiting for something that is making progress (e.g. sleep(15)), we will incorrectly think it is deadlocked. However, we should not miss any deadlocks as long as we detect it before the request timeout cancels the request. That is, this could have “false positives” but no “false negatives.” To implement this, we added a background thread that dumped all thread stacks every 10 seconds, using Python’s sys._current_frames() function. It then compared the current stacks to the last stacks it captured. 
If a thread appeared to be in the same place, it was suspicious, so we logged the state of the program. This is a sort of “watchdog timer,” constantly searching our program for bugs. We tested it by manually causing a deadlock, then put it in production.

Unfortunately, we saw requests getting stuck, but nothing in the logs. We spent a number of hours retesting and convincing ourselves that it should be getting triggered. Eventually, we realized a possible problem: if the deadlock was in the logging library, using logs would cause the watchdog to also get stuck! We changed the code to write the debugging output to a separate service (Google Cloud Task Queues).

Within a few hours of deploying this change, we had our answer: the code was stuck while logging. Mysteriously, requests were always stuck on this line of code, deep in Google App Engine’s logging service:

bytes_left -= 1 + group.lengthString(num_bytes)

This didn’t make any sense: how can this deadlock? It is just calling what seems to be a simple function and doing some math. Maybe the stack traces were inaccurate and it was actually stuck somewhere near this line?

Eventually, we remembered the module import lock and the fact that our stuck requests usually happened at startup. What happens if we call the App Engine logging code while holding the module import lock (e.g. imp.acquire_lock())? Our test program immediately hung at the same “bytes_left -= 1 …” line. With a way to reliably reproduce the problem, we finally understood it, as shown in the figure below.

The deadlock: Thread A holds the import lock and waits on logging._lock while Thread B holds logging._lock and waits on the import lock

Some of our functions contain import statements inside the function body, to avoid the circular import errors that occur when the import is at the usual place at the top of the file. The first time the function is called, the import is executed, which holds the import lock (Thread A steps 1–3).
If the code being imported logs anything, it then tries to grab the logging lock (Thread A step 4). At the same time, a different thread calls the logging code and acquires the logging lock (Thread B step 1). The log handler then attempts to send the logs to the App Engine logging service, calling the line of code above. The group.lengthString() function is implemented in a native C library. For some reason, this library tries to acquire the import lock (Thread B step 3). At this point, we have a deadlock.

Since the code trying to acquire the import lock is a native code library, we can’t “see” it from Python. Instead, all we see is that it is stuck on a function call, which explains how the code was stuck on the line above. So why did the original stack traces from the requests killed by App Engine not reveal this problem? When App Engine kills one of the requests in the deadlock, the other request is unblocked and continues. However, it is very likely to also time out and get killed shortly after, printing a stack from anywhere, which confused our debugging. The line of code above did in fact occur fairly frequently in the stuck traces, but we had dismissed it as not possibly causing a deadlock, since it wasn’t obviously waiting on anything.

This was one of the most difficult bugs we have fixed. In retrospect, we could have saved a lot of time if we had realized that adding code that debugs itself can be easy. This is applicable to more than just deadlocks. For example, if your server is crashing on rare bad requests, you can log requests in a circular buffer, then dump the buffer on a crash. However, we also learned some other lessons for how to fix really hard bugs:

Persevere: The hints about the logging and module import locks we learned over the year of investigating this bug were critical to finding the root cause.
Brainstorm with others: The “crazy” suggestion to log all stacks was not practical, but it led to the eventual solution.

Tools don’t need to be perfect: Our deadlock detector is flawed, but it was good enough to detect this bug.

If facts don’t make sense, test your assumptions: If you are starting to second-guess your tools, figure out a way to verify whether they are telling you the right thing. Sometimes, the things that are “impossible” are exactly where the bug is.

In the end, remember that every bug in a computer system has a cause, and you can figure it out if you have the patience to keep digging. For more details about the bug and how we worked around it, see the test program we wrote to report this issue to Google.
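The circular-buffer idea mentioned above is easy to sketch. This is an illustrative example, not code from the article (the class and callback names are made up): keep the last N requests in a bounded deque, and hand them to a crash reporter when a handler blows up.

```python
from collections import deque

class RequestRecorder:
    """Remember the last `size` requests; dump them if a handler crashes."""

    def __init__(self, size=100, on_crash=print):
        self.recent = deque(maxlen=size)  # old entries fall off automatically
        self.on_crash = on_crash

    def handle(self, request, handler):
        self.recent.append(request)
        try:
            return handler(request)
        except Exception:
            # Report the recent history, then let the crash propagate.
            self.on_crash(list(self.recent))
            raise
```

The bounded deque keeps memory constant no matter how much traffic the server sees, and the buffer is only ever written out on the rare crash.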
https://medium.com/bluecore-engineering/code-that-debugs-itself-fixing-a-deadlock-with-a-watchdog-cd83019cce2e
['Evan Jones']
2018-05-01 12:01:01.334000+00:00
['Debugging', 'Software Engineering', 'Python', 'Bluecore', 'Google Cloud Platform']
A Simple 5-Step Process for Conquering Procrastination
It takes more than motivation to conquer what holds you back.

You’re not lazy. You’re not unproductive. You’re just a part-time procrastinator like all of us. You’re not shortsighted. You’re not unmotivated. Like all of us, you’re simply struggling to embrace short-term work for long-term benefits. And therein lies the battle all of us face on the journey to becoming happier and more successful — we’re naturally inclined to put off our biggest goals, largely because it takes longer to reap their benefits.

High-achievers find a way to overcome the stumbling blocks that trip up those who are less motivated and thoughtful. Your blueprint begins first in your mind. The sooner you can subordinate your mental and emotional impulses to your goals and game plan, the quicker you’ll immerse yourself in ANY work that lies in front of you.

Here’s a great example of the importance of this: Elon Musk did not think Tesla would be a successful venture. Shocking, right? Then why in the world did he go forward with it? I leave that up to the man himself to explain:

“If something’s important enough you should try. Even if you think the probable outcome is failure.” — Elon Musk

We should all look to progress toward work that stirs our passion, making us feel alert and energized. So many people never realize their potential because they lack energy for what they do. As a result, their daily tasks and obligations start to become a grind. This leads to frustration, loss of hope, and the feeling that what they’re working for lacks meaning — that their dreams will never come true.

Our ability to produce — and re-produce — a winning mindset is a driver to truly living life on our terms. You’ll never live the life you dream about if you lack consistency.
Consistency and drive begin with a positive attitude, a hopeful outlook, and an insatiable desire to embrace new challenges and tackle our repeatable tasks with renewed vigor.

Focusing on Short and Long-Term Success

So much of modern productivity research centers around your need to focus on the “process” and to enjoy the journey, rather than eyeing an end-goal or prize. That’s true. But know this, my friend — you absolutely must have goals. Visualize that prize which awaits you for all your hard labor and willingness to rise to meet old and new challenges each day. Because it’s that vision — that concept of victory — that eases your flow and progress toward approaching each step in your process with the enthusiasm and ambition that you need. Big dreams need goals, and big goals need a game plan that is seeded with emotional intelligence and long-term vision.

Caroline Webb illustrates in her piece for Harvard Business Review that it’s hard to commit to short-term work when we can’t immediately see the benefits we’ll realize down the road:

“The problem is our brains are programmed to procrastinate. In general, we all tend to struggle with tasks that promise future upside in return for efforts we take now… it’s easier for our brains to process concrete rather than abstract things, and the immediate hassle is very tangible compared with those unknowable, uncertain future benefits. So the short-term effort easily dominates the long-term upside in our minds — an example of something that behavioral scientists call present bias.”

So what do we do to confront this pernicious present bias? How do we avoid procrastination in order to become the most productive woman or man we can possibly be? I’ve developed five steps to focus on in an effort to yield higher returns of productivity:

1. Stop making excuses

This begins first with what we tell ourselves. Second, it continues with what we share, tweet, snap, or tell others.
Your hard-earned cash is far better invested in a book or educational course that expands your mind, rather than that new iPhone that you don’t really need right now. The reasons why you can’t do something are better saved — instead, tell others you can do it, then commit and figure out how.

2. Position yourself to increase the work or activities that ignite the fire inside of you while decreasing the boring things

This requires deep thought, planning, and a willingness to use your imagination. Know what you love and keep stacking more of it on your plate. That said…

3. Always have a step-by-step plan for mundane tasks

There will always be mundane tasks and urgent matters that require our attention. Break down the boring, less pleasing things into chunks of work. Find within those chunks of work the “wins” you’ll earn, the strengths you’ll strengthen, and the weaknesses you’ll gradually reduce.

4. Put a carrot in front of you

Have a treat to look forward to, a light at the end of the tunnel, that inspires you to keep going. It may be a nice weekend getaway, a trip to a sporting event, or a nice dinner. It could be re-investing in yourself via a personal development course or retreat. Have something — this is vitally important for fueling our day-to-day actions.

5. Extract value out of each activity by defining how what you do will align with your values, dreams, and five-year plan

Tie this back to your purpose, which is the fountain of motivation and truth that lives in your soul.

Negative Outcomes and Conclusions

In an article for Psychology Today, Dr. Elizabeth Lombardo enumerates the negative outcomes of procrastinating:

“Procrastination can lead to increased stress, health problems, and poorer performance. Procrastinators tend to have more sleep issues and experience greater stressful regret than non-procrastinators.
What’s more, procrastination can also hinder your self-esteem with the guilt, shame, or self-critical thoughts that can result from putting off tasks.” — Source: Dr. Elizabeth Lombardo

The words “regret” and “self-critical” stand out most to me. By putting off things that we don’t want to do, the most likely outcome is that we end up criticizing ourselves, regretting our wasted time, and at worst, losing sleep and experiencing higher stress levels. While we’re all conditioned and biased toward the here and now, a simple cost-benefit analysis tells us that putting off things that demand our attention now is not a winning proposition.

Continue forward and know that instant gratification and temporary pleasures pale in comparison to sustained happiness and long-term growth. Come back to this article when you find yourself spinning your wheels and doubting yourself. Your future self will thank you.
https://medium.com/publishous/a-simple-5-step-process-for-conquering-procrastination-34e7691b25ce
['Christopher D. Connors']
2020-09-01 17:45:36.113000+00:00
['Inspiration', 'Personal Development', 'Self Improvement', 'Motivation', 'Productivity']
My Perspectives of News Break. Sharing my personal experience and…
You have probably heard a lot about the recent News Break mission for creators. It created a sensation among writers and freelancers. With an open mind, I joined the bandwagon because the offer was too good not to pursue. In this story, I want to share my experience, observations, and insights gained from fellow writers on News Break.

The information I provide in this article is publicly available. I don’t want to share contractual information due to the confidentiality agreement. As you may know from my stories, I always keep my neutrality toward platforms, products, or services. I don’t condemn or endorse them. I see them as they are. Therefore, this story is not a sales or marketing pitch but information to empower writers, whom I care for deeply. Like many writers, I applied to News Break using an application form provided by one of our writers.

My Initial Dilemma

Even though I applied at the same time as many writers a month ago, my friends received approval within a week while I kept waiting. I was worried about why my application took so long. It turned out to be a ridiculous reason. News Break had accepted my application and sent several email messages requesting me to start in the program. However, all emails from News Break went to my Spam folder, which held 50,000-plus messages. I am oversubscribed, and poor Gmail, like any mailing system, struggles with the diverse emails coming from every part of the globe. But I know it is protecting me.

A wise friend advised me to check my spam folder. Bang! All messages from News Break were there. I felt stupid, of course. But I took immediate action. The support team was responsive and resent the creator program contract within 24 hours with clear instructions, and I signed it. I am grateful for the service they provided.

The Publishing Process

The publishing process is a breeze. News Break provided a link to my profile. It took me only a few minutes to update my profile. The link to the publishing dashboard made it easy to start submitting my stories.
The dashboard is well designed and intuitive. You can create a story with ease. The story board has the essential tools.

Content Creator Guidelines

I read the terms and conditions in the Content Creator Guidelines and Agreement, which are similar to other platforms’. I cannot go into details for confidentiality reasons, but as public knowledge, there are a few points that can be useful for new writers to consider:

Your stories need to be a minimum of 1,000 words. As you remember from my previous stories, it is 600 words for Vocal Media, and Medium does not have any restrictions.

Each story must have at least one picture credited to its source. All platforms require this. We are extremely sensitive about this on my publications.

Extracts from other sources must be cited. Every platform on earth requires this. They expect original content written by you, not others. I have never seen a platform which accepts plagiarized content. Have you?

As you can see, these are similar requirements on all platforms. This means that you can transfer your writing skills to News Break easily.

Perks for Creators

There are many benefits for writers. You can find the incentives at this publicly available link. I will not go into details, as many writers cover them. For example, a comprehensive story by Tim Denning, published on Better Marketing, is titled The Brutal Truth About News Break for Writers. Tim provides details in this story and compares both platforms neutrally. Even though some criticize Tim’s content, I agree with every point Tim makes in this article, based on my experience with News Break and Medium. As you may know, Tim is a superstar on Medium, LinkedIn, and several other social platforms. He is a source of inspiration for many of us. From my observations, Tim will shine on News Break as well, since he shares high-quality, high-impact stories that matter to individuals and society.
My Initial Experience with News Break

Let me share how my stories performed within 24 hours. I submitted around ten stories on Saturday. All of them were published within 24 hours. The stats on the profile page do not refresh often (I guess every 24 hours), but the creator dashboard refreshes frequently and shows actual impressions, page views, and shares.

My first story performed beyond my expectations. It was a story I had published on Medium and refined a bit to make it appealing to News Break readers. Here is the story, which received 32K+ impressions in a few hours. This is an outstanding performance for me, because when I published a similar story on Medium earlier this year, I received around 20 views, and it was not distributed to topics. However, News Break distributed it to the Beauty & Fashion topic. Only after I shared my Medium story with my circles on social media was it indexed by search engines and turned into reference material. I received around 5K views on Medium, but the views came through my own efforts. My story did not earn income, since external sources generated the traffic. By the way, receiving views and reads is more important than earning income for me, so I am still grateful to Medium for hosting my story.

My second story on News Break also performed well. News Break distributed it to the Health topic immediately. The similar story I published on Medium was not curated, even though my discerning readers adored it. It remained a mystery to many of my readers why Medium did not curate that story.

If you want to read my stories on News Break, you can follow my account. I write about health, mental health, leadership, technology, and content marketing. I will be posting many stories to empower writers on News Break and other platforms, as I did on Medium.

Other Benefits for Creators

As an appreciation for creators, News Break also gives creators an affiliate link to bring more high-quality writers to the platform.
If you haven’t applied yet, you can use the link they provided to me. This is the first affiliate link I have used in my stories, so I do it with full disclosure here. I add this writer application form not just to get points from News Break but for the convenience of our writers who want to apply for this program. Finding the link could be a challenge. It is entirely up to you whether to use it.

Diversifying our writing portfolio is essential for our survival. I don’t know about you, but I don’t want to be exclusive to any writing platform. They can change their terms and conditions at any time. As you may know, I also started writing for Vocal Media, which brought me considerable benefits, as discussed in my story titled Vocal Is Not Just For Money. You can check my Vocal Media profile to read my insights aimed at specific communities.

The only exclusive investments I make are in my mailing lists and my website. I cannot control or influence my readers on other platforms, but I can with my long-term readers who are fans of my content. They understand me and forgive the small mistakes I make, like everyone else does.

By the way, News Break does not allow any calls to action in your stories. We cannot even add our mailing lists to our stories. However, we can add them to our profile page. If you have started writing for News Break, please remember to add a Follow widget, as it must be done manually.

Image: screen capture by the writer — follow DrMehmetYildiz on News Break

I did not know about this feature and overlooked adding it to my previous stories, but one of my mentors reminded me today that it is a useful feature.

How to Collaborate with News Break Writers

To help News Break writers, I created a Quora Space to allow them to share the links to their News Break stories. It is a public forum, and all News Break writers are welcome to participate. Please join the new Quora Space I created. One of the super News Break writers has already joined and shared her content.
For ILLUMINATION writers, I created a News Break channel on our Slack Workspace. All writers can share their News Break stories there. You can learn about our Slack Workspace from this story.

I am impressed that many writers I follow on Medium write for News Break. One of the most inspiring ones is Matt Lillywhite, who has already gained 485K views, publicly visible on his great profile. I also found writers from our publication whom I closely follow, such as top writers Sinem Günel, Rose Bak, Devin Arrigo, Tim Ebl, John Cousins, and Jordan Mendiola, who submitted an article to ILLUMINATION-Curated about his News Break experience yesterday. Jordan is an inspiring writer who leads the way and shares his experience generously. I will compile many more stories in my publications and provide a reference point for our writers.

What If Your Application Is Declined

It is common. News Break declined several applications, at least in my circles. I helped a few friends re-apply with relevant information. Some of them were accepted. My understanding is that you need to present yourself as an entrepreneur, not just a content producer. Your application should convince them that your content will be in demand and that you will be capable of bringing new readers to their platform. I believe that News Break wants to recruit top writers who can take responsibility for their content and audience. You need to walk your talk. If you need help, please contact me. I will help you with your re-application.

Conclusion

I have attempted to introduce the News Break Creator Program as an alternative for writers. Freelance writers need to diversify their writing portfolios. This post is not about condemning or endorsing any platform. Each platform has its mission, strategy, and goals. We need to adhere to their rules to use their services. I enjoy writing on Medium, Vocal Media, and News Break.
I also write for myself, for a specific audience who consume my specialized content that cannot be found on public platforms. To achieve this, I use my mailing list and serve tailored materials to a segmented group of readers.

Each platform has its pros and cons. The biggest pro of Medium for me is collaboration. Medium presented me with the opportunity to connect with thousands of writers and readers. I am grateful. I hope News Break and Vocal will do the same. I am optimistic and keep an open mind for opportunities.

Thank you for reading my perspectives. I wish you the best in your writing career. Please always feel free to contact me when you need help. I am one message away from you.

How to connect with me

I established three significant publications supporting 6,500+ writers and serving 65,000+ readers on Medium. Join my publications by requesting access here. You are welcome to subscribe to my 100K+ mailing list to collaborate, enhance your network, and receive technology and leadership newsletters reflecting my industry experience. I am on the ILLUMINATION Slack Workspace. I use Linktree to share my social platforms. Connect with me on News Break. Connect with me on Vocal Media.
https://medium.com/illumination/my-perspectives-of-news-break-3a8e82a8ffc6
['Dr Mehmet Yildiz']
2020-12-22 23:09:40.059000+00:00
['Writing', 'Technology', 'Self Improvement', 'Entrepreneurship', 'Freelancing']
Giving Text an Inner Shadow with ImageMagick and Perl
Creating a CGI script that composites text with fancy effects onto an existing image is easier than you think.

My memoir, The Accidental Terrorist, is about my youthful misadventures as a Mormon missionary. Missionaries always wear black name tags, so to promote my book I thought it would be nice to give fans a way to create and share their own customized name tag images. To accomplish this, I figured a simple CGI script written in Perl would be best. I had a vague sense that I could use the Perl interface to ImageMagick to overlay a name in bold white text onto a blank name tag image like this one:

Blank name tag graphic (image-magick-step-1.jpg)

What’s more, I wanted the name to look like it had actually been stamped or drilled into the name tag, with maybe a slightly pebbled white surface to give things a nice feeling of texture.

I had used ImageMagick before for some simple applications, and I knew it was a very powerful graphics-processing package. However, it’s also very arcane, without much in the way of user-friendly documentation. (Oh, there’s plenty of documentation. It just helps to be fluent already in graphics-processing-ese to understand it.) Stack Overflow, to name just one forum, overflows with questions about how to do this or that with ImageMagick. I scoured the web for an answer to what I thought was my very simple question about how to make an inner shadow, but I came up empty.

Finally, all I could do was start playing around until I figured it out for myself. I did figure it out, and I’ll lay out my method below in case there’s anyone else out there looking for an answer to the same question. I’m not claiming this is the best solution; in fact, I’m sure there’s probably some fiendishly clever way to do this in ImageMagick with a single convoluted command. Me, though, I like to take things step by step so I can easily see what’s happening at every point and why.
Having said that, my method is pretty straightforward, though a few of the details are a little tricky.

We’ll start with the declarations, initializing a bunch of variables we’ll need later (some of which we can futz around with to adjust our output). That all should be pretty self-explanatory, though we’ll talk more about some of these variables below. Next, we declare a couple of Image::Magick objects and load them with, respectively, the blank name tag graphic from above and the pebbled texture graphic below:

Pebbled texture graphic (image-magick-step-2.jpg)

So far, so good. But before we actually try to print any text on either of these images, we need to gather some information about the text itself — specifically, how wide it will be when rendered. QueryFontMetrics is a method of Image::Magick that, when passed some text descriptors, returns an array of stats about how that text will be rendered. The only return value we’re interested in for our purposes here is $width, which will help us center the text properly.

Our variables $startx and $starty describe the point on the name tag around which we’ll center the text. Knowing the width of the text, we can easily calculate where the upper left corner will need to fall. If we wanted to center the text vertically as well, we could calculate that from the $height value, but in this case, we only need to know where the top edge of the text will fall.

Now we start getting to the interesting stuff. Our next step is to construct a mask, which is a grayscale image used as a filter when compositing one image onto another. The black parts of a mask will render the composited layer transparent, while the white parts will render it opaque. The levels of gray in between provide varying degrees of opacity. I find it a little difficult to think of masks in those terms, though. It might be simpler to think of a mask as a stencil.
You can lay your stencil down on the base layer of the image you want to composite, then sort of “spray-paint” your top layer through it. You’ll see what I mean after a couple more steps. For now, we’re going to create our mask image by initializing a new Image::Magick object, filling it with black, and then printing our (properly positioned) text on it in white. That chunk of code results in the following image:

Our mask layer, stored in the $mask object

See, doesn’t that look like a stencil? We’ll be using this mask in our final step to spray bits of one image onto another while blocking out other bits.

Okay, now we’re going to construct our shadow. This is what we’ll eventually composite with our text layer to give our name tag the 3-D look that we want. To create this shadow, we need to construct a new image that looks a lot like a mask but really isn’t. The process is very similar to making our mask above. We want our shadow to be shaped like our text, so we again build an image with white text on a black background (though we could just as easily use a brown or purple background, or anything else we feel like). But this time we do two things differently. We offset the text a little, in this case moving it down vertically by two pixels. Then we apply a Gaussian blur effect to the image, using a couple of variables that affect the degree to which the image gets blurred (play around with those values to see what happens). This gives us the following result:

Our shadow layer, stored in the $shadow object

Like I said, while this looks very similar to our mask image, it’s not exactly the same sort of thing. What we’re going to do with it — and this is where the magic really starts to happen — is layer a translucent version of it on top of our texture image.
The code to do this is very simple, and it gives us the following image:

Our composite shadow/texture layer, now stored in the $texture object

We now have a composite image that looks like bright fuzzy letters projected onto a pebbled charcoal wall. The fact that the texture is only faintly visible is the result of our $opacity parameter, which we could easily dial up or down, depending on the effect we wanted.

Now we’re ready for the final step. We take that stencil from way back and spray our composite shadow layer through it onto our original blank name tag. We write the result to the file system, and voilà! Here’s our final image, looking quite fine:

Our final composite image (image-magick-step-6.jpg)

There’s no doubt a way to do this in fewer steps, but what we have here was certainly acceptable for my purposes and not all that difficult. If you try this code out with your own images, I’d suggest spending time playing around with the values of the initial parameters, and with different colors for the shadow layer. You might be surprised what you end up with!

Hellfire bevel: $offsetx = -2, $offsety = -2, $sigma = 2, $opacity = ‘85%’, $shadow->ReadImage( ‘canvas:brown’ )

In the end, my script was a little more complicated than what I’ve presented here, giving users a way to input a name and also choose from different image sizes with various slogans. But the code above is where the magic (or rather, the Magick!) all happens.
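The mask-as-stencil behavior described above comes down to simple per-pixel arithmetic. Here is a conceptual sketch in Python (not the Image::Magick API; grayscale pixel values 0–255, with images flattened to plain lists) showing how a mask selects between the base and top layers:

```python
def composite_with_mask(base, top, mask):
    """Blend `top` over `base` through `mask`: a black mask pixel (0)
    keeps the base pixel, a white one (255) takes the top pixel, and
    grays blend the two proportionally."""
    return [
        round(b + (t - b) * m / 255)
        for b, t, m in zip(base, top, mask)
    ]

# A fully white mask "sprays" the top layer straight through,
# while a fully black mask blocks it entirely.
```

The translucent shadow-over-texture step is the same arithmetic with a single constant mask value for every pixel, which is what the $opacity parameter controls.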
https://medium.com/better-programming/giving-text-an-inner-shadow-with-imagemagick-and-perl-d8efd83affb8
['William Shunn']
2019-06-19 17:07:55.715000+00:00
['Design', 'Image Processing', 'Perl', 'Programming']
How to Develop Mind-Blowing Content Strategies
Creating a robust content marketing strategy begins with knowing your strengths. Do you have strong knowledge of your industry and audience? Is there anyone in the company who is a good speaker? Do you know someone who does graphic design? Do you know any good writers? Pool together all of your assets for creation and attack on all of them. YES, all of them.

Starting out, you want to think of yourself as a big experimenter. TEST. TEST. TEST. Test everything in this guide. What works for your business doesn’t work for the next. Testing and exploring gives you the vision to see what’s working and what’s not. This gives you the opportunity to double down on what’s actually working for you.

Right now, it’s images and video content that’s helping. Visual images are said to be processed 60,000x faster in the brain than text. A lot of companies are seeing an ROI on explainer videos and voice-overs, mainly because there is little to no monetary investment needed to make these videos. All you need is a computer and a mic.

Now, I know there is a great handful of people who don’t consider themselves creative in developing content to engage an audience. If you’re not a good writer, designer, or speaker and simply do not have the time to devote to developing these skill sets, then I advise you to hire awesome people like those Steller Creative groups to do it for you. There is no way around this. Either you devote time to creating content, or you get people to do it for you. But if you do have the time and you don’t have the money, then here are some sources for you to be able to create content D-I-Y style:

Videos
● Animaker
● Moovly

Graphic Design
● Canva
● Desygner

Know Your Brand

Once you’ve pooled all your assets together, you’re going to take a strong look at your brand and get it down on paper. You should have a clear written tell-all about what your brand is about. This ensures consistency in your brand presence no matter who does the posting or creating.
It’s important to have very clear positioning for your brand in order to market content effectively. Knowing your values, mission, and persona makes it easy to create concepts that deliver the message you want your audience to hear. If you don’t have your brand on paper, download our Branding Assessment or email me at [email protected] and I’ll personally send you one.

Getting brands down on paper dramatically changed the way we approached developing a content strategy. It put everything we wanted to be online in plain view, spurring ideas and campaigns that could be run. All because we had the business’ foundation inscribed on a piece of oak.

Tooting the Steller horn: we create brand style guides for businesses that need help creating clarity for their brand. Having everything tucked away in your brain and calling it “vision” is pure ignorance. GET IT OUT. We helped grow a salon business by creating content that educated her audience with weird, interesting facts. This idea was inspired by Snapple.

Know Your ENEMY… I mean CUSTOMER!

Understanding your audience is the holy grail of marketing content successfully. Knowing their pain points or struggles, you can position yourself as an authority by offering free information on a blog, or by relating their struggles back to how your product or service solves that problem. Understanding their lifestyle gives you a peek into their typical habits and behaviors. With this information, you’re empowered to relate to your audience on a deep, personal level. They’ll feel like you understand them in a very special way, which will increase your sales. This is communicated back to them through imagery, captions on Instagram, tone of voice, and your business’ story. It’s part of what helps you cultivate a brand persona: a set of character traits that represent your brand as if it were a person.

Content Distribution

ConDis is a job of jobs. Mediocre marketers will say, “All you have to do is post. What’s the big deal?”
I take it more seriously than content creation because in creation we’re flexible and open to new ideas, but when it comes to distribution we’re very data oriented. We like to know when your audience is online the most: what days and what times. Next, the proper hashtags have to be placed on every post or your reach is shortened. I, the copywriter, then create an engaging caption, sometimes with a call to action, to ensure steps are taken to get results. You can see all of this data using your Instagram business tools. Those tools are sharp.

We suggest testing the frequency of your posting on Instagram. We typically recommend 1–2 posts on your profile, but since the release of Stories and Live, you should be dropping 5 or more pieces of content. Facebook is still a pay-to-play platform but follows these same general rules.

Content marketing is a strategic marketing and business process focused on creating and distributing valuable, relevant, and consistent content. This content is meant to attract and retain a clearly defined audience and, ultimately, drive profitable customer action. When done correctly, content marketing helps create a relationship with your audience, which leads to trust. And if your audience trusts you, they’ll be more willing to do business with you when they’re ready to make a purchasing decision.

Your content should attract the right people to your site, convert those people into leads, and nurture and help close them into customers. But it doesn’t stop there: your content should always delight your customers, turning them into promoters of your brand. In a nutshell, content marketing is really just the art of communicating with your prospects and customers without having to sell to them.

STORYTELLING

Why does your business need to tell a story? Everyone loves a great story. People want to feel connected to a group, to belong, and stories create this connection.
Stories give us a reason to communicate and relate; stories are stimulating and give us something to believe in; stories make us feel better, smarter, safer, or even loved. Business storytelling is similar. It’s about creating alignment between your business and your prospects and customers.

To begin, start by asking yourself the right questions about your business and customers. Why does your company do what it does? What’s the story of your potential customers? How do they begin to desire or think about your product? What’s the story of your business? What do you want your product to provide, and how do you deliver on it?

Regardless of the story you’re trying to tell or how you’re trying to tell it, storytelling has three essential elements: characters, conflict, and resolution.

● Character (the primary person you’re trying to reach and educate or inform)
● Conflict (the problem the story is trying to solve)
● Resolution (the solution it offers)

KEEP FRESH IDEAS

Why do you need a process for generating content ideas? A content generation process will allow you to come up with a predictable flow of original, high-quality, and relevant content ideas. There are four things you should keep in mind when generating content ideas on your own:

● What are your buyer personas’ habits?
● What are your competitors doing?
● What are people talking about on question-and-answer sites like Quora?
● What can you learn from your search engine optimization efforts?

Each idea should be educational or informative about your industry, not your brand. Most people don’t know who you are yet, so you need to attract them with valuable thought-leadership content.

PLANNING A LONG-TERM CONTENT STRATEGY

Why is long-term content planning important for your business? When it comes to creating content, you want to remain as reactive and agile as you can to make the most of your time.
Having a plan will give you and your team the ability to remain reactive to upcoming initiatives, stay organized, and proactively manage the content required for your marketing tasks. Your content marketing efforts should always be targeted at at least one of your business’ buyer personas.

● What’s your primary buyer persona’s background? (Job? Career path? Family?)
● What are your primary buyer persona’s demographic traits? (Male or female? Age? Income? Location?)
● What are your primary buyer persona’s identifiers? (Behavior? Communication preferences?)
● What are your primary buyer persona’s goals? (Primary goal? Secondary goal?)
● What are your primary buyer persona’s challenges? (Primary challenge? Secondary challenge?)

Identify the buyer’s journey for your primary buyer persona. The goal is to help them through the awareness, consideration, and decision stages. And while there’s no magic number for the amount of content within the buyer’s journey, let’s start by identifying three pieces of content, one for each stage of the buyer’s journey. We recommend starting with a comprehensive, educational awareness-stage resource like a guide or eBook. This way, you can prove your value and help your primary buyer persona understand your industry, which is a great way to start building a relationship.

Understanding the different sectors that go into content marketing makes it easier to sit down with your team and brainstorm how you can relate this practical strategy to your business and brand.

This is an excerpt from my eBook “The Definitive Guide To Content Marketing For Small Businesses”. It’s a clear and concise path to mastering the logistical and creative processes that are often overlooked amid the smoke and mirrors of marketing in the modern world. You can get it for free here.

Thanks so much for reading! Hit the heart button if you enjoyed this article. It means a lot to me!
https://medium.com/steller-media-marketing/how-to-develop-mind-blowing-content-strategies-112980ae5c5b
['Darrell Tyler']
2017-06-12 15:46:50.084000+00:00
['Content Marketing', 'Startup', 'Marketing', 'Entrepreneur', 'Content Strategy']
Home Credit Loan Default Risk?
5.1 Target Count

The target variable is 0 if the client will repay the loan and 1 if the client will default. It is clear from the graph that the dataset is imbalanced.

Some features that were explored:

Amount Credit (credit amount of the loan): the amount requested by the applicant.

5.2 Amount Credit Analysis

The density is high for amounts less than 10⁶ for both types of applicants, those who can pay and those who have difficulty paying back their loans. The graph looks right-skewed.

Conclusion: The data overlaps heavily between the people who were able to pay the loan on time and the people who had difficulty paying the loan back, so this feature is not of much use.

Amount Income Total (income of the client):

5.3 Amount Income Analysis

AMT_INCOME_TOTAL is the income of the client. The income of both types of clients is less than 20,000,000.0, except for one client who has an income of 117,000,000.0 and still was not able to pay the loan. If we plot AMT_INCOME_TOTAL < (0.2 x 1e8), we remove that one outlier, but we still can’t clearly make out whether the majority of both classes overlap or not. After plotting AMT_INCOME_TOTAL < (1 x 1e6), it is clear that the majority of values overlap heavily for both classes.

Conclusion: Since the majority of values for this feature overlap in both classes, this feature is not of much use.

Days Employed (in years)

5.4 Days Employed (in years) Analysis

After plotting the number of years employed, there are some clients in both target groups who appear to have worked for 1,000 years, which is an outlier. After removing the value 365243 from the number of days employed, the maximum number of years a person has worked is 50. It is visible that clients with less than 10 years of experience had more difficulty repaying loans.

Conclusion: There is slightly less overlap in the maximum density of the two target values, so this feature will be useful.
Days Birth (in years)

5.5 Days Birth (in years) Analysis

The majority of the young age group (20–40) has difficulty repaying the loan. As age increases, the age group (50–70) mostly repays the loan.

Conclusion: There is slightly less overlap in the maximum density of the two target values, and younger clients are more likely to default, so this feature will be useful.

Feature engineering is an important skill in such problems, but creating features that actually affect the target variable is more important. My main task was finding the features in every table that affect the target variable.

Let’s see the number of documents submitted by people who were able to pay the loan versus those who weren’t.

5.6 Target 0

5.7 Target 1

Conclusion: Both groups actually submit the same proportion of documents during the application. There is one client submitting 4 documents in the target 1 section, which is different but not very helpful.

5.8 Flag Document 3 Analysis

Both types of clients mostly submit only one document, which is document 3. DOCUMENT_3 is the most common document submitted by applicants. FLAG_DOCUMENT_3 == 0 means document 3 was not submitted; FLAG_DOCUMENT_3 == 1 means it was submitted. FLAG_DOCUMENT_3 is mostly submitted by the clients who can repay the loan and not submitted by the clients who cannot, but this could also be because the data is imbalanced.

Conclusion: FLAG_DOCUMENT_3 could be a useful feature.

EXT_SOURCE_1, EXT_SOURCE_2, EXT_SOURCE_3 (normalized scores from external data sources)

5.9 External Source Analysis

If External Source 1 < 0.4 the client tends to default, and if External Source 1 > 0.4 the client tends to repay; there is visible separation between the two classes. If External Source 2 < 0.5 the client tends to default, and if External Source 2 > 0.5 the client tends to repay; again there is visible separation between the two classes.
If External Source 3 < 0.4 the client tends to default, and if External Source 3 > 0.4 the client tends to repay; there is visible separation between the two classes.

Conclusion: EXT_SOURCE_1, EXT_SOURCE_2, and EXT_SOURCE_3 are very useful features.

Encoding of Categorical Values

There are mainly two ways of encoding: label encoding and one-hot encoding. A label encoder gives an ordinal value to the variable (e.g., 1, 2, 3), which is not valid for the categorical columns of this data, so I used one-hot encoding. One-hot encoding can be done using pandas.get_dummies.

How do you find the relation of features with the target variable?

1st Approach: Compute the Pearson correlation coefficient and find the top 20 features negatively or positively correlated with the target.

5.10 Pearson’s correlation of application train data with the target variable

2nd Approach: Simply train a model on the data without any hyperparameter tuning and get the feature importances. I knew two libraries for doing this: one is Random Forest and the other is the Light Gradient Boosting Machine. I selected Light Gradient Boosting because it gave me a high ROC-AUC score on the train data. LightGBM is a fast, distributed, high-performance gradient boosting framework based on decision tree algorithms, used for ranking, classification, and many other machine learning tasks.

Top 30 features after training LightGBM on the application train data:

5.11 LightGBM top 30 features on application train data

I gave priority to the 2nd approach because correlation is computed between two continuous variables, but here my target variable is discrete, i.e., {0, 1}, so the results are not completely trustworthy. Also, when I created new features using the top features from the 2nd approach, the ROC-AUC score improved, which didn’t happen when I used the 1st approach.
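The one-hot step above can be sketched with pandas.get_dummies. This is a minimal toy example (the column names here are hypothetical stand-ins, not from the Home Credit data dictionary):

```python
import pandas as pd

# Hypothetical slice of application data with one categorical column.
df = pd.DataFrame({
    "AMT_INCOME_TOTAL": [200000, 150000, 300000],
    "CONTRACT_TYPE": ["Cash", "Revolving", "Cash"],  # categorical
})

# One-hot encode the categorical column; numeric columns pass through unchanged.
encoded = pd.get_dummies(df, columns=["CONTRACT_TYPE"])
print(list(encoded.columns))
# -> ['AMT_INCOME_TOTAL', 'CONTRACT_TYPE_Cash', 'CONTRACT_TYPE_Revolving']
```

Each category becomes its own 0/1 indicator column, which avoids imposing a fake ordering on the categories the way a label encoder would.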
Training only on the application train data with the LightGBM model gave an ROC-AUC score of 0.8024.

Let us look at the feature of highest importance by LightGBM in fig 5.11, which is Payment Rate Inverse.

Payment Rate Inverse: the ratio of Amount Credit (the amount given by Home Credit) to Amount Annuity (the amount to pay back every month).

5.12 Payment Rate Inverse Analysis

Clients with a value less than 12 tend to repay the loan. Clients with a value greater than 12 and less than 18 tend to default. For a Payment Rate Inverse between 22 and 35, there is a high probability that the client will repay the loan.

Conclusion: The Payment Rate Inverse feature is very useful.

b. bureau.csv & bureau_balance.csv

bureau_balance.csv: monthly balances of previous credits in the Credit Bureau. This table has one row for each month of history of every previous credit reported to the Credit Bureau.

5.13 bureau_balance head

The two features in this table are:

MONTHS_BALANCE: month of the balance relative to the application date (-1 means the freshest balance date).

STATUS: status of the Credit Bureau loan during the month (active, closed, DPD 0–30, … [C means closed, X means status unknown, 0 means no DPD, 1 means maximal DPD during the month between 1–30, 2 means DPD 31–60, … 5 means DPD 120+ or sold or written off]).

I applied some aggregations to bureau_balance.csv, such as min, max, and sum, plus a month-wise aggregation of STATUS == C, which gives the number of credit accounts closed within a particular number of months.

bureau.csv: all of a client’s previous credits provided by other financial institutions that were reported to the Credit Bureau (for clients who have a loan in our sample). For every loan in our sample, there are as many rows as the number of credits the client had in the Credit Bureau before the application date.
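The min/max/sum aggregations over bureau_balance can be sketched roughly like this (toy data standing in for the real table, which has one row per month per previous credit):

```python
import pandas as pd

# Toy stand-in for bureau_balance.csv.
bb = pd.DataFrame({
    "SK_ID_BUREAU": [1, 1, 1, 2, 2],
    "MONTHS_BALANCE": [-3, -2, -1, -2, -1],
    "STATUS": ["C", "0", "C", "X", "C"],
})

# Flag the months where the credit was closed, then aggregate per previous credit.
bb["STATUS_C"] = (bb["STATUS"] == "C").astype(int)
agg = bb.groupby("SK_ID_BUREAU").agg(
    months_min=("MONTHS_BALANCE", "min"),    # oldest month on record
    months_max=("MONTHS_BALANCE", "max"),    # freshest month on record
    closed_months=("STATUS_C", "sum"),       # months reported as closed
)
print(agg)
```

The resulting per-credit table can then be rolled up again to one row per loan applicant and joined back to the application data as new features.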
https://medium.com/analytics-vidhya/home-credit-loan-default-risk-7d660ce22942
['Winston Fernandes']
2020-12-23 16:42:22.633000+00:00
['Kaggle', 'Artificial Intelligence', 'Deployment', 'Data Science', 'Machine Learning']
For 8-Hours My Life Became an Episode of SpongeBob
The more I grow up, the more my life becomes an episode of SpongeBob. No, not the ones where the main characters play in an imaginary box or spend the afternoon jellyfishing. My life is becoming one of the tragic episodes of SpongeBob. Thankfully, tragedy can be hilarious if it isn’t happening to you. Don’t believe me? Well, if tragedy weren’t funny, “America’s Funniest Home Videos” wouldn’t have run for 31 seasons, and the guys from the show “Jackass” would likely stop punching Steve-O in the balls so many times. As adults, sharing our tragedies is how we cope with them and subsequently laugh at them. And this is why SpongeBob works for all ages. One second you’re watching a sponge and his aquatic friends go on a silly adventure, and the next, that same sponge is giving his best friend a cake that says: “Sorry About the Scabies.” So, at my own expense, here’s a great story of how I ended up in Rock Bottom — *ahem* I mean Queens, New York, at 2 AM.
https://medium.com/illumination-curated/for-8-hours-my-life-became-an-episode-of-spongebob-f51b34660e3b
['Isaiah Mccall']
2020-09-26 07:06:03.365000+00:00
['New York', 'Spongebob', 'Self', 'Storytelling', 'Television']
Image Recognition APIs: Google, Amazon, IBM, Microsoft, and more
Imagine sitting in a foreign café, hungry, checking out the menu for a tasty bite to eat. Unfortunately, but not surprisingly, the menu’s, say, in Portuguese and you only speak English. The good news is the menu has a photo of each dish. So, you look at the pictures, recognize the dishes offered, and place your order.

While learning by example comes naturally to us humans, it’s not that easy for machines and software applications. To know how to recognize even one object, they must learn its distinguishing features from a ton of images taken from various angles, a complex process that takes a lot of time and effort. Enabling computers to understand the content of digital images is the goal of computer vision (CV). Machine learning specialist Jason Brownlee points out that computer vision typically involves developing methods that attempt to reproduce the capability of human vision.

Let’s get back to our food ordering situation. If a computer had to solve this problem, it could use its image recognition capability.

What is image recognition and computer vision?

Image recognition (or image classification) is the task of categorizing images and objects into one of several predefined distinct classes. Solutions with image recognition capability can answer the question “What does the image depict?” For example, they can distinguish between types of handwritten digits, a person and a telephone pole, a landscape and a portrait, or a cat and a dog (a frequent example).

Image recognition is one of the problems being solved within the computer vision field. Computer vision is a broader set of practices that covers tasks such as:

Image classification with localization — identifying an object in the image and showing where it’s located by drawing a bounding box around it.
Object detection — assigning label classes to multiple objects in the image and showing the location of each with a bounding box; a variation of image classification with localization for numerous objects.

Object (semantic) segmentation — identifying the specific pixels that belong to each object in an image.

Instance segmentation — harder than semantic segmentation because it requires differentiating multiple objects (instances) belonging to the same class (e.g., breeds of dogs).

This slide from the lecture on detection and segmentation helps us understand the difference between computer vision tasks:

Different computer vision problems. Source: Stanford Lecture Slides. Lecture 11: Detection and Segmentation

With the basics in mind, let’s explore off-the-shelf APIs and solutions you can use to integrate visual data analysis into your new or existing product.

Image recognition APIs: features and pricing

Computer vision products are usually one of the features customers can access through MLaaS platforms. MLaaS stands for machine learning as a service: cloud-based platforms providing tools for data preprocessing, model training, and evaluation, as well as analysis of visual, textual, audio, video, or speech data. MLaaS platforms are developed both for seasoned data scientists and for those with minimal expertise, and they can be integrated with cloud storage solutions. Providers offer various features for visual data processing that address use cases typical for given industries.
Image classification, object detection, visual product search, processing of documents with printed or handwritten text, medical image analysis — these and other tasks are available on a pay-as-you-go basis in most cases. Let’s overview some of them, focusing on two main aspects: 1) the types of entities these systems can recognize and 2) pricing.

Google: Cloud Vision and AutoML APIs for solving various computer vision tasks

Google provides two computer vision products through Google Cloud via REST and RPC APIs: Vision API and AutoML Vision. Cloud Vision API enables developers to integrate such CV features as object detection, explicit content detection, optical character recognition (OCR), and image labeling (annotation). You can detect:

Faces and facial landmarks. You can identify face landmarks (i.e., eyes, nose, mouth) and get confidence ratings for face and image properties (i.e., joy, surprise, sorrow, anger). Individual face recognition isn’t supported.

Entities (labels). With the Vision API, you can detect and extract information about entities in an image across a broad group of categories. Labels can represent general objects, products, locations, animal species, activities, etc. The API supports English labels, but you can use the Cloud Translation API to translate them into other languages.

Logos. Identify the features of popular product logos.

Optical character recognition (OCR). Detect printed and handwritten text in images and PDF or TIFF files.

Popular landmarks. The landmark detection feature allows for detecting natural and man-made structures within an image.

Explicit content. The API evaluates content against five categories — adult, spoof, violence, medical, and racy — and returns the likelihood score of each being present in an image.

Web references. The API returns web references to an image, such as a description, entity id, fully matching images, pages with matching images, visually similar images, and best-guess labels.

Image properties.
Identify characteristics like the dominant color.

AutoML Vision is another Google product for computer vision that allows for training ML models to classify images according to custom labels. You can upload labeled images directly from your computer. If images aren’t annotated but are located in folders named for each label, the tool will assign those labels automatically. Users can also get their dataset annotated by human operators. The product is currently in beta.

Google allows users to see how the API analyzes an image of their choice:

The API analyzes an image against five categories. Picture source: Wallcoo.net

Pricing. Vision API users are charged per image, or more precisely per billable unit: each feature applied to an image. The first 1,000 units per month are free. From the 1,001st unit to 5,000,000 units per month, features cost from $1.50 to $3.50 per 1,000 units. For units 5,000,001–20,000,000 per month, label detection costs $1.00 per 1,000 units and the rest of the features cost $0.60. You can check their price calculator.

AutoML Vision pricing depends on the feature used. For example, image classification prices depend on the amount of training required ($20 per hour), the amount of human labeling requested, the number of images, and the prediction type (online or batch). Online prediction becomes billable after 1,000 images: analyzing images 1,001–5,000,000 costs $3 per 1,000 images. If you choose batch prediction, the first node hour is free per account (one time), and then it’s $2.02 per node hour.

Amazon Rekognition: integrating image and video analysis without ML expertise

Amazon Rekognition allows for embedding image and video analysis in applications. The service is based on the same technology used to analyze image and video data for the Amazon Photos service, and users aren’t required to have machine learning expertise. The Rekognition API features let you do the following:

Recognize entities, objects, and activities.
Detect labels: objects (i.e., people, cars, furniture, clothes, pets), scenes (i.e., woods, a beach, a city street), concepts (outdoors), and activities (i.e., playing soccer, skating).

Recognize and analyze faces. You can detect a person in a photo or video, detect facial landmarks and expressed emotion, get a percent confidence score for the detected face and its attributes, and save facial metadata. You can also compare a face in an image with faces detected in another image.

Recognize celebrities. You can identify famous people in videos and images.

Capture movements. The service allows you to track the path people take in a video, their location, and their facial landmarks.

Detect unsafe content. Amazon Rekognition identifies explicit nudity, suggestive content (underwear or swimwear), violence (i.e., physical weapons), and disturbing scenes (corpses, hanging).

Detect text in images. Detect and recognize text such as captions, street names, product names, and license plates.

Detection of multiple objects in an image. Source: Amazon Rekognition documentation

Pricing. Amazon has a free tier for its recognition services. Users pay for the number of media files they analyze. Pricing also depends on the region, so customers from Ireland and Northern Virginia, for instance, would pay slightly different sums. You can use the pricing page to get a quote.

Users can analyze 1,000 minutes of video and 5,000 images, and store up to 1,000 face metadata records, for free per month for the first year. Below we provide costs for Northern Virginia (US East) customers as an example. Analyzing archived video beyond that costs $0.10 per minute (billed per second); live stream video analysis is $0.12 per minute. Storage of face metadata is $0.01 per month per 1,000 records.

Image analysis pricing decreases with volume. The first 1 million images processed cost $1.00 per 1,000 images, the next 9 million $0.80, and the next 90 million $0.60. If your workload is over 100 million images per month, you’d pay $0.40 per 1,000 images.
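Tiered pricing like this is easy to estimate yourself. A small sketch of a tier calculator (using the per-1,000-image US East rates quoted above; this is an illustration, not an official pricing tool):

```python
def tiered_cost(images, tiers):
    """Total cost under tiered per-1,000-image pricing.

    tiers: list of (tier_size_in_images, price_per_1000);
    the last tier may have size None, meaning unbounded.
    """
    total = 0.0
    remaining = images
    for size, price in tiers:
        in_tier = remaining if size is None else min(remaining, size)
        total += in_tier / 1000 * price
        remaining -= in_tier
        if remaining <= 0:
            break
    return total

# Rekognition-style image analysis tiers quoted above (US East example).
REKOGNITION_TIERS = [
    (1_000_000, 1.00),   # first 1M images: $1.00 per 1,000
    (9_000_000, 0.80),   # next 9M: $0.80 per 1,000
    (90_000_000, 0.60),  # next 90M: $0.60 per 1,000
    (None, 0.40),        # beyond 100M: $0.40 per 1,000
]

# 12M images/month: 1M at $1.00 + 9M at $0.80 + 2M at $0.60 per 1,000.
print(tiered_cost(12_000_000, REKOGNITION_TIERS))  # -> 9400.0
```

The same function works for any vendor with volume tiers; only the tier table changes.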
IBM Watson Visual Recognition: using off-the-shelf models for multiple use cases or developing custom ones

IBM provides the Watson Visual Recognition service on the IBM Cloud. It relies on deep learning algorithms to analyze images for scenes, objects, and other content. Users can build, train, and test custom models within or outside of Watson Studio.

The demo of a custom model by vehicle glass repair company Belron. Source: IBM

Another feature, available in beta, enables users to train object detection models. Pre-trained models include:

General model — provides default classification across thousands of classes
Explicit model — defines whether an image is inappropriate for general use
Food model — identifies food items in images
Text model — extracts text from natural scene images

Also, developers can include custom models in iOS apps with Core ML APIs and work in a collaborative cloud environment with Notebooks in Watson Studio.

Pricing. IBM offers two pricing plans: Lite and Standard.

Lite: Users can analyze 1,000 images per month with custom and pre-trained models for free and can create and retrain two free custom models. The provider also offers Core ML exports as a special promotional offer.

Standard: Image classification and custom image classification cost $0.002 per image, and training a custom model costs $0.10 per image. Free Core ML exports are also included in the plan.

Microsoft: processing of images, videos, and digital documents

Microsoft Azure Cloud users have a variety of features to choose from among Microsoft’s Cognitive Services. Vision services are classified into six groups that cover image and video analysis, face detection, and written and printed text recognition and extraction. The APIs are RESTful. Here is a short list of Microsoft Cognitive Services features:

Face detection.
Detect up to 100 people in one image with their locations, identifying attributes like age, gender, emotion, head pose, smile, makeup, or facial hair, and detect 27 landmarks for each face (Face API).

Adult content detection. With the Computer Vision API, detect whether an image is pornographic or suggestive.

Brand recognition. Detect brands within an image, including their approximate location (Computer Vision API). The feature is only available in English.

Landmark detection. Identify landmarks if they are detected in the image (Computer Vision API).

Celebrity recognition. Recognize celebrities if they are present in an image (Computer Vision API).

Image properties definition. Define the image’s accent color and dominant color, and whether it’s black and white (Computer Vision API).

Image content description and categorization. Describe the image content with a complete sentence and categorize the content (Computer Vision API).

Information extraction from documents. Extract text, key/value pairs, and tables from documents, receipts, and forms (the Form Recognizer service).

Text recognition. Recognize digital handwriting, common polygon shapes, and the layout coordinates of inked documents (the Ink Recognizer service).

The demo with a digital document processed with Form Recognizer. Source: Microsoft Azure

Pricing. The cost of services depends on the API used, the region, and the number of transactions (not API calls). For example, up to 1 million transactions with the Face API cost $1 per 1,000 transactions, while over 100 million transactions cost $0.40 per 1,000. Detecting adult content with the Computer Vision API at up to 1 million transactions costs $1.50 per 1,000 transactions; at 100 million or more transactions, it’s $0.65 per 1,000.

Clarifai: custom-built and pre-built models tailored for different business needs

Clarifai has developed 14 pre-built computer vision models for recognizing visual data.
The service is accessible through the Clarifai API. The provider emphasizes the simplicity of using its computer vision service: you send inputs (an image or video) to the service, and it returns predictions. The type of prediction depends on the model you run. Each pre-built model identifies given image properties and contained concepts. With off-the-shelf models, you can, for instance:

identify clothing, accessories, and other items typical for the fashion industry
detect dominant colors present in the image
recognize celebrities
recognize more than 1,000 food items down to the ingredient level
detect NSFW (Not Safe For Work) and unwanted content (the NSFW and Moderation models)
detect faces and their locations, as well as predict attributes like gender, age, and descent

The demo of the General model. Source: Clarifai

The company also caters to the peculiarities of businesses in travel and hospitality and wedding planning by providing models that “see” related concepts. Training models based on specific images and concepts is also available.

Pricing. Clarifai’s pricing is also usage-based. Customers have three pricing plans to choose from: Community, Essential (pay as you go or monthly invoice), and Enterprise & Public Sector (pricing available on demand). Plan services include machine learning operations, hosting, consultation, mobile SDKs, infrastructure, and more.

Community: includes 5,000 free operations, 10 free custom concepts, and 10,000 free input images, among other features.

Essential: Users can train custom models for $1.20 per 1,000 model versions. Predictions with pre-built models cost $1.20 per 1,000 operations, and predictions with custom models are $3.20 per 1,000 operations. Searching for images costs $1.20 per 1,000 operations; adding or editing input images also costs $1.20 per 1,000 operations.
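Most of these services follow the same request shape: you POST an image (often base64-encoded) plus the features you want to a REST endpoint and get JSON predictions back. A minimal, vendor-neutral sketch of building such a request body (the endpoint and field names below are hypothetical, for illustration only; consult each vendor's API reference for its real schema and authentication):

```python
import base64
import json

# Hypothetical endpoint; each real service defines its own URL and auth scheme.
ENDPOINT = "https://vision.example.com/v1/annotate"

def build_request(image_bytes, features):
    """Build a JSON-serializable body: base64-encoded image plus requested features."""
    return {
        "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
        "features": [{"type": f, "max_results": 10} for f in features],
    }

body = build_request(b"\x89PNG...", ["LABEL_DETECTION", "TEXT_DETECTION"])
payload = json.dumps(body)  # this string would be POSTed to ENDPOINT with an API key header
print(len(body["features"]))  # -> 2
```

Base64 encoding matters because raw image bytes aren't valid JSON; the response typically mirrors this structure with one result object per requested feature.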
Zebra Medical Vision: medical image analysis tools for radiologists

Specialists from the healthcare sector aren't left out of image recognition tools. Zebra Medical Vision provides solutions for analyzing medical images — computed tomography and X-ray scans — in real time. The company notes that it uses a proprietary database of millions of imaging scans, along with machine and deep learning tools, to develop software for managing radiologists' workflows. There are three solutions focused on identifying specific conditions and one for flagging and prioritizing cases. Together they can detect brain, cardiovascular, lung, liver, and bone disease in CT scans, 40 different conditions in X-ray scans, and breast cancer in 2D mammograms. Zebra Medical Vision is HIPAA and GDPR compliant.

Pricing. Zebra's AI1 all-in-one solution comes with a fee of up to $1 per scan.

You can also review tools by other vendors, such as DeepAI, Hive, Nanonets, or Imagga. The image and video moderation API by Sightengine, xModerator's image moderation service, or the APIs and SDKs for facial and body recognition from the Face++ AI Open Platform could also be a good fit for you.

How to choose an image recognition API?

Plenty of commercial APIs for image recognition and other computer vision tasks are available, so the mission is to select the one that meets your needs and requirements. You can evaluate offerings against these criteria:

Visual analysis features. Explore product pages and documentation to learn which entities the API can recognize and detect. The documentation always contains more detailed information, so we advise giving it a read.

Type of visual data and analysis mode. Does the API or product support image analysis, video analysis, or both? Vendors also specify what types of predictions (batch and online) they provide.

Billing.
Vendors offer usage-based pricing and keep most pricing information open, so you can estimate how much each solution would cost based on your projected workload.

API usage. APIs only become useful when developers know how to use them. Tutorials on enabling the APIs and making API calls, along with example responses, can all be found in the documentation.

Support. Technical support should be available 24/7 via multiple channels (phone, email, forum, etc.). Vendors usually offer multiple support plans for purchase.
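To make the Billing criterion concrete, here is a rough cost sketch using the per-1,000-operation rates quoted in this article (Azure Face API at $1.00, Azure adult-content detection at $1.50, Clarifai Essential pre-built models at $1.20 with 5,000 free Community operations, Zebra at up to $1 per scan). The monthly workload figure is made up for illustration; real invoices depend on volume tiers, regions, and free quotas, so treat this as an estimation aid rather than a quote.

```python
def usage_cost(operations: int, price_per_1000: float, free_ops: int = 0) -> float:
    """Cost of a monthly workload under simple usage-based pricing:
    a flat rate per 1,000 operations after an optional free tier."""
    billable = max(0, operations - free_ops)
    return billable / 1000 * price_per_1000

# Hypothetical workload: half a million images analyzed per month.
monthly_ops = 500_000

azure_face = usage_cost(monthly_ops, 1.00)                 # Face API, first-1M tier
azure_adult = usage_cost(monthly_ops, 1.50)                # adult-content detection tier
clarifai = usage_cost(monthly_ops, 1.20, free_ops=5_000)   # Essential, pre-built models
zebra_upper_bound = monthly_ops * 1.00                     # "up to $1 per scan"

print(f"Azure Face API:        ${azure_face:,.2f}")
print(f"Azure adult detection: ${azure_adult:,.2f}")
print(f"Clarifai pre-built:    ${clarifai:,.2f}")
print(f"Zebra (upper bound):   ${zebra_upper_bound:,.2f}")
```

Running the numbers this way makes the differences obvious: for commodity tagging, the per-1,000 rates of the general-purpose APIs are within a factor of two of each other, while a per-scan medical product sits in an entirely different price class.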
https://altexsoft.medium.com/image-recognition-apis-google-amazon-ibm-microsoft-and-more-6e9037e1aed0
['Altexsoft Inc']
2020-01-14 12:53:32.561000+00:00
['IBM', 'Amazon', 'API', 'Artificial Intelligence', 'Machine Learning']
How I Passed the Microsoft Azure Fundamentals Certification in 5 Days
How did I prepare for the exam? I primarily used 2 resources, and I highly recommend you stick with the same. The mock tests and external paid courses are absolutely unnecessary. Trust me; it's easier than you think.

The Plan

Day 1: Azure Training Webinar Day 1 [3 hrs]
Day 2: Azure Training Webinar Day 2 [3 hrs]
Day 3: Azure Fundamentals Learning Path — Parts 1, 2, 3 [3 hrs]
Day 4: Azure Fundamentals Learning Path — Parts 4, 5, 6 [3 hrs]
Day 5: Review of my short notes [1 hr] + Examination [2 hrs]

As you'll notice, I used only these 2 resources and booked my examination on Day 3. The plan is as simple as it can get; let's quickly go over the resources.

Microsoft Azure Virtual Training Day

This is typically a 2-day webinar held by the Azure Training team to help more professionals get familiar with the Azure cloud platform. You can check for the next scheduled event from this link; they run these multiple times a year. In my opinion, they are the best way to get started. Everything you need is explained clearly by the trainers, and you get to ask them questions and clarify any doubts you may have. Think of it as a crash course through the content to get a good overview of what you will be tested on. If not for these, I wouldn't have even attempted the exam. They cover what's tested on the examination and were helpful in kickstarting my preparation.

Azure Fundamentals Learning Path

If the webinar was the crash course, the learning path is the in-depth text tutorial. Yes, I said text! I personally like text materials because they let me skim through and take notes easily at my own pace. I spent the next 2 days going through the learning path, solidifying all the concepts I heard on the training days.

Azure Fundamentals Learning Path (Screenshot by Author)

There are questions to check your understanding at the end of each learning path, and by logging into your Microsoft account, you can save your progress (as I did above).
If you go through both of the above resources, you will naturally feel comfortable about the exam and won't have to waste money on any mock tests or additional courses.

Bonus Tip: Though practical experience is not required for this examination, knowing the Azure Portal and its interface was helpful for me. So if you're not familiar with the Azure Portal, feel free to create a free trial account and play around, familiarizing yourself with the icons and Azure's overall offerings.
https://towardsdatascience.com/how-i-passed-the-microsoft-azure-fundamentals-certification-in-5-days-75b8e261d5d1
['Arunn Thevapalan']
2020-12-27 15:47:13.626000+00:00
['Cloud Computing', 'Artificial Intelligence', 'Education', 'Machine Learning', 'Data Science']
Is the Concept of the Higher Self an Illusion?
I've wanted to hook up with my higher self ever since I can remember. Sometimes, I imagine I'm almost there. Wisdom streams, and I think, "This is it! The smartest part of me is awake." Later, of course, I'm back at the start of my journey and just as unenlightened as ever. Still, I continue to study religion, philosophy, and science, gather New Age gleanings, and read ancient texts and anything else I believe could hold the key to my inner spark. Nonetheless, I concede that if you consider the higher self long enough, you might conclude it doesn't even exist. Here's why.

The witness of thoughts

When I first discovered I could step out of my thoughts and witness them, I thought I was a step closer to enlightenment. Perhaps the watcher of my thoughts was my higher self and had been patiently listening from the beginning of time. Indeed, if you observe your thoughts often, your self-understanding will increase. You'll spot patterns of behavior and adjust so you improve. Witnessing your thoughts is a way to gain control of your emotions, too. No longer swept along by a torrent of feelings, you are better able to steer your life in the direction you choose. Still, there's a problem with the idea that the witness is your higher self after all. It's possible, after many witnessing sessions, to step back not only from your thoughts but from the witness of your thoughts.

Who watches the watcher?

The question then arises: who watches the watcher? Is that the higher self? Probably not, since self-observation is like the old analogy of peeling an onion; you keep stripping off layers only to find another below the last one, and you never find the center of the onion, your inner core, because the peeling doesn't end. In his lecture about tapping into the higher self, the philosopher Alan Watts describes the conundrum as similar to thieves climbing floors in a house when the police come looking for them. Up and up they go, yet they never escape or reach the top.
Chasing your higher self is much the same, since you are only ever a floor above what you imagine is your ego, and the enlightened part of you is always out of reach. Watts suggests the witness of thoughts, the so-called wisest part of you, is just another layer of the ego anyway, and attempting to be enlightened could be pointless. His argument is that we probably wouldn't enjoy being surrounded by people with their higher selves in operation. Can you imagine hordes of enlightened, wisdom-spewing gurus flocking over the hills? It might be annoying, to say the least.

What's more, we need a variety of people in the world. Someone absorbed in self-reflection, for example, might produce useful insights. If they dropped their ego, flashes of inspiration might not arise. Also, if everyone were enlightened, we wouldn't need to learn and grow. So what then? The meaning of life might disappear.

Evidence of the higher self seems to exist. People said to be gurus, and those who work on self-improvement with yoga, tai chi, philosophy, and so on, display signs of potential enlightenment. But their egos are still evident. Even nuns, monks, and sages have grouchy moments and days when their egos are in full flood. So maybe my journey to the higher self is an illusion, and I'm discarding layers of my ego only to uncover fresh ones. Nevertheless, I'll always seek wisdom because there's an obvious advantage to doing so. When you chip away at egoism, you find greater understanding beneath. My higher self might not emerge, but at least my lower self is in a consistent state of removal.
https://medium.com/the-bolt-hole/is-the-concept-of-the-higher-self-an-illusion-e5aa231cec10
['Bridget Webber']
2020-12-11 12:42:00.961000+00:00
['Mental Health', 'Personal Development', 'Self Improvement', 'Life', 'Psychology']
How to Spot Master Manipulators and Avoid Being Played
How to Spot Master Manipulators and Avoid Being Played

Learn to recognize the textbook patterns of narcissists, sociopaths, and psychopaths to protect yourself and those you love from being abused

Photo by JESHOOTS.COM on Unsplash

Master manipulators act in patterned and predictable steps. Through their twisted lens, the world is their chessboard and people are pawns to be used and abused. If we know what to watch for, we'll be far less likely to be played by these cons, whether as individuals or as a society. So let's walk through the typical strategies in a narcissistic playbook.

Manipulators Set Their Mark

First, master manipulators set their focus on a target. This target may be a person, group, system, or nation that they'll try to exploit for purposes of self-gain or simply to feel a surge of power and control. These manipulators fail to understand that true strength is choosing love and kindness, and because they lack any sense of empathy or compassion for other people, they actually view kindhearted and honest souls as weak and pathetic. They often refer to their targets as "Losers" because they believe they're "Winning" at some kind of game.

Unable to form healthy relational attachments, these psychologically damaged individuals rely on manipulative maneuvers when interacting with other people. They often start by luring a target during a grooming period. This phase may or may not include love bombing, but ultimately this is the stage in which a healthy-minded person believes they're entering into a genuine, trusting, safe relationship (whether platonic, professional, or intimate). In this stage of their game, the abusers convince a target that they're trustworthy and on their side in the world — more than anyone else. Then they establish solid trauma bonds as they gaslight and brainwash the target through a progressive slide of abuse and alienation.
This pattern is common for sexual predators, trafficking rings, and domestic abusers (whether physical, sexual, spiritual, financial, or psychological). But these same textbook maneuvers are used by con artists of all types, building the trust of a target while gaining access to their bank accounts, bodies, minds, and spirits. They set their mark and then systematically devour a soul one small compromise at a time.

While that slow progression is quite common (and well documented by those who study sociopathic/psychopathic behavior), manipulators may also lash out with an impulsive blast against someone who dares to question, challenge, or discern the abuser's true character (especially if it's done in public to tap into the abuser's core of shame). This is typically known as "narcissistic rage," and it's the kind of rant we've witnessed recently when R. Kelly exploded during an interview with Gayle King. In that moment of rage, manipulators feel ALL POWERFUL, especially if their target becomes emotional, silenced, or afraid. And that's a drug these abusers learn to crave.

Another reason manipulators attack a target is jealousy: they aim bitterly at anyone who threatens the abuser's frail ego simply by existing in the world as a stronger, smarter, kinder, happier, or more successful person, group, or system. This is the age-old story of jealousy taken to an extreme. As Taylor Swift sings, "People throw rocks at things that shine."

In this last situation, imagine emotionally fragile children who haven't yet developed a sense of security in the world. Unlike more secure children who are willing to share, these less-secure children would rather destroy a toy than let someone else enjoy it.
If their envy becomes pathological, they'll even aim to destroy the happy child (whether over a toy, attention, approval, or just soul-deep jealousy of the one who is happier). Now picture this happening on an adult level. This may involve a competitive co-worker who sabotages someone's career, a jealous ex who stalks the new lover, a narcissistic partner who sets out to destroy someone's entire life, or a psychopathic serial killer who preys upon the innocent simply for the thrill of taking total power over someone who has what the killer wants — innocence, love, happiness, friendships, trust.

Beware the Smear Campaign and the False Reality

Once manipulators choose a target, they will intentionally erode the target's reputation by labeling that target as unsafe, crazy, wacko, psycho, sick, unfit, a liar, a thief, a cheater, a criminal… anything to make people doubt the innocence, competence, stability, or sanity of that target. They may even replace the target's name with a nickname based on this false persona and repeat that accusation constantly until bystanders begin to associate the target with it. Abusers do this by finding one small mistake or flaw, exploiting that weakness, and eroding the credibility of the person or system by exaggerating and obsessively focusing on that one weak point. If manipulators can convince enablers to doubt the truth for even a second… they can reframe reality and convince them to believe wildly distorted claims or "alternative facts." Taking this as far as they can, manipulators will push this "spin" by launching an all-out smear campaign, causing some people (enablers) to doubt or distance themselves from the innocent scapegoat.
This is what forms a "system of abuse." Consider recent situations involving Harvey Weinstein, Jeffrey Epstein, and Larry Nassar. None of those predators could have gotten away with their horrifically violent abuses without an entire system of enablers surrounding them.

Many manipulators take it even further, brainwashing other bystanders to join in on the abuse. These co-abusers are called "flying monkeys," a tie to The Wizard of Oz, in which the Wicked Witch of the West holds court in her castle while sending her troops out to do the dirty work. With enough flying monkeys, some manipulators choose to step back and keep their hands clean while pulling the puppet strings of the people around them. As an example, take a look at American History X, a 1998 American crime drama written by David McKenna. In that film, the leader of a white supremacist group sits up in his seedy office and fuels a circle of manipulative minds. He labels innocent people as the enemy and then sets his flying monkeys loose to attack. Is he committing violent acts? No. But is he a puppet master pulling the strings without any regard for how those actions will impact the young extremists in his clutch (much less their targets)? Absolutely.

Another juvenile behavior manipulators may revert to is backstabbing to triangulate their targets. In this case, they pit two targets against one another just to watch them devour each other. Consider The Girl on the Train, the bestselling novel by Paula Hawkins. In this story, the manipulative husband plays his ex-wife against his current wife (and then, just for kicks, adds in a new partner). His goal is to scapegoat his ex, but he's already starting in on his current wife as his new target. He's so good at his game, he convinces everyone his ex is not only insane but a murderer… he even convinces her of that false reality. And that's the real danger of these manipulators. They aim to make their target lose complete grip of the truth.
Triangulation may involve two people, two families, two companies, two rivals, or two groups of people (races, religions, classes, tribes, nations). A cheating husband may bring his lover to a dinner hosted by his wife. A manipulative ex may triangulate the kids as weapons against their innocent parent. A corporate executive may pit two competitors against each other to weaken them before pouncing with a buyout. A tyrant may fuel hate between two factions, encouraging them to tear each other to bits so they're too distracted to notice the destructive things he's doing right in front of them all. By trying to divide and then conquer, these abusers play a sick game from the start.

Peek Behind the Curtain

When the target starts to question the truth, manipulators will project their own unhealthy behaviors onto the target and convince enablers that the innocent scapegoat is the one guilty of the very crimes they're committing. The abusers will also play the victim, gathering empathy from those who can't see behind their masks. For example, if a husband is having an affair, he'll accuse his wife of cheating. If a con is stealing from the company, she'll accuse an innocent coworker of stealing. If we really want to know the sins of master manipulators, we can listen to what they accuse others of doing, and we'll know exactly what they're up to.

Discernment is Key

Once we understand clearly how this twisted con game works, it's very easy to identify the "players" of the world. So how do we maintain our own power? By choosing not to become an abuser, an enabler, or a flying monkey.
And while we can't always avoid becoming a target (no one is immune), we don't have to lower ourselves to their standards when we do find ourselves in that terrible position.

Consider The Truman Show, a 1998 American comedy-drama directed by Peter Weir and written by Andrew Niccol. In this story, Truman was groomed from birth to believe in a false reality. While he thought his life was real, he was actually being played, terribly, by everyone he loved and trusted. None of the abuse could have happened if the producer of the reality show that exploited him hadn't been supported by an entire cast of enablers. As Truman started to question the truth, the producer upped his abuse. Once he could no longer manipulate Truman completely, he set out to destroy the innocent target, and the entire system rallied behind him because it benefitted them to keep Truman in the show.

This story serves as a strong example of how difficult it is to realize we're being manipulated, break free of the lies, reclaim the truth, and fight our way to freedom, especially when an entire system is trying to convince us we're the one who is wrong and the manipulator is right. It takes tremendous strength, clarity, resilience, and spiritual discernment to stay true to ourselves in that kind of storm. The key is to keep our heart open, our mind clear, our feet steady, and our soul anchored to a greater, more powerful, more sacred source of positive energy so we can discern the truth without being blown to bits by the dark, negative vortex of destruction.
https://medium.com/invisible-illness/how-to-spot-master-manipulators-and-avoid-being-played-fdb87809d969
['Julie Cantrell']
2020-07-01 23:45:02.707000+00:00
['Relationships', 'Advice', 'Life Lessons', 'Mental Health', 'Psychology']
I Don’t Get Too Attached to The Success of Any One Piece
I Don't Get Too Attached to The Success of Any One Piece

You can't control your audience. So move on to the next one instead of fixating.

Photo by Patrick Tomasso on Unsplash

Other writers have a lot of preferences, and I might just be weird, but a rule I have for myself is to not get too attached to the success of any one piece. I put as much of myself into my writing and my personal pieces as anyone else. I put a lot of thought, effort, editing, and craft into my pieces. I have articles that I write very quickly, and then articles I take hours if not days to write.

What I have learned is that once I have hit the "publish" button and released a piece into the stratosphere, I have very little control over how my writing is received. I have very little control over whether people read it in the first place. Sure, I can spam the messages of my friends and writing groups. But that doesn't help, because it's just going to damage my credibility as a friend, and I'm going to be that guy who spams other people with his blog links.

I wasn't always good at this, but now, once I release a piece of my writing into the world, I let it go. I stop being attached. How it is received and the attention it garners stop being relevant, because I've become laser-focused on something I have much greater control over: the next one.

Sure, I take breaks. I work. I live my life. I read my Bible and pray, but I don't focus on how my piece does after it's published. Do I give in to the human temptation to check? Yes, I absolutely do, like anyone else. But the joy is in the writing. Where I make my success and focus on my craft is in the writing. The next piece beckons to me now more than my attachment to my latest piece of writing, which I hope gets read but am sometimes disappointed that it doesn't.

Meredith Arthur asked me for her podcast today whether I had a piece I specifically wanted to share with the audience that I was particularly proud of.
I had a lot of trouble answering that question because I don't get too attached to any one piece, and I expressed that moving "onto the next one" has worked wonders for not only the quantity but the quality of my writing. You don't become a better writer by begging other people to read your writing. You become a better writer by living more, reading more, and writing more. It's important to me now to never stay stagnant — and fixating on and being too attached to a single piece's success is being stagnant to me. I used to do it a lot. I would refresh my pieces every 10 seconds to see if there was a new read — only to, more often than not, be disappointed.

You don't control whether people read your work. Either they like it or they don't. Either they click or they don't. What you can control is what you write and how you write. You can't force views and reads, but you can always write.

I have a lot of pieces that are duds, and some pieces that are hits. If I were to tell you I had a magic formula, I would be lying — some pieces take 30 minutes to write and go viral, while others took days to write and barely got anyone to read them. It happens. It doesn't really make sense, and there's no rhyme or reason. I'm prouder of the ones that took days to write, even if no one read them. I'm proudest of the pieces I wrote about my family and faith, even if no one reads them. Those are the pieces closest to home that I will always cherish.

But I would be lying if I said the reception of my pieces didn't matter to me. At the same time, the reception isn't something I can control. To me, fixating on and being attached to any one piece when you're a blogger is like when basketball or soccer players argue with referees. What good is there in arguing with a referee? What can it possibly achieve? And why aren't those players focusing on the next play or getting back on defense? So focus on the next play. Get back on defense.
Stop arguing with the referees who made their call on the reception of your piece, whether you think it was deserved or not. Basketball players get a lot of unfair calls in any game — but they get unfair calls in their favor as often as they get unfair calls against them. Players get away with overt fouls all the time; it's just a matter of whether you're the beneficiary or the one who lost out. Like foul calls in basketball, reception as a writer is not always fair and is prone to human error. So focus on the whole game instead of getting over-emotional about any one play, and in our case, any one piece.

I don't get over-attached to the success of any one piece, which doesn't mean a piece isn't special to me, but that its success is defined by how I perceive it, not by the audience. You can control yourself. You can't control your audience. So move on to the next one instead of fixating.
https://medium.com/the-partnered-pen/i-dont-get-too-attached-to-the-success-of-any-one-piece-3016ea8d88f1
['Ryan Fan']
2020-07-09 15:11:07.371000+00:00
['Sports', 'Freelancing', 'Writing', 'Marketing', 'Self']
I Sucked at Writing So I Spent a Month Writing for 4 Hours a Day
I Sucked at Writing So I Spent a Month Writing for 4 Hours a Day

How a couple of bad articles and a lucky break got my sub-par writing 13,000 views

I sucked at writing

I'm a great verbal communicator. Give me a podium and I can deliver an enthralling speech, but give me a pen and I'm useless. Writing was always my weak point, and after 24 years of feeling like I'm awful at it, I decided it was time for a change.

"If you can write one short story a week — it doesn't matter what the quality is to start — but at least you're practicing and at the end of the year you have 52 short stories and I defy you to write 52 bad ones. It can't be done." — Ray Bradbury

The best way to get better at something is to do it, so I made a month-long writing course for myself. I would spend 4 hours a day for a month learning how to write better.

What does success look like?

The first step was to figure out how I would measure success. What would a successful outcome look like? That's where Medium came in. I would judge whether I was getting better or worse based on the number of views and the quality of the comments. Along with the metrics, I gave myself 3 goals and 2 stretch goals.

Goals:
Get 10 views
Get 1 positive comment
Write an article a week

Stretch Goals:
Get published by a publication on Medium
Get 100 views

These might seem like very modest goals, but at the time I couldn't believe anyone would want to read something I wrote. In all honesty, it's still a weird concept to me. Now onto the content.

The Course Content: Using Medium to my advantage

Congerdesign — Pixabay.com

Here's how I structured my 4 hours:

Hour 1: Scour Medium for well-written articles
Read popular stories on Medium
Comment on at least 3 stories
Make notes about what made these stories great

Medium is filled with great writers, and I wanted to learn from them. I reasoned that reading and analyzing why I thought a piece was good would be a solid way to learn.
I gave myself the commenting rule because I wanted to become part of the community. If I want other people to read and comment on my work, I should do the same for them.

Hour 2: Search Google for any questions I have about writing

At the end of every four-hour session, I would write down the roadblocks I hit. Then I would use this time the following day to figure out how to get around them.

Hours 3 and 4: Write, Write, Write, and Write

"You only learn to be a better writer by actually writing" — Doris Lessing

My First Article

The beginning of my course was tough. The first two hours were fine, but when it came to actually writing, I hit a wall. The self-doubt started to creep in. Is this good? Do I want other people to read this? You're not actually getting better. I can't believe you're spending so much time on this. Why are you writing so slowly?

When I read a great article and the author said they wrote it in two hours, I felt awful. At this point, I had spent ten hours on my first article and still wasn't done. How did people write so fast and so well?

Learning from the masters

After a couple of sessions of hitting my head against the wall, I needed to find a new strategy. I started to research how other writers succeeded where I was failing. I found a great post by Mayo Oshin titled "The Daily Routine of 20 Famous Writers (and How You Can Use Them to Succeed)." It had a lot of great tips, but the biggest takeaway was that writing is hard, and that's OK. If these famous authors struggle, then it's normal for me to struggle too. Other than that, I picked up a couple of tricks that increased my writing speed.

1) Write an outline

I would spend so much time not knowing where to go with my writing. Spending an hour or so creating an outline helped me more than double my writing speed.

2) Put the phone in airplane mode

I found myself looking at my phone to alleviate the pain of writing. Putting it on airplane mode helped me quell that urge and focus on writing.
3) Just write

This is the hardest of the three to do, but by far the most useful. I realized I was spending too much time trying to compose sentences in my head. I had to just write them down. I was afraid they would sound awful, but they usually turned out well. I can always go back and edit, but I can't edit what's not there.

Submitting to a Publication

At the end of the first week, I finished my article but wasn't sure what to do with it. I wasn't sure if it was good or not. I was afraid of being judged for producing sub-par work. At that moment a guiding axiom popped into my head: life begins at the edge of your comfort zone. So I said, screw it, let me take the plunge and get people to view this thing.

Finding Publications

Luckily, there's a simple way to get people to view your work on Medium: get published in a publication. Finding publications to submit to turned out to be a nightmare. Medium doesn't list them anywhere. Eventually, I found a list of the top 100 publications and went about searching for ones that might accept my work.

Tip: Some publications hide their submission requirements, but you can find them by searching "submission" on their homepages.

After going through the list, I found 10 or so publications that were looking for content similar to what I was writing. I followed their submission guidelines and crossed my fingers. After three days, I had been rejected by two, and the other eight never responded, so I self-published it. The article wound up getting 5 views. I felt so happy people were reading my work! It's the small things in life, right? :)

Article number 2

The second article only took me two days to write. I wrote it about a lesson I learned while interviewing people on their deathbeds for a documentary. It was a topic I was passionate about, and the words flowed like water.
(If you're curious about the documentary or the article, you can check out the trailer here and the article here.)

I tried submitting to more publications, but the same thing happened: a couple of rejections and a lot of unanswered inquiries. So I self-published this one too, but then something big happened. After a day of it being on Medium, I had 15 views and one comment! The comment was from a writer whose work I had been commenting on since the start of this course. It looked like being a good community member was starting to pay off. Goals 1 and 2 were accomplished, and I wasn't even halfway through the course. It was time to shoot for the stars!

Article 3

I was feeling good and thought I could hit my stretch goals of 100 views and getting published. So I came up with a plan.

Step 1: Pick an interesting topic

I thought back over my past year and searched for interesting things I'd done or had expertise in. I decided to write about an experience I had the previous summer: I made a cryptocurrency course for myself and invested any extra money I earned that summer into the cryptocurrency market to incentivize myself to keep learning about it. I thought cryptocurrencies were interesting, so I figured other people would too. Now I had a topic, but how would I turn it into a good story?

Step 2: Write a quality story about the interesting topic

Again I turned to the masters, and after a little searching I learned about the hero's journey. What do Star Wars, The 4-Hour Workweek, and almost every Disney and Pixar movie have in common? They all follow the hero's journey.

commons.wikimedia.org

A man named Joseph Campbell researched hundreds of popular myths and legends and found a common story arc. He broke the arc down into several steps, and this became the hero's journey. If you want to learn more about the hero's journey, here's a great post by Chad Grills about it.

It took me some time, but eventually I figured out a way to adapt the hero's journey to my story.
I would write my story like this:

1. How I got inspired to start my cryptocurrency journey
2. The start of the journey
3. The first disaster
4. How I overcame it
5. A second, bigger disaster
6. How I overcame it
7. The results

Step 3: Edit

It took me a week to write the third article, but it still wasn’t close to being done. I was happy with the article, but I knew it could be better. I had to edit it.

“The first draft of anything is shit” — Ernest Hemingway

This meant I would be behind schedule. I wanted to write four articles during my course; if I cleaned this one up, I wouldn’t have time to write the fourth. I felt like this article had real potential, so I adjusted my goals. I took out the “write 4 articles” goal but made my two stretch goals of getting a hundred views and getting published mandatory.

Editing tools

The first thing I learned about editing was the great free resources available. Running my work through Grammarly and the Hemingway App helped me tighten up my sentences and fix most of the punctuation and grammar issues. Both of these tools are absolute lifesavers.

Grammarly: checks for grammar, punctuation, and spelling.
The Hemingway App: highlights lengthy sentences, identifies the use of passive voice, and gives you the reading level of your writing.

Asking friends for help

After using the Hemingway App and Grammarly, I thought the piece read well, but what about others? I thought about it and realized I had a couple of friends who like to write, so I sent my article over to them and asked for some pointers. The biggest critiques I got were a lack of pictures, a lack of bolded words, and paragraphs that were too long. I added some pictures, bolded a couple of words, and split up the paragraphs. Now it was time to shine: getting published, here I come.

Getting into a publication

Feeling confident, I submitted to three of the largest publications on Medium.
The Mission, The Startup, and Better Humans. After a day of waiting I got a reply from Better Humans. It said:

“Passing on this just because it doesn’t fit our topics”

I was a little crushed, but I told myself, “hey, at least you got a reply.” I waited another day but didn’t receive any replies, so I decided to self-publish it. Then an hour later, I received an email from The Startup saying:

“Hey Joe, the article looks great we’d like to publish it”

I couldn’t believe it. I got into one of the largest publications on Medium. People wanted to read my writing. “How I turned my summer into a cryptocurrency investing course” was getting published! Things started to move fast after that. I spent the first 5 hours replying to messages and comments. Friends were saying congratulations and strangers were asking me for cryptocurrency advice. I didn’t know how to respond. The comments and messages kept flooding in, and I felt the need to respond to everyone. These people took the time to read my work; I should honor that. I wound up having to spend an extra two to four hours a day responding to everyone. After a week the dust started to settle, the comments stopped, and the number of daily views dwindled. I could finally breathe again. Here are the results: my first month on Medium yielded just barely over 13,000 views and lots of writing experience. Stretch goals of getting 100 views and getting published: check.

End of the course

Writing the cryptocurrency article and replying to the messages I got about it drained me. So I ended the writing course early and took some time to reflect on the experience. Below is a list of some of my biggest takeaways.

Takeaways

1) Why are you writing: Writing is tough; if you don’t give yourself a good reason to stick with it, you will give up.

2) Become part of the community: I learned so much from reading and commenting on Medium articles. Plus, it made me feel like I was part of the Medium family.
I started rooting for authors, and when they wrote a great article it gave me a boost of motivation to finish mine.

3) Step out of your comfort zone: Don’t be afraid to be judged on your work. Do your best, get your work out there, learn from the experience, and repeat.

4) Make an outline: I hated doing this in English class, but man does it work wonders. Lay out a plan for your writing and then fill it in.

5) The hero’s journey: Use the hero’s journey or a variant of it and you’ll have a good story.

6) Grammarly and the Hemingway App: Run everything you write through these two apps!

7) Ask friends: Friends are the best. We all have a couple who like to write. Ask them to review your work and for any tips they have. I’m sure they will be more than willing to help.

Did the experience make me a better writer? I’m definitely better than before I started the course, but I still don’t think I’m that good. I got lucky by having an experience in an area a lot of people were interested in. Whenever I sit down to write it still feels like I’m banging my head against the wall, but now I’ve grown a little more used to the pain.
https://medium.com/blankpage/i-sucked-at-writing-so-i-spent-a-month-writing-for-4-hours-a-day-dba37aa85585
['Joe Robbins']
2020-12-21 15:57:25.890000+00:00
['Writing', 'Life Lessons', 'Productivity', 'Learning', 'Personal Development']
BYJU’s: The Unordinary Story of an Indian Decacorn Powerhouse
“He has an uncanny ability to teach you very difficult concepts with lucid visuals that help you understand everything from a first principles perspective.” — early student and friend of Byju Raveendran

By 2010, over 1,200 aspiring MBA applicants fought their way into his popular course. He sold out arenas so he could give his famous live lectures. Let me repeat that. Byju sold out arenas for a test-prep class. Rockstar is an understatement. (Byju delivering his insanely popular test-prep course to a packed arena. Image via Factor Daily.) He began broadcasting classes online to meet demand. A light-bulb moment occurred for Byju when he realized that many of his pupils (college graduates) were struggling at a foundational level, especially in math and science. Students didn’t have a core understanding of logic; their entire schooling was focused on grades, not learning. How many resources can you pour into remodeling a home before you realize the foundation was faulty and flawed in the first place? Boom. Pain point uncovered.

“I was a teacher by choice and an entrepreneur by chance.” — Byju Raveendran

In 2011, Think and Learn (the parent company behind the BYJU’s learning app) was launched. Byju knew that if he could scrape together the funds, he could create, package, and deliver the best educational content for students across India. His energy and focus shifted from exclusively serving aspiring graduate school applicants to the larger untapped market: 250 million+ K-12 students in India. Successfully selling to and retaining this specific user segment would require a product and platform that would appeal to the parents of the students, who prioritize quality education for their children over anything else. This would mean that BYJU’s would have to beat out incumbents in the tutoring space that many traditional households already pay for.
Core Competency

Most successful EdTech platforms, revenue-wise, have won by integrating themselves into physical classroom environments as a digital tool or platform. BYJU’s, however, disrupts the outside-the-classroom digital experience and makes substantial revenue. By investing upfront in content development and curation focused on the core subjects of math, science, and English language arts, BYJU’s has developed a massively entertaining and engaging library of evergreen lessons in fundamental subjects for every student in the K-12 lifecycle and for undergraduate students aiming to attend India’s premier graduate schools. Rather than just offering a digital version of antiquated learning methods and styles, the BYJU’s strategy is to embrace everything that makes learning fun and relatable. For example, students learn about gravity via a neatly assembled graphic video of the earth and the moon’s orbit, with an instructor walking quickly but articulately through an example of the moon’s relationship with the earth. This creates the sought-after balance of high-level concept, example, and granular solution, all explained and sandwiched between learning materials and practice problems. (“Why doesn’t the moon fall on earth?” is a classic, well-executed BYJU’s lesson. Image from BYJU’s YouTube feed.) Expert teachers are hired to do what they do best: simplify the complicated and make the boring fun. These teachers are paired with graphic designers and videographers, who come together to create 5-15 minute videos. In a way, BYJU’s is more of a production company that happens to distribute educational content for K-12 and higher ed. BYJU’s has made tons of content free, showing the value-add for students and parents before they’re asked to make a purchasing decision.
Streamlined Product-Market Fit

India has an exhaustive and complicated standardized exam structure for tenth grade, college entrance, and graduate entrance exams, and BYJU’s product bundles serve exactly those three categories. All courses fall under three umbrellas, making purchase decisions straightforward in an otherwise complex testing environment. (Image via BYJU’s homepage.) Take the example of a middle schooler coming closer and closer to their first major exam in India: the 10th-grade ‘board’ exam, which determines the concentration (STEM or commerce) of their remaining high school classes. BYJU’s would cater its 7th-grade materials to work on fundamental math skills in algebra, geometry, or some combination of the basics. The expansive library of video lessons usually features over 50 options of 5-15 minute videos in each specific subject. From there, a baseline is established from the student’s ability to complete problem sets, the number of video rewatches, and mini-quizzes that constantly reassess progress. This creates what BYJU’s calls a personalized learning journey: a mapped syllabus that builds up weaknesses and periodically rechecks strengths. With involved examples and in-depth analysis based on three core categories (concepts, application, and memory) acting as the foundation, BYJU’s assigns scores so students are constantly aware of what needs work. (A student may watch an interactive video with in-video checkups to ensure attention. Image from BYJU’s website.) The goal for our example 7th grader focusing on STEM would be to max out activities, complete all videos, and earn as close to 100% as possible in a science or math class. Daily organic downloads of BYJU’s are nearing 70,000 according to their COO, and customer acquisition cost has fallen by 25% with improved brand awareness and consumer trust across all three of their Indian segments.
Most recent reports suggest $370M in revenue for fiscal year 2019, with over 30M users having downloaded the app and over 2.5M paying users. The company’s profitability allows it to reinvest its net revenue into content and increase quality control across the board in an effort to acquire more users. All BYJU’s needs is for potential new users to log in to the platform, and the content will do the rest.

Aggressive Go-To-Market Sales Strategy

Intrigued by the proliferation of BYJU’s across India’s market, I learned firsthand from a few BYJU’s employees how their sales strategy operates. Quite the game plan. Sales at BYJU’s means chasing massive quotas, aggressive door-to-door knocking, and making customer meetings personal to every household. India’s constitution officially recognizes only 22 languages, and another 6 are regarded as ‘classical’ languages. But as with many conflicting truths in India, over 780 languages are loosely recognized within state boundaries and regions. This leads to a truly diverse, complicated, and heterogeneously carved-up consumer market. BYJU’s smartly attacks this problem head-on by creating localized sales teams in each region who can speak to parents and students alike in their native tongue, leading to more trusting relationships. Sales managers are tasked with inspiring sales reps to customize conversations around the economic outcomes for children if they don’t receive proper educational resources. With rural parts of India (60% of the population) mostly consisting of uneducated parents, every rupee matters. BYJU’s sales reps come in and play their hand by emotionally cornering parents into purchasing BYJU’s premium app plans. A friend told me how a BYJU’s sales representative hounded his parents and guilt-tripped his younger sister into thinking the BYJU’s paid app was the only way for her to be educated.
There have been a few startling internal reports and whistle-blowers calling out ethically questionable practices on the business side. Consumers have also voiced their displeasure on sites like these. Thus far, it hasn’t caused a crazy stir. Worth keeping an eye on as BYJU’s grows its workforce and customer markets.

Funded with cash, cash, and more cash

Much like the fastest-growing Silicon Valley startups, BYJU’s has caught the eyes of elite venture capital investors across the globe. Funding rounds have come from:

Tencent Holdings — the premier Chinese investing conglomerate w/ $137B in assets & notable stakes in WeChat, Riot Games, Snap, and Tesla.
Sequoia Capital — Silicon Valley’s biggest brand name w/ investments in Apple, PayPal, and WhatsApp; a combined public stock market value of $1.4T, or 22% of the Nasdaq.
Qatar Investment Authority — Qatar’s sovereign wealth fund w/ over $335B in assets under management.
Chan Zuckerberg Initiative — a fund led by Zuckerberg and his wife, Chan, dedicating resources to education, justice, and science companies.
Owl Ventures — one of the world’s biggest funds focused exclusively on EdTech, w/ investments in Quizlet, Remind, and SV Academy.

These are just the flashiest names and biggest firms from America, China, and the Middle East. This is the type of explosive firepower behind BYJU’s. Earlier this year, legendary American investor Mary Meeker led another round of $130M for BYJU’s, driving the valuation to $10.5B.
Undoubtedly, every single one of these firms is chasing the first true EdTech company to reach such great product-market traction. With the backing and strategic reach of these powerful firms, BYJU’s is formidable in its quest for international access. Cash is the lifeblood of any startup, and BYJU’s has a rich supply and then some at its disposal. By creating a FOMO effect for every reputable venture capital fund on the globe, it has built a trusting, mutual relationship: showing revenue growth and roping in more cash to expand its ambitions.

Marketing and Advertising Spend

With tons of cash on hand, BYJU’s has left no stone unturned in keeping the brand top of mind for its massive core Indian market. Superstar Bollywood actor and global icon Shah Rukh Khan has become the company’s recognizable face, broadcast to homes across India. (Image via Econ Time India.) Shah Rukh Khan is a household name not just for 1.4B Indians but for the estimated 28 million Indians living outside the country. His mere association with BYJU’s has done the company wonders by validating the extent of its prowess as a brand. In a country like India, where entertainment, religion, and cricket rule, BYJU’s has secured the support of one of the country’s largest entertainment names. (India’s star cricketer, sporting the prominent BYJU’s logo at the national jersey unveiling in 2019. Image via Cricket Times.) Virat Kohli, the LeBron James/Lionel Messi of Indian sports, wears the BYJU’s jersey in every Indian national match, which conservatively reaches 400M viewers across the globe. Again, BYJU’s is willing and able, thanks to strong cash reserves, to create prominent brand awareness. This is something most startups have never been able to do: have mainstream name recognition while possessing limited years on the open market as a consumer product. So far, the results have been fantastic.
Advertising-wise, BYJU’s is constantly pushing its name via banner ads on YouTube. Marketing spend has reached mammoth proportions of $25M+, massively outpacing other EdTech firms and digital-first companies, period. For a country enamored with celebrities, BYJU’s has done everything possible to remain top of mind and recognizable.

Acquisitions Galore

This is an unconventional company when you consider the steady pace of massive acquisitions BYJU’s has made domestically and abroad. The signaling from BYJU’s end is simple: if your company does something better than we do, there’s a big, fat target on your back, either as competition or as a future part of us. So far, there have been 5 acquisitions on BYJU’s end, totaling over $650M. BYJU’s recently closed the acquisition of WhiteHat Jr. for $300M. WhiteHat Jr. is an online EdTech firm in Mumbai that specializes in offering coding lessons, activities, and certificates to the under-22 market. Last winter, Osmo was bought out for $120M by BYJU’s to expand its global footprint and tap into the niche physical-plus-digital education market for pre-schoolers. From entrance-exam students to pre-schoolers, BYJU’s has an end-to-end view into student behavior and a deep understanding of what the market wants. Market penetration strategy 101: start niche, build vertically, and expand across different demographics as the product’s value proposition is proven.

Global Expansion

With the purchase of Osmo, BYJU’s took the first crucial step into the U.S. market, getting an on-the-ground view of what digital education customers are like in America. Beyond that, BYJU’s made waves in June of 2019 by announcing a partnership with Disney to build learning content integrating Disney’s most iconic characters, forming a stronger footprint in India for younger audiences. Prediction: this is BYJU’s chance to prove to Disney that digital education packaged in entertaining ways is the future.
Securing Disney’s hand as its first EdTech investment to date is a sure-fire way to create goodwill and trust, and a roadmap for pivoting towards western markets where Disney already captivates younger audiences at a reliable clip. BYJU’s has set its sights on the U.S., U.K., Indonesia, and Nepal as its next market expansion opportunities. Again, it is hedging its bets: the safer consumer markets that demand higher investment, like America or Great Britain, versus the less capital-intensive investments with potentially higher ROI if traction is gained in an Indonesia or a Nepal. Risk mitigation is key. (Popular favorites like Cars and Toy Story will be targeted at grade 1-3 students in India. Image from BYJU’s promotional ad.)

Will anything stop BYJU’s?

The trajectory of BYJU’s success has surprised everyone but its visionary founder and core team. To them, improving learning outcomes and religiously focusing on concept mastery over black-and-white problem-solving drives all their product iteration. Byju Raveendran’s relentless motivation to improve education across India and the globe gives him the ideological jet fuel to transcend all EdTech companies in the quest to change education. Becoming an iconic company, for him, is a byproduct of changing the lives of a billion learners. Carrying out and embodying this strong ‘why’ will be the key for BYJU’s.
https://medium.com/age-of-awareness/byjus-the-unordinary-story-of-an-indian-decacorn-powerhouse-ba6fcb82a65
['Vinit Shah']
2020-09-17 22:04:06.334000+00:00
['Innovation', 'India', 'Entrepreneurship', 'Education Technology', 'Startup']
The Truth behind the Story
Brand stories come to life, sometimes in a good way and sometimes in a bad way. In brand storytelling, I say it’s better to be safe than sorry and tell a fictional story. Yes, if a brand tells a story and doesn’t state whether the story is real, it can get some publicity. The whole “bad publicity is better than no publicity” approach can attract attention but will leave consumers doubting your brand. I’d prefer to stay away from that and tell a great story. Making stories relatable and personable is key to connecting with the audience, and I believe a brand can do so without using a real person. Plus, this gives the brand the chance to create its ideal consumer and inspire people to “live like the brand.” It is very important for the audience to be able to tell whether a story is real. Certain stories can stir an audience’s emotions, and it should be an ethical responsibility for brands to say whether a story is real or fake. If not, it can cause problems later on, especially when there is no reason to lie. For example, the clothing company Hollister created a fictitious brand story. The BBC wrote in 2009: “US fashion brand Hollister uses logos and labels bearing the date 1922, but the company was only founded in 2000.” The clothing store would have still been well known without the fictitious branding. Sometimes fabricated stories can work to attract attention and shock the audience. The company Plan Norway created a fabricated story about a 12-year-old girl named Thea and how she was getting married. The blog showed the bride-to-be’s whole preparation process, raising awareness about young brides. The Huffington Post wrote in 2014: “The campaign was intended to inspire people to get involved in stopping child marriage on a global scale by sponsoring girls in developing countries. Plan International’s sponsorship program connects donors with children in need of aid.
In exchange for monthly contributions toward ongoing Plan projects in the child’s community, the sponsor receives updates and letters from the child.” The Huffington Post was able to interview Olaf Thommessen, National Director of Plan Norway. Thommessen explained that “Plan Norway’s goal with the Thea project was to mobilize Norwegians to stop Thea’s wedding before the little girl made it to the altar on Oct. 11. The plan worked. Concerned Norwegians reportedly called the police to alert them to Thea’s plight.” Even though the company didn’t mention right away that the story was fake, it wanted its audience to do something. The story was created as a call to action, which I can respect as a smart marketing strategy. They made the girl seem real to raise awareness and finally revealed that she was a fictitious character.
https://medium.com/once-upon-a-brand/the-truth-behind-the-story-e74c0d77517
['Danielle Darden']
2015-09-14 08:53:32.843000+00:00
['Brand Story', 'Marketing', 'Storytelling']
React Sans JSX
Photo by Caspar Camille Rubin on Unsplash One of the best things to happen to JavaScript would have to be Babel and the process by which one can transpile from one version of JavaScript to another. Not only has this accelerated the adoption of modern JavaScript, but it has also opened the doors for other important inventions that improve our JavaScript experience. A wonderful example of this is JSX. JSX is an XML-like syntax that allows one to express component rendering in a way that is similar to HTML (even though it is not HTML) and therefore familiar. Though it is used in other projects, JSX is primarily associated with React. It is how components are built. In fact, it is so common that it might be surprising to some that JSX is used outside of React at all. JSX is not part of ECMAScript (for now), and sending raw JSX to your browser would not work very well. To make it work, one depends on a build process to convert JSX into real JavaScript. The most common build process typically uses Babel, Webpack, TypeScript, or some combination of the three. Given all that, what if I told you that you can write React without a build process, which means no Babel to transpile JSX into regular JavaScript? React has official docs on using React without JSX, but one may ask: why would you do that? There are, in my opinion, two obvious reasons not to use JSX. One is that you have a mostly static web page and only need React to control a widget or a section of the page that needs functionality. Bringing in a whole build process can be overkill for something that simple. The other reason is that it is good to understand what is happening when your code is transpiled from JSX. The more you understand what’s happening under the hood, the better you are at writing and debugging your application.
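To make "what the build step produces" concrete, here is a sketch of how a small piece of JSX maps onto nested createElement calls. The createElement function below is a stand-in that only mimics the shape of the element objects React produces (a type plus props with children), so the snippet runs without React installed; in a real page you would call React.createElement from the React script itself.

```javascript
// JSX like <div className="app"><h1>Hello, world</h1></div>
// is transpiled into nested createElement calls.
// Stand-in that mimics the shape of React's element objects,
// purely for illustration (not React's actual implementation).
function createElement(type, props, ...children) {
  return { type, props: { ...props, children } };
}

// What Babel would roughly emit for the JSX above:
const element = createElement(
  'div',
  { className: 'app' },
  createElement('h1', null, 'Hello, world')
);

console.log(element.type);            // 'div'
console.log(element.props.className); // 'app'
```

The takeaway is that a JSX tree is just a tree of function calls: once you see that, writing the calls by hand for a small widget is entirely manageable.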
https://medium.com/the-non-traditional-developer/react-sans-jsx-37ae49b16e54
['Justin Travis Waith-Mair']
2019-07-12 19:29:35.689000+00:00
['React', 'Software Engineering', 'Jsx', 'Babel', 'Learning To Code']
A Tale of a Freshly Baked Man’oushe
Eating a man’oushe during recess every day comprised nothing more than engaging in a monotonous routine that satiated his hunger. It also afforded him a good excuse not to talk while pretending to socialize with the rest of his classmates. Wissam’s daily ritual consisted of spending 1,500 Lebanese Pounds at the school cafeteria (a small kiosk, really), where he had to unleash his WWE skills in order not to come out empty-handed. The process was akin to a Herculean feat. Daily, the manager of the cafeteria stocked up on a fixed amount of edibles, always basing his estimate on a whimsical assumption that probably depended by and large on his mood when he woke up each morning. The employees would unflinchingly inform latecomers, and those who couldn’t make their way to the front of the line, that no items were left except for a couple of boring snacks no one else had wanted. Coming out victorious entailed devising meticulous strategies to be among the first to arrive at the kiosk. Some of the maneuvers included asking for permission to go to the restroom just five minutes before the class session ended, or faking a stomach ache to be allowed to leave and buy something to eat. The latter usually prompted the teacher to urge the student to accompany their food with the quintessential concoction that everyone seemed to believe could cure any ache: tea. Going back to the man’oushe: it is a Lebanese pastry similar to pizza, served at any meal during the day, though it is more common at breakfast or lunch, usually topped with cheese, thyme, minced meat, or keshek (a Lebanese dairy-based product), among a variety of other combinations. The smell of this tasty, freshly baked dough can trigger an array of senses, depending on the topping. The melting sweetened Akkaoui white cheese on top of the gently rising dough in an arched brick oven exudes a smell unlike any other aroma.
A mosaic of scents can waft out from the oven to your nostrils uninvited, making its way to the Pandora’s box of the mind where all your memories are stored. Despite having a man’oushe frequently at school, Wissam seldom went down memory lane. Not so much because of the commotion he had to go through to secure himself a tasty man’oushe as because this staple of Lebanese breakfast was not baked right there in front of him. Wissam noticed that being present during the ritual of baking was as crucial as indulging his palate with an exquisite dance of the soft dough and the mix of heavenly thyme, cheese, or keshek. However, he couldn’t explain the reason behind this gut feeling. It wasn’t something that he could articulate. Nor was he sure that others would understand what he meant either. So he spent his days pondering the renewed sense of awe and inspiration he experienced every time he frequented a bakery, but kept his thoughts to himself. He looked forward to it and impatiently waited until the next time his parents decided to have mana’ish for breakfast, generally on weekends. Parents can be quite capricious, though; at some point, weeks passed by without any baked-goods cravings. To that effect, Wissam endeavored to set out a plan to subtly plant the man’oushe seed in his parents’ minds so they would crave it more often, alas to no noticeable success. He couldn’t tell what exactly about the trip to and from the bakery evoked such ineffable, visceral feelings. It was a mundane ten-minute walk uphill, where nothing much happened. In a suburban setting, on an early Sunday morning, you could hear the chirping of birds singing to the melodies of occasional cars passing by, and dancing to the rhythm of a handful of exercising sports junkies seeking to breathe in some fresh air. But that was pretty much it. Nothing fancy.
The stillness and the slow start of the weekends made it possible for Wissam to adjust back to a slower tempo, balancing out the frenetic ebb and flow of his quotidian weekday routine. Nothing compared to the joy Wissam experienced when he was the first customer to arrive at the bakery. The noise of the dough mixer could be heard outside within a substantial radius of the place. It was like music to his ears, a warmup session before he got his hands dirty. The bakery was a place where people socialized and gossiped. Time stopped there, and everyone was friendly with everyone else. Wissam would greet the owner, make small talk, explain to him why he was fed up with school, and what he thought about the game of the night before. The owner of the bakery, Abu Tarek, engaged him in a fun discussion as he cut the dough into pieces, sheeted it, and placed the pieces on a long, rectangular, well-floured wooden board. Wissam would, later on, zone out entirely as he prepared the mana’ish, gently adding the thyme mix or the Akkaoui cheese his mom had prepared, spreading it over the dough, and dimpling it with love to make it ready to enter the oven. During this time, he would be lost to a flood of ideas that swamped his mind, things like what he wanted to become in the future, where he would be living, and the kind of lifestyle he would like to have. Little did he know that all this did not matter, a realization that would dawn on him years later. The cracking sound the wooden board made as the baker slid the dough into the oven never ceased to fill Wissam with hope. Expectantly, it seemed to him only fitting that once the dough was placed in the oven, a mixture of fragrances would emanate from it, levitate him above the ground, and take him to much better places. On his way back, carrying a tray full of freshly baked man’oushes, Wissam used to stop at a fork in the road that overlooked the sea.
He often considered taking a turn and not going back, but running away with a man’oushe was not a very smart idea. Fifteen years later, when his savings had been squandered, inflation was eating away the flesh of the people, and a major economic crisis hit the country, he came back to the same spot. The bakery was closed, the birds were gone, his bank savings were no more. All he was left with was the distinct memories of the smell of a freshly baked man’oushe.
https://mahmoud-rasmi.medium.com/a-tale-of-a-freshly-baked-manoushe-df25025dd577
['Mahmoud Rasmi']
2020-04-28 22:10:32.156000+00:00
['Fiction', 'Memories', 'Writing', 'Lebanon', 'Storytelling']
The meaning of “life” and other NLP stories
The language of thoughts (or: how to express complex concepts with simpler ones)

“The proposition is the expression of its truth-conditions.” — L. Wittgenstein, Tractatus Logico-Philosophicus (4.431)

To understand formal semantics as a discipline (and how it differs from other approaches), we need to go back to a crazy Austrian dude at the start of the 20th century. What does it mean to understand the meaning of a sentence? To understand the meaning of a sentence means understanding its truth conditions, that is, understanding what the world would look like if the sentence were true. So, to make “Lions are bigger than domestic cats” true, the world should be such that a given type of feline is bigger than another (true); to make “Lions are bigger than blue whales” true, the world should be such that a given type of feline is bigger than a given type of aquatic mammal (false). (Please note: the fact that we can establish whether the sentence is true or false has nothing to do with understanding it; everybody understands “The total number of cats in Venice on January 1st, 1517, was odd.”, but nobody knows if it’s true.) So if we buy that meaning = truth conditions, isn’t the problem solved? Actually no, since the number of possible sentences is infinite: there is no list, however big, that will give us “all the truth conditions of English”. It should never cease to amaze the reader that the following sentence — most likely written here for the first time in history — can be understood without effort:

Forty-six penguins were lost in the Sahara desert after a fortuitous escape from a genetic lab in Chad.

How does that happen? How can limited minds with limited resources understand infinitely many things? Formal semantics is like playing infinite LEGO: complex LEGOs are built using simpler ones, and simpler LEGOs are built with basic LEGO bricks; if you know how bricks can be combined and have some bricks to start with, there are countless things you can create.
Pretty much in a similar fashion, the (to-be-defined) meaning of a sentence is predictably built out of the (to-be-defined) meaning of its constituents: so if you know the meaning of penguins and Sahara, you can understand what it means for a penguin to be lost in the desert. Formal semantics is the discipline studying the instruction set by which the bricks of our language can be put together.

If all this seems pretty straightforward to humans, it will be good to examine compositionality in some well-known NLP architectures. Take for example what happens with the two sentences below and DeepMoji, a neural network that suggests emojis (the example comes from our A.I. opinion piece):

My flight is delayed.. amazing.

My flight is not delayed.. amazing.

The same emojis are suggested for the sarcastic and the normal sentence (original video here). The two sentences differ by just one word (NOT), but we know that word is “special”. The way in which not contributes to (yes!) the truth conditions of the sentences above is completely ignored by DeepMoji, which does not possess even a very elementary notion of compositionality. In other words, adding negation to a sentence does not typically “move the meaning” (however construed) by a few points on an imaginary “meaning line” (like adding “very” to “This cake is (very) good”), but completely “reverses” it. Whatever “language understanding” is embedded in DeepMoji and similar systems, we need a completely different way to represent meaning if we are to capture the not behavior above. The story of formal semantics is the story of how we can use math to make the idea of “language LEGO” more precise and tractable. Mind you, it’s not a story with a happy ending.

Semantics 101

“There is in my opinion no important theoretical difference between natural languages and the artificial languages of logicians.” — R.
Montague

A crucial thing about meaning is that there are two elements to it — recall the weird Austrian dude’s definition above: …understand what the world would look like, if the sentence were true. So there is a sentence, sure, but there is also the world: meaning, in its essence, is some kind of relation between our language and our world (technically, a plurality of worlds, but things get complicated then). Since the world is a fairly big and impractical thing to work with, we use objects from set theory as our model of the world. Before formulas and code, we’ll use this section to build our intuition.

Our first toy language L is made up of the following basic elements:

names = ['Jacopo', 'Mattia', 'Ryan', 'Ciro']
predicates = ['IsItalian', 'IsAmerican', 'IsCanadian']
connectives = ['and']
negation = ['not']

The basic elements can be combined according to the following syntactic rules:

a "name + predicate" is a formula
if A is a formula and B is a formula, "A connective B" is a formula
if A is a formula, "negation A" is a formula

which means that the following sentences are all part of L:

Jacopo IsItalian
Mattia IsAmerican and Jacopo IsItalian
not Jacopo IsItalian
...

It is now time to introduce semantics: while we may be tempted to interpret L using some background knowledge (e.g. my first name is “Jacopo”), it is absolutely crucial to remember that sentences in L have no meaning at all by themselves. Since we expect the meaning of complex things to be built up from simpler ones, we will start with the meaning of names and predicates, since “name + predicate” is the simplest sentence we need to explain. We start with a domain of discourse D, which is a set with some elements and some subsets, and we then say that:

the meaning of a name (its “denotation”) is an element of D;
the meaning of a predicate (its “extension”) is a subset of D.

D is a generic “container” for our model: it’s just a “box” with all the pieces that are needed to represent meaning in L.
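The syntactic rules above can be turned into a small recursive checker. This is a minimal sketch: the function name and the token-list encoding are our own choices, not part of any standard toolkit.

```python
# The basic elements of the toy language L, as given in the text.
NAMES = ['Jacopo', 'Mattia', 'Ryan', 'Ciro']
PREDICATES = ['IsItalian', 'IsAmerican', 'IsCanadian']
CONNECTIVES = ['and']
NEGATION = ['not']

def is_formula(tokens):
    """Return True if the list of tokens is a formula of L."""
    # rule 1: "name + predicate" is a formula
    if len(tokens) == 2 and tokens[0] in NAMES and tokens[1] in PREDICATES:
        return True
    # rule 3: "negation A" is a formula
    if tokens and tokens[0] in NEGATION:
        return is_formula(tokens[1:])
    # rule 2: "A connective B" is a formula (try every split point)
    for i, tok in enumerate(tokens):
        if tok in CONNECTIVES:
            if is_formula(tokens[:i]) and is_formula(tokens[i + 1:]):
                return True
    return False

print(is_formula('Jacopo IsItalian'.split()))                        # True
print(is_formula('Mattia IsAmerican and Jacopo IsItalian'.split()))  # True
print(is_formula('Jacopo and'.split()))                              # False
```

Trying every split point for the connective is good enough for a toy grammar; a real parser would also have to deal with ambiguity and precedence.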
If you visualize a sample D (below), it is easy to understand how we define truth conditions for “name + predicate” sentences:

if A is a "name + predicate" sentence, A is true if and only if the denotation of name is in the extension of predicate.

A sample domain for our toy language L.

So, for example:

“Jacopo IsItalian” is true if and only if the element in D representing Jacopo is a member of the set representing IsItalian;
“Jacopo IsCanadian” is true if and only if the element in D representing Jacopo is a member of the set representing IsCanadian.

As we learned, truth conditions don’t tell you what is true/false, but tell you what the world (better, your model of the world) should look like for things to be true/false. Armed with our definition, we can look again at our D and we can see that, in our case, “Jacopo IsItalian” is true and “Jacopo IsCanadian” is false.

The extension of “IsItalian” contains the denotation of “Jacopo” (in purple).

When a sentence in L is true in our set-theoretic, small world, we also say that the sentence is satisfied in the model (technically, being true for sentences is a special case of being satisfied for generic formulas). Now that we have defined truth conditions for the basic sentences, we can define truth conditions for complex sentences through basic ones:

if A is a formula and B is a formula, "A and B" is true if and only if A is true and B is true.

So, for example: “Jacopo IsItalian and Mattia IsAmerican” is true if and only if “Jacopo IsItalian” is true and “Mattia IsAmerican” is true. Since “Jacopo IsItalian” and “Mattia IsAmerican” are “name + predicate” sentences, we can now fully spell out the meaning: “Jacopo IsItalian and Mattia IsAmerican” is true if and only if the element in D representing Jacopo is a member of the set representing IsItalian, and the element in D representing Mattia is a member of the set representing IsAmerican.
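In Python, a sample D and the truth condition for “name + predicate” sentences might look like this (the numbering of the elements and the exact extensions are our own arbitrary choices, picked to agree with the figure where possible):

```python
# A sketch of the sample domain D as plain Python data: elements are
# just numbers, denotations map names to elements, extensions map
# predicates to subsets of D.

D = {1, 2, 3, 4}
DENOTATION = {'Jacopo': 1, 'Mattia': 2, 'Ryan': 3, 'Ciro': 4}
EXTENSION = {
    'IsItalian': {1},      # only Jacopo, matching the figure
    'IsAmerican': set(),   # nobody, in this particular choice of D
    'IsCanadian': {3},     # only Ryan (our arbitrary choice)
}

def is_true_atomic(name, predicate):
    # "name + predicate" is true iff the denotation of the name
    # is a member of the extension of the predicate
    return DENOTATION[name] in EXTENSION[predicate]

print(is_true_atomic('Jacopo', 'IsItalian'))   # True
print(is_true_atomic('Jacopo', 'IsCanadian'))  # False
```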
Armed with our definition, we can look at D and see that “Jacopo IsItalian and Mattia IsAmerican” is false, as “Mattia IsAmerican” is false:

The extension of “IsAmerican” does not contain the denotation of “Mattia” (in blue).

Finally, we can see in our semantics how negation is indeed a “reversing” operation:

if A is a formula, "not A" is true if and only if A is false.

“not Jacopo IsItalian” is true if and only if “Jacopo IsItalian” is false, that is, if and only if the element in D representing Jacopo is not a member of the set representing IsItalian.

Obviously, specifying truth conditions for our toy language L is not terribly useful for building HAL 9000. But even with this simple case, two things should be noted:

our semantics is fully compositional and allows us, in a finite way, to assign truth conditions to an infinite number of sentences: there is no possible sentence in L left out by our definition of meaning. More expressive languages will have (much) more complex compositional rules, but the general gist is the same: a finite set of instructions automatically generalizing to an infinite number of target sentences;
our choice of D was just one world among many possibilities: we could have chosen a world where “Mattia IsAmerican” is true, and our semantics would have been the same — remember, semantics assigns truth conditions, but it’s silent on how these conditions are actually satisfied. In real-world applications we are often interested in truth as well, so we will need to couple semantics with a “knowledge base”, i.e. specific facts about the world we care about: when modeling a real-world phenomenon, D should be construed to be “isomorphic” to it, so that “true in D” means the same as “true in the domain of interest”.
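The whole compositional semantics we just spelled out fits in a handful of lines. This is a sketch, with formulas encoded as nested tuples of our own design: ('atom', name, predicate), ('and', A, B) and ('not', A).

```python
# A compositional evaluator for L. The model (denotations and
# extensions) is passed in explicitly, to stress that the semantics is
# the same across different choices of D.

def evaluate(formula, denotation, extension):
    op = formula[0]
    if op == 'atom':
        # "name + predicate": denotation must be in the extension
        _, name, predicate = formula
        return denotation[name] in extension[predicate]
    if op == 'and':
        # "A and B": both conjuncts must be true
        _, a, b = formula
        return (evaluate(a, denotation, extension)
                and evaluate(b, denotation, extension))
    if op == 'not':
        # "not A": negation reverses the truth value
        return not evaluate(formula[1], denotation, extension)
    raise ValueError(f'unknown operator: {op}')

den = {'Jacopo': 1, 'Mattia': 2}
ext = {'IsItalian': {1}, 'IsAmerican': set()}

jacopo_it = ('atom', 'Jacopo', 'IsItalian')
mattia_us = ('atom', 'Mattia', 'IsAmerican')

print(evaluate(('and', jacopo_it, mattia_us), den, ext))  # False
print(evaluate(('not', mattia_us), den, ext))             # True
```

Note how the model is a parameter: swap in a different denotation/extension pair and the very same evaluator computes truth in that other world.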
The expert reader may well have guessed already how we can build an application of immediate value by leveraging (1) and (2) above: (1) guarantees that the knowledge encoded by the semantics generalizes well; (2) guarantees that, insofar as we chose our target domain carefully, the satisfaction algorithm will evaluate as true all and only the sentences whose truth we care about. In particular, even the simplest program in computational semantics (such as code that checks satisfaction for arbitrary formulas) can be seen as an instance of querying as inference (as championed here): given a state of the world as modeled in some useful way (e.g. a database), can a machine automatically answer our questions about the domain of interest? In the ensuing section we are going to explore a slightly more complex language in such a setting.

[Bonus technical point: if semantics does not constrain truth in any way — i.e. as far as semantics goes, a world where Jacopo IsItalian is true is just as good as one in which Jacopo IsCanadian is true — is it helpful at all by itself? Yes, very, but to know why we need to understand that the core concept of semantics is indeed entailment, i.e. studying under which conditions a sentence X is logically implied by a set of sentences Y1, Y2, … Yn. In particular, the real question semantics sets out to answer is: given a domain D, a sentence X and a sentence Y, if X is true in D, is Y necessarily true as well? Entailment is also the key concept of proof theory: in fact, we have an amazing proof of the relation between deductive systems and semantics, but this note is too small to contain it.]

“Querying as inference” using computational semantics

“In order to understand recursion, you must first understand recursion.” — My t-shirt

Let’s say the following table is a snippet from our CRM:

A sample customer table recording customers and payments.
There are a lot of interesting questions we may want to ask when looking even at a simple table like this:

Did all customers pay?
Did Bob specifically pay?
Did Bob pay five dollars?
… and so on

We can put our framework to good use, formulate a semantics for this domain and then query the system to get all the answers we need (a Python notebook sketching this use case is also included in the repo). The first step is therefore to create a language to represent our target domain, for example:

names = ['bob', 'dana', 'ada', 'colin'] + digits [0-9]
unary predicates = ['IsCustomer', 'IsPayment']
binary predicates = ['MadePayment', 'HasTotal']
quantifiers = ['all', 'some']
connectives = ['and']
negation = ['not']

Our language allows us to represent concepts like:

there is a thing in the domain of discourse which is a customer named bob
there is a thing ... X which is a customer, a thing Y which is a payment, and X made Y
there is a thing ... which is a payment and has a total of X

The second step is building a model which faithfully represents our table of interest. In other words, we need to build a domain of objects, a mapping between names and objects, and properly construed predicate extensions, such that the properties specified in the table are consistently represented in the model:

domain: [1, 2, 3, 4, 5, 6],
constants: {'bob': 1, 'dana': 2, 'ada': 3, 'colin': 4},
extensions: {
    'IsCustomer': [[1], [2], [3], [4]],
    'IsPayment': [[5], [6]],
    'MadePayment': [[1, 5], [2, 6]]
    ...
}

Once that is done, we can query the system and let the machine compute the answers automagically:

Did all customers pay?
becomes the query For each thing x, if x IsCustomer, there is a y such that y IsPayment and x MadePayment y, which is evaluated to False [Bonus technical point: for the sake of brevity, we have been skipping over the exact details involving the semantics of all, whose meaning is far more complex than that of simple names; the interested reader can explore our repo to learn all the technical steps needed to compute the meaning of all and some].

Did Bob pay?

becomes the query There is an x such that x IsPayment and bob MadePayment x, which is evaluated to True.

Did Bob pay 5 dollars?

becomes the query There is an x such that x IsPayment and bob MadePayment x and x HasTotal 5, which is evaluated to True [Bonus technical point: to quickly extend the semantics to handle number comparisons, we had to i) introduce digits in the grammar specifications and ii) modify the definition of satisfaction for atomic formulas to make sure that digits are mapped to themselves. Obviously, including numbers in full generality would require some more tricks: the very non-lazy reader is encouraged to think about how that could be done starting from the existing framework!].

Isn’t this awesome? If our model mirrors the underlying customer table, we can ask a virtually infinite number of questions and be sure to precisely compute the answers — all with a few lines of Python code.

From toy models to reality

“In theory there is no difference between theory and practice. In practice, there is.” — Y.
Berra

The “querying as inference” paradigm has all the elegance and beauty of formal logic: a small and well-understood Python script can be used to answer potentially infinite questions over a target domain. Unfortunately, it also has all the drawbacks of formal logic, which make its immediate use outside the lab not as straightforward as you would hope:

semantics as we defined it is limited to expressing somewhat basic concepts and relations, but we would love to do much more (for example, we would love to sum over numbers in our customer table above). While it’s possible to extend the framework to cover increasingly complex structures, that comes with some cost in complexity and manual effort;
model building in real use cases requires lots of hard decisions: in our toy customer table example, we were still required to make non-trivial choices on how to map table rows to a domain that can be formally queried. The more complex the use case, the harder it is for data scientists to produce a compact, complete and extensible formal domain;
querying is done in a formal language which is not exactly human friendly: the user would have to know how to translate English into some logical dialect to get the desired answers. Of course, a much better UX would be to provide users with an English search bar and put an intermediate layer translating from natural to formal languages — some of the work we have been doing at Tooso exploits a version of this idea to make querying as human friendly as possible [note for the historically inclined reader: defining semantics for a formal language F and then providing an English-to-F translation goes back to the seminal PTQ paper].
These scalability concerns and other technical reasons (such as limitations with fully general inference in first-order logic) have historically prevented computational semantics from becoming as pervasive in industry as other NLP tools. In recent times, some research programs have focused on bridging the gap between the vector-based and the set-theory-based views of meaning, in an effort to take the best of both worlds: scalability and flexibility from statistics, compositionality and structure from logic. Moreover, researchers from the probabilistic programming community are working within that framework to combine probability and compositionality to systematically account for pragmatic phenomena (see our own piece on the topic here).

At Tooso, our vision has always been to bridge the gap between humans and data. While we believe no single idea will solve the mystery of meaning, and that many pieces of the puzzle are still missing, we do think that there is no better time in the history of humanity to tackle this challenge with fresh theoretical eyes and the incredible engineering tools available today. Before we solve the language riddle in its entirety, there are a lot of use cases requiring *some* language understanding which can unlock immense tech and business value.

As a final bonus consideration, going from science to “the bigger picture”, let’s not forget that after this post we should now be ready to finally know the meaning of “life” (anecdote apparently due to famous semanticist Barbara Partee): we would have to translate it to a constant symbol life, and use an operator such as | to indicate that we are talking about its extension in our model. So, in the end, the meaning of “life” is |life. Maybe this is what the crazy Austrian dude meant when he said:

Even when all possible scientific questions have been answered, the problems of life remain completely untouched.

But this is obviously a completely different story.
https://medium.com/tooso/the-meaning-of-life-and-other-nlp-stories-8dfc6ed75b71
['Jacopo Tagliabue']
2019-02-03 01:32:15.088000+00:00
['NLP', 'Artificial Intelligence', 'Python', 'Computational Linguistics', 'Machine Learning']
Entrepreneurship Is More Than Being a Founder
It’s an approach to life

Photo by lucas clarysse on Unsplash

The word entrepreneurship is drastically misunderstood. It is often thought that entrepreneurs are superior or a higher breed of human being. They are not. Entrepreneurs are associated with starting companies and all things business. This association explains about 1% of what entrepreneurship is.

How do I know? I was once a creator of businesses, and now that I’m not, people often mistake who I am. The number one question I get is: “Why don’t you start another business?” I have no intention of starting a business because that has nothing to do with me living like an entrepreneur. Entrepreneurship is a way of life.

Entrepreneurship is this:

It’s how you think
It’s how you view the world
It’s an expression of creativity
It’s creating something new

Entrepreneurship is so much more than being a founder or creating companies. We are missing the whole point of entrepreneurship, and I’m going to explain it to you in simple terms. You may already be an entrepreneur and not realize it, because its meaning has been lost. I am going to help you rediscover what it means to be an entrepreneur so you can utilize its true meaning in your own life.
https://medium.com/better-marketing/entrepreneurship-is-more-than-being-a-founder-5013f9c17235
['Tim Denning']
2019-08-22 01:53:14.543000+00:00
['Life Lessons', 'Entrepreneurship', 'Startup', 'Self Improvement', 'Life']
How Emotionally Intelligent People Deal With Their Problems the Right Way
This reminds me of the time I was on the way home from a mini-vacation. We had dinner at a busy restaurant. I used to work in a kitchen in high school and college. My (then) wife served for a decade. Her mother has been a caterer and her father currently manages a restaurant. In short, we’re very patient and understanding when it comes to the kitchen and waitstaff.

Anyway. We sat down and our service was going well to start. Then, time passed. And as time passed, we saw the server less and less. Our meal took so long, and our waiter was so absent, that we had to go to the front of the restaurant and check on the status of our meal. Not only did we not get our food, but we watched as nearly every table around us received theirs in a timely fashion. We suspected the waiter may have forgotten to put the order in or made some mistake, which would have been fine had he addressed the situation and, more importantly, addressed us. I suspect he was either ashamed, afraid, or both. Finally, by the time we did receive our food, he acknowledged the problem and offered a discount on the meal. Had he followed the customer playbook and stayed attentive while trying to fix his mistake, everything would’ve been fine.

The Power of Leaning In and Facing Your Problems Head-on

Why am I telling you this story? Because it shows a perfect example of the choice you have to make when you encounter problems. Your two choices are either to avoid problems or to lean in and fix them. Avoidance lets you off the hook in the short term, but it doesn’t fix your problem, and often the problems you ignore compound and get worse. Your finances are bad, so you don’t want to look at your bills — all the while racking up late fees, interest, and penalties. You don’t want to look at the scale, your diet, or your exercise habits, so you rationalize your way out of thinking about them. Your relationships, career, and life in general could all use a tuneup, but you won’t check under the hood.
Burying your head in the sand does help to a degree — you can avoid the discomfort you’re afraid of — but once you peek up you’re in a far worse situation. Better to just lean into your problems. When you lean into your problems, you more or less have this conversation with yourself: “Okay. This is where I’m at. This is what happened. I’m responsible for the situation, and addressing it head-on — while uncomfortable — will lead to either a solution or the peace of mind of knowing I did everything I could to improve it.”

Flip Problems On Their Head and Master Your Mindset

When was the last time you addressed yourself in a real way and took responsibility for your life? What are your problems? Which of your problems are you rationalizing? Are you shifting blame to someone or something when it’s really on you (it’s almost always on you)? When you lean in, you build emotional muscles that grow as a result of the stress that comes with truly dealing with a problem. Not only can you produce a better outcome, but you get to put a deposit into your confidence & resilience account. Do this often enough, and you’ll have the reward of being able to say, “I’m someone who can handle my life.” That reward is not trivial. Think about it. How would you divide up the population between people who are scrambling, frazzled, and scraping by vs. people who are handling their lives? There’s something scary & vulnerable but powerful & liberating in being able to stand emotionally open-chested at your circumstances, bearing them with a relaxed sense of responsibility, and dealing with them as pieces of your life you’ll inevitably run across, thus not needing to ‘worry’ over them.
https://medium.com/curious/how-emotionally-intelligent-people-deal-with-their-problems-the-right-way-ec7658cff1f6
['Ayodeji Awosika']
2020-11-09 20:50:40.841000+00:00
['Emotional Intelligence', 'Mental Health', 'Inspiration', 'Self Improvement', 'Psychology']
Kubernetes and Big Data: A Gentle Introduction
KLau · Feb 4 · 11 min read

This blog is written and maintained by students in the Professional Master’s Program in the School of Computing Science at Simon Fraser University as part of their course credit. To learn more about this unique program, please visit {sfu.ca/computing/pmp}.

Photo by Tom Fisk from Pexels

Kubernetes, what is that?

Kubernetes has been an exciting topic within the DevOps and Data Science communities for the last couple of years. It has continuously grown into one of the go-to platforms for developing cloud-native applications. Built by Google as an open-source platform, Kubernetes handles the work of scheduling containers onto a compute cluster and manages the workloads to ensure they run as intended. However, there is a catch: what does all that mean? Sure, it is possible to conduct additional research on Kubernetes, but many articles on the Internet are high-level overviews crammed with jargon and complex terminology, assuming that most readers already have an understanding of the technical foundations.

In this post, we attempt to provide an easy-to-understand explanation of the Kubernetes architecture and its application in Big Data while clarifying the cumbersome terminology. However, we assume our readers already have certain exposure to the world of application development and programming. We hope that, by the end of the article, you will have developed a deeper understanding of the topic and feel prepared to conduct more in-depth research on your own.

What are microservices?

A fictional Buy-a-Book online store with three microservices: Login, Buy and Return. Each microservice is decoupled from the rest of the app and is responsible for one specific task. The services interact with each other through APIs. (source)

To gain an understanding of how Kubernetes works and why we even need it, we need to look at microservices.
There isn’t an agreed-upon definition of microservices, but simply put, microservices are smaller, detached components of a bigger app that each perform a specific task. These components communicate with each other through REST APIs. This kind of architecture makes apps extensible and maintainable. It also makes developer teams more productive, because each team can focus on its own component without interfering with other parts of the app. Since each component operates more or less independently from the other parts of the app, it becomes necessary to have an infrastructure in place that can manage and integrate all these components. This infrastructure will need to guarantee that all components work properly when deployed in production.

Containers vs. Virtual Machines (VMs)

Left: A containerized application. Each app/service runs on a separate container on Docker, currently the most popular and widely-adopted container technology. Right: Each app/service is running on a separate virtual machine placed on top of a physical machine. (source)

Each microservice has its own dependencies and requires its own environment or virtual machine (VM) to host it. You can think of a VM as one “giant” process on your computer that has its own storage volumes, processes and networking capabilities, separate from the rest of your computer. In other words, a VM is a software-plus-hardware abstraction layer on top of the physical hardware, emulating a fully-fledged operating system. As you can imagine, a VM is a resource-consuming process, eating up the machine’s CPU, memory and storage. If your component is small (which is common), you are left with large underutilized resources in your VM. This makes most microservices-based apps that are hosted on VMs time-consuming to maintain and costly to extend.

A Docker Host can handle multiple containers, with each container defining a detached microservice.
For example, one container holds all the files, another defines the MySQL database, the PHP backend is defined in yet another container, and so forth. Extending the app (e.g. adding a Python-based machine learning model) is simply a matter of creating another container inside the Docker Host without affecting the other components. (source)

A container, much like a real-life container, holds things inside. A container packages the code, system libraries and settings required to run a microservice, making it easier for developers to know that their application will run no matter where it is deployed. Most production-ready applications are made up of multiple containers, each running a separate part of the app while sharing the operating system (OS) kernel. Unlike a VM, a container can run reliably in production with only the minimum required resources. Therefore, compared to VMs, containers are considered lightweight, standalone and portable.

Diving into Kubernetes

We hope you are still on board for the ride! Having gone through what containers and microservices are, understanding Kubernetes should be easier. In a production environment, you have to manage the lifecycle of containerized applications, ensuring that there is no downtime and that system resources are efficiently utilized. Kubernetes provides a framework to automatically manage all these operations in a distributed system, resiliently. In a nutshell, it is an operating system for the cluster. A cluster consists of multiple virtual or real machines connected together in a network. Formally though, here’s how Kubernetes is defined on the official website:

“Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.”

Kubernetes is a scalable system.
It achieves scalability by leveraging a modular architecture. This means that each service of your app is separated by defined APIs and load balancers. A load balancer is a mechanism that distributes incoming work across components (be it servers or services) so that each one operates within its available capacity. Scaling up the app is merely a matter of changing the number of replicated containers in a configuration file, or you could simply enable autoscaling. This is particularly convenient because the complexity of scaling up the system is delegated to Kubernetes. Autoscaling is driven by real-time metrics such as memory consumption, CPU load, etc. On the user side, Kubernetes will automatically distribute traffic evenly across the replicated containers in the cluster and, therefore, keep the deployment stable.

Kubernetes also allows more optimal hardware utilization. Production-ready applications usually rely on a large number of components that must be deployed, configured and managed across several servers. As described above, Kubernetes greatly simplifies the task of determining the server (or servers) where a certain component must be deployed, based on resource-availability criteria (processor, memory, etc.).

Another awesome feature of Kubernetes is how it can self-heal, meaning it can recover from failure automatically, such as by respawning a crashed container. For example, if a container fails for some reason, Kubernetes will automatically compare the number of running containers with the number defined in the configuration file and start new ones as needed, ensuring minimum downtime.

Now that we have that out of the way, it’s time to look at the main elements that make up Kubernetes. We will first explain the lower-level Kubernetes Worker Nodes, then the top-level Kubernetes Master. The Worker Nodes are the minions that run the containers, and the Master is the headquarters that oversees the system.
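Before looking at the individual components, the self-healing behavior described above can be sketched as a reconciliation loop. This is purely an illustration in Python, not actual Kubernetes code; the pod naming scheme is our own.

```python
# A toy reconciliation loop: compare the desired replica count (from a
# configuration file) with the containers actually observed running,
# then start or stop containers until the two match.

def reconcile(desired_replicas, running):
    """Return the set of running pods after one reconciliation pass."""
    running = set(running)
    next_id = 0
    # respawn pods until we reach the desired count (self-healing)
    while len(running) < desired_replicas:
        while f'pod-{next_id}' in running:
            next_id += 1
        running.add(f'pod-{next_id}')  # the scheduler would place it on a node
    # scale down if there are more replicas than desired
    while len(running) > desired_replicas:
        running.discard(sorted(running)[-1])
    return running

# a container crashed: only 2 of the 3 desired replicas are running
state = reconcile(3, {'pod-0', 'pod-2'})
print(sorted(state))  # ['pod-0', 'pod-1', 'pod-2']
```

Real Kubernetes controllers run loops of exactly this flavor continuously, comparing the declared desired state with the observed state of the cluster.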
Kubernetes Worker Node Components

Kubernetes Worker Nodes, also known as Kubernetes Minions, contain all the necessary components to communicate with the Kubernetes Master (mainly the kube-apiserver) and to run containerized applications.

Docker Container Runtime

Kubernetes needs a container runtime in order to orchestrate. Docker is a common choice, but alternatives such as CRI-O and Frakti are also available. Docker is a platform to build, ship and run containerized applications. Docker runs on each worker node and is responsible for running containers, downloading container images and managing container environments.

Pod

A pod contains one or more tightly coupled containers (e.g. one container for the backend server and others for helper services such as uploading files, generating analytics reports, collecting data, etc.). These containers share the same network IP address and port space, and can even share volumes (storage). A shared volume has the same lifecycle as the pod, which means the volume will be gone if the pod is removed. However, Kubernetes users can set up persistent volumes to decouple them from the pod; mounted persistent volumes still exist after the pod is removed.

kube-proxy

The kube-proxy is responsible for routing the incoming and outgoing network traffic on each node. The kube-proxy is also a load balancer that distributes incoming network traffic across containers.

kubelet

The kubelet gets a set of pod configurations from the kube-apiserver and ensures that the defined containers are healthy and running.

Kubernetes Master Components

The Kubernetes Master manages the Kubernetes cluster and coordinates the worker nodes. It is the main entry point for most administrative tasks.

etcd

The etcd is an essential component of the Kubernetes cluster. It is a key-value store for sharing and replicating all configurations, states and other cluster data.
kube-apiserver

Almost all communication between the Kubernetes components, as well as the user commands controlling the cluster, happens through REST API calls. The kube-apiserver is responsible for handling all of these API calls.

kube-scheduler

The kube-scheduler is the default scheduler in Kubernetes; it finds the optimal worker node for each newly created pod to run on. You can also create your own custom scheduling component if needed.

kubectl

kubectl is a client-side command-line tool for communicating with and controlling Kubernetes clusters through the kube-apiserver.

kube-controller-manager

The kube-controller-manager is a daemon (background process) that embeds a set of Kubernetes core controllers covering endpoints, namespaces, replication, service accounts and more.

cloud-controller-manager

The cloud-controller-manager runs controllers that interact with the underlying cloud service providers, enabling those providers to integrate Kubernetes into their cloud infrastructure. Providers such as Google Cloud, AWS and Azure already offer their own managed Kubernetes services.

Kubernetes for Big Data

One of the main challenges in developing big data solutions is defining the right architecture to deploy big data software in production systems. Big data systems, by definition, are large-scale applications that handle online and batch data that is growing exponentially. For that reason, a reliable, scalable, secure and easy-to-administer platform is needed to bridge the gap between the massive volumes of data to be processed, the software applications and the low-level infrastructure (on-premise or cloud-based). Kubernetes is one of the best options available for deploying applications on large-scale infrastructure. Using Kubernetes, it is possible to handle all the online and batch workloads required to feed, for example, analytics and machine learning applications.
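To make the component descriptions above concrete, here is a sketch of a two-container pod manifest; every name and image is an illustrative placeholder, and the commented `schedulerName` line shows how a pod would opt into a custom scheduler instead of the default kube-scheduler:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod                     # illustrative name
spec:
  # schedulerName: my-custom-scheduler  # hypothetical; uncomment to bypass the default kube-scheduler
  volumes:
  - name: shared-data
    emptyDir: {}                        # lives and dies with the pod; a PersistentVolumeClaim would outlive it
  containers:
  - name: backend                       # main application container
    image: example/backend:1.0          # placeholder image
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: report-helper                 # helper container, e.g. generating analytics reports
    image: example/report-helper:1.0    # placeholder image
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers share the pod's IP address and can talk to each other over localhost; the kubelet on whichever worker node the scheduler picks is responsible for keeping them healthy and running.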
In the world of big data, Apache Hadoop has long been the reigning framework for deploying scalable, distributed applications. However, the rise of cloud computing and cloud-native applications has diminished Hadoop's popularity (although most cloud vendors, like AWS and Cloudera, still provide Hadoop services). Hadoop essentially provides three main functionalities: a resource manager (YARN), a data storage layer (HDFS) and a compute paradigm (MapReduce). All three are being displaced by more modern technologies: Kubernetes for resource management, Amazon S3 for storage and Spark/Flink/Dask for distributed computation. In addition, most cloud vendors offer their own proprietary computing solutions.

Google Trends comparison of Apache Hadoop and Kubernetes.

To be clear, there isn't a "one versus the other" relationship between Hadoop (or most other big data stacks) and Kubernetes; in fact, one can deploy Hadoop on Kubernetes. However, Hadoop was built and matured in a landscape far different from today's. It was built during an era when network latency was a major issue, and enterprises were forced to run in-house data centers to avoid moving large amounts of data around for data science and analytics purposes. That being said, large enterprises that want their own data centers will continue to use Hadoop, but adoption will probably remain low because of better alternatives. Today, the landscape is dominated by cloud storage providers and cloud-native solutions for doing massive compute operations off-premise, and many companies also choose to run their own private clouds on-premise. For these reasons, Hadoop, HDFS and similar products have lost major traction to newer, more flexible and ultimately more cutting-edge technologies such as Kubernetes. Big data applications are good candidates for the Kubernetes architecture precisely because of the scalability and extensibility of Kubernetes clusters.
There have been some major recent moves to utilize Kubernetes for big data. For example, Apache Spark, the "poster child" of compute-heavy operations on large amounts of data, is working on adding a native Kubernetes scheduler to run Spark jobs. Google recently announced that it is replacing YARN with Kubernetes to schedule its Spark jobs, and the e-commerce giant eBay has deployed thousands of Kubernetes clusters to manage its Hadoop AI/ML pipelines.

So why is Kubernetes a good candidate for big data applications? Take, for example, two Apache Spark jobs A and B doing some data aggregation on the same machine, and say a shared dependency is updated from version X to Y, but job A requires version X while job B requires version Y. In such a scenario, job A would fail to run.

Each Spark job runs in its own isolated pods distributed over the nodes.

In a Kubernetes cluster, each node runs isolated Spark jobs in their own driver and executor pods. This setup keeps jobs' dependencies from interfering with each other while still maintaining parallelization.

That said, Kubernetes still has some major pain points when it comes to deploying big data stacks. Because containers were designed for short-lived, stateless applications, the lack of persistent storage that can be shared between different jobs is a major issue for big data applications running on Kubernetes. Other major issues are scheduling (Spark's above-mentioned implementation is still experimental), security and networking. Consider a job on node A that needs to read data stored in HDFS on a data node sitting on node B in the cluster. Unlike in YARN, which schedules compute close to the data, the data must now be sent over the network for compute purposes, which greatly increases latency.
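Concretely, submitting such an isolated job uses Spark's native Kubernetes mode (still experimental at the time of writing): spark-submit is pointed at the cluster's API server, and each job ships its dependencies inside its own container image. The host, image and jar path below are placeholders:

```
# Sketch of a Spark job submitted straight to Kubernetes (placeholders throughout)
spark-submit \
  --master k8s://https://my-cluster-api:6443 \
  --deploy-mode cluster \
  --name job-a \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=registry.example.com/spark-job-a:latest \
  local:///opt/spark/jobs/job-a.jar
```

Because job A and job B each carry their own image, the version-X-versus-version-Y conflict described above simply cannot occur. The data-locality caveat above still applies, though: the executors' pods may land far from the data they read.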
While there are attempts to fix these data locality problems, Kubernetes still has some way to go before it becomes a truly viable and realistic option for deploying big data applications. Nonetheless, the open-source community is relentlessly working on addressing these issues, and every year Kubernetes gets closer to becoming the de facto platform for distributed big data applications because of its inherent advantages: resilience, scalability and resource utilization.

So long, my friend

In this article, we have only scratched the surface of what Kubernetes is, its capabilities and its applications in big data. As a continually developing platform, Kubernetes will continue to grow and evolve into a technology that is applied in numerous tech domains, especially big data and machine learning. If you find yourself wanting to learn more about Kubernetes, here are some suggestions on topics to explore under the "External links" section. We hope you enjoyed our article about Kubernetes and that it was a fun read.

External links:

Official Kubernetes documentation
https://kubernetes.io/docs/home/

Official Docker documentation
https://docs.docker.com/

Cloud Computing — Containers vs VMs, by IBM
https://www.ibm.com/blogs/cloud-computing/2018/10/31/containers-vs-vms-difference/

Kubernetes in Big Data Applications, by Goodworklabs
https://www.goodworklabs.com/kubernetes-in-big-data-applications/

Should you use Kubernetes and Docker in your next project?
Daniele Polencic at Junior Developers Singapore 2019
https://www.youtube.com/watch?v=u8dW8DrcSmo

Kubernetes in Action, 1st Edition, by Marko Luksa
https://www.amazon.com/Kubernetes-Action-Marko-Luksa/dp/1617293725/ref=sr_1_1?keywords=kubernetes+in+action&qid=1580788013&sr=8-1

Kubernetes: Up and Running, 2nd Edition, by Brendan Burns, Joe Beda and Kelsey Hightower
https://www.amazon.com/Kubernetes-Running-Dive-Future-Infrastructure/dp/1492046531/ref=sr_1_1?keywords=kubernetes+up+and+running&qid=1580788067&sr=8-1
https://medium.com/sfu-cspmp/kubernetes-and-big-data-a-gentle-introduction-6f32b5570770
[]
2020-02-24 18:43:18.078000+00:00
['Blog Post', 'Kubernetes', 'Big Data', 'Docker', 'Machine Learning']
The Coronavirus is Mutating. It’s Unclear if That’s a Problem.
Here's what you should know

The novel coronavirus has mutated, and the mutated form of the virus now accounts for most cases in the U.S. and across the globe. That's the major finding of a recent study, published July 3 in the journal Cell, which also found evidence that the mutated coronavirus may be more infectious than its predecessor. "The first cases on the West Coast of the U.S. were the original type that emerged in China," says Erica Ollmann Saphire, PhD, one of the authors of the new study and a professor at the La Jolla Institute for Immunology in California. "The new form came to the East Coast of the U.S. from Europe, and it's now globally dominant." Her study includes color-coded graphs that reveal how the mutated virus appeared to wrest control from the older virus; in New York City, the mutated virus was dominant almost from the get-go. The principal author of the new study, Bette Korber, PhD, says that the newer "variant" of the virus was first identified in Italy, and that it likely emerged from a mutation in a single individual. While the old virus persisted in some parts of the U.S., and may still be the most prevalent form in a few places, Korber, who is a research scientist at the U.S. Government's Los Alamos National Laboratory, says that the new variant has made up the majority of cases in just about every U.S. sampling location since May. More research is needed to confirm this study's finding that the new variant of the virus may be more transmissible — meaning more spreadable — than the older one. But Saphire says that both lab and clinical work in her group's study indicate that the mutated virus creates more copies of itself and also produces a higher "viral load" in the people it infects, both of which could contribute to heightened spread.
She also says that in places where both forms of the virus have circulated, the newer form usually "took over," which suggests that it may have a fitness advantage over the original virus. All of this has important implications for future research, including the development of treatments and vaccines. "It's very clear we need to keep an eye on these mutations," Saphire says.

Understanding the new virus

Viruses, like living organisms, have the ability to evolve: to acquire adaptations that increase their ability to survive and thrive. These adaptations can be major or minor. In some cases, they can change a virus in ways that interfere with the actions of a vaccine. The influenza virus, for example, is genetically unstable and constantly evolving; this partly explains why each new flu season requires a new flu vaccine, and why the efficacy of that new vaccine varies from year to year. Compared to the flu virus, coronaviruses are relatively stable. Experts tend to view this stability as a good thing because it means that, if and when we have an effective SARS-CoV-2 vaccine, the virus is unlikely to mutate in ways that diminish that vaccine's potency. But Saphire says that the novel coronavirus is so prevalent — more than 13 million confirmed cases worldwide as of July 14, and a great many more unconfirmed cases — that even if mutations are statistically rare, they will emerge. She explains that most of these mutations "don't go anywhere," meaning they don't confer any fitness advantage on the virus, and so they do not spread widely among people. But the mutations that her team identified are the rare exceptions. "The variant that has emerged has four changes in it, including one in the surface spike protein, which is the one we're concerned about," she says. "People originally claimed that even if mutations happen, they wouldn't have any effect on functionality.
But clearly that’s not true.” “Spike proteins” are those nubby protrusions that coat the outside of the virus, making it resemble a WWII-era water mine. Those spike proteins allow the virus to fuse with and invade healthy cells. Saphire says that they’re also a “major target” in the vaccines that are now in development. If a mutation causes changes to these spike proteins, she says, “one of the worries is that the antibodies might suffer.” Antibodies are immune system proteins that can effectively block the virus’s ability to attach itself to healthy cells. If the virus changes in ways that render these antibodies ineffective, that’s bad news for several reasons. For one thing, it means that people who have already weathered a SARS-CoV-2 infection might not be protected from a second infection. It also means that vaccines, which are designed to make the immune system pump out protective antibodies, might not work as well — or at all. Saphire also says that prospective monoclonal antibody treatments, which involve injecting people with copies of virus-repelling proteins, could also be rendered ineffective by spike-protein mutations. So far — and thank goodness — the dominant SARS-CoV-2 mutations don’t seem to interfere with the helpful action of antibodies. “This is a tremendous relief,” she says. Also, the new variant doesn’t seem to make people any sicker than the old one; some of Saphire’s colleagues performed an analysis of roughly 1,000 Covid-19 patients in Sheffield, England, and that analysis did not find evidence that the newer virus was associated with worse hospital outcomes. But the mutations that her study examined emerged quickly — within a few months of the novel coronavirus’s initial identification in China. It’s a certainty that the virus will undergo more mutations — some of which may have already occurred. And these mutations may have wide-ranging effects. 
What new mutations could bring

Since the start of the pandemic, virus experts have understood that the novel coronavirus would mutate. But the common assumption was that these mutations would be so minor as to be insignificant. "People originally claimed that even if mutations happen, they wouldn't have any effect on functionality. But clearly that's not true," says Lee Riley, MD, a professor and chair of the Division of Infectious Diseases and Vaccinology at the University of California, Berkeley School of Public Health. Like Saphire, Riley says that any mutations that stick are ones that will increase the virus's "fitness," and improved fitness could have good or bad ramifications for the human species. "It may become adapted in ways that help it spread even more efficiently, which may already have happened," he says. That's bad. On the other hand, the virus may also adapt in ways that render it less pathogenic, meaning the sickness it causes could become less severe. In fact, Riley says that this sort of illness-weakening "attenuation" is what normally happens to viruses as they mutate, the reason being that viruses that are too deadly tend to run out of hosts. Stephen Morse, PhD, is a professor of epidemiology and infectious disease at Columbia University's Mailman School of Public Health. He says that as viruses mutate in ways that increase their ability to spread — which, according to the new study, may have already occurred — this change tends to go hand in hand with a decrease in the virus's lethality. "That's what usually happens," he says. (The recent and now nearly month-long surge in U.S. cases has thus far produced only a modest spike in deaths, though experts say that there are many potential explanations for this that have nothing to do with the virus weakening.) Saphire agrees that viruses often weaken over time. "As viruses adapt to human populations, they tend to get milder," she says.
But a tendency is not a certainty, and she says that there’s no knowing just what the novel coronavirus will do next. “Mutations are random events,” she adds. She and countless other scientists will keep a close eye on the virus. But whether SARS-CoV-2’s evolution will be helpful or harmful to its human hosts remains an open question.
https://elemental.medium.com/yes-the-coronavirus-has-mutated-no-this-doesnt-mean-we-re-doomed-a7512e150cc1
['Markham Heid']
2020-07-16 18:28:40.705000+00:00
['Coronavirus', 'Covid 19', 'Science', 'Pandemic', 'The Nuance']
A plastic for a penny
Now let's go to Vietnam, where there is no infrastructure for plastic waste management. How does a middle-income country with other priorities manage its waste? Easy: it burns it or lets it wash into the sea. Most plastic waste in the country is not recycled, which is why Vietnam is one of the champions of pouring plastic into the ocean. Only a tiny part of the plastic used in the country is recycled. In the south of Hanoi, the Vietnamese capital, there is a neighbourhood dedicated to plastic management. I went to visit this beautiful place. People who don't want to bother with their plastic waste often burn it on the spot. It is a relatively harmless practice in the countryside, but in Hanoi, one of the most polluted cities in the world, plastic burning contributes to air pollution and respiratory disease. People who don't want to burn their trash simply put it in a plastic bag and leave it on the side of the street. During the night, trash collectors come to take those bags. These collectors are paid according to the weight of the waste they bring to the recycling company. Not all waste is made equal, and some is more valuable than the rest. For example, hard plastic used for tables and chairs is more valuable than plastic bottles, so waste collectors try to pick up the most valuable waste to increase their pay.
https://medium.com/environmental-intelligence/a-plastic-for-a-penny-d5978f09785b
['Thuận Sarzynski']
2020-01-26 11:21:01.437000+00:00
['Travel', 'Vietnam', 'Sustainability', 'Environment', 'Photography']
On Writing How
Photo by Steve Johnson on Unsplash Artist is the man or woman who gets to know the structure of the mind rather than its content for then he can shape words into words thus his words will become of universal mind then the content of thought may it be the passing leaf or the flow of times finds absolute resonance in every word and in every heart that read the words. For now, the minds see themselves on the page, and words are his or her words. Now the man at the decades apart is here, and his mind is written unto pages saved on disk writers, and in painting, those who had recognized the same patterns of the mind found ways to live and live and live and go on!
https://medium.com/scrittura/on-writing-how-ca8504a0fb22
['Kerim A. Altuncu']
2019-12-27 05:12:13.835000+00:00
['Spirituality', 'Writing', 'Life', 'Writer', 'Science']
Defining brand storytelling using the hero’s journey
Storytelling is en vogue, a phrase which here means "brands and agencies are all talking about it without taking due time to appreciate what it means". Done right, brand storytelling is indispensable as a framework for telling authentic messages in a way that resonates with the consumer. As it stands, it's a nebulous catch-all term that lets anything said or written about you, be it on Twitter, a YouTube comments section or a local news feature, be construed as part of your de facto story, leaving you with precious little control of the narrative.

So how do you take it back? Direct your own narrative. Develop a story so memorable that it defines your brand: something compelling, entertaining, and unshakeably you. A story that warrants undying loyalty and one that, just for a moment, makes everyone forget you're a brand. Sure, this is typically the realm of bestselling novels and film franchises, but what have novelists and Hollywood got that you haven't?

The hero's journey

Star Wars, Toy Story, The Lord of the Rings, Die Hard and the rest of your favourite film franchises all have something in common, along with a good proportion of the myths, fables and fairytales you read at school: the hero's journey. Put forward by comparative mythologist Joseph Campbell in his 1949 The Hero with a Thousand Faces, and adopted later by George Lucas, Pixar and many more, the theory posits a universal formula for a story that resonates with our collective unconscious. Dan Harmon (creator of Rick and Morty and Community) has since simplified the theory into eight steps, which I've illustrated here using Toy Story:

1. Establish a protagonist (You). A character is in a zone of comfort. Woody is Andy's favourite toy.

2. Something ain't quite right (Need). We learn that things aren't perfect in our hero's universe, setting the stage for external conflict. It's Andy's birthday, and Woody needs to remain Andy's favourite (or at least thinks he does).

3. Crossing the Threshold (Go). They enter an unfamiliar situation and begin their journey. Andy gets a Buzz Lightyear, who replaces Woody as his favourite toy. Woody pushes Buzz out of a window, and is thrown out by his friends.

4. The Road of Trials (Search). They adapt to their unfamiliar situation. The protagonist is broken down into their component parts and gains the skills they'll need to achieve their goals and return home (the most obvious example is a training montage; think Mulan, Rocky, The Empire Strikes Back). The two fight, are almost run over by a truck, and infiltrate a pizza delivery van. They're captured and taken back to Sid's house.

5. Meeting with the Goddess (Find). They get what they wanted; the Need (step 2) is fulfilled. Buzz loses his arm and finally accepts the hard truth that he is, in fact, a toy. Seeing Buzz's vulnerability, Woody reaches out, offering teamwork and friendship.

6. Pay the Price (Take). They pay a heavy price for achieving their goal, but a secondary goal is achieved. Buzz is strapped to a rocket and surrounded by (seemingly) cannibalistic toys. But the rocket buys Woody time to plan their escape.

7. Bringing it Home (Return). They cross the return threshold and come back to where they started. Woody and Buzz land safely back in the car, and Andy assumes they were there all along.

8. Master of Both Worlds (Change). The protagonist is in control of their situation, having changed. Woody is no longer worried about being Andy's favourite, since now he has a friend in Buzz.

Dan Harmon's Hero's Journey

Represented as a circle, we can see the descent and return between order and chaos, and between conscious and unconscious too. As soon as we cross the threshold at step three, we journey into the collective unconscious and find out something important about who we are, before returning to the safety of the waking mind.
Campbell called myth a "mirror to the ego"; seeing ourselves in that mirror is what we engage with, and this is where the value lies for us as consumers. The hero's journey rings true because it shows us more of ourselves, and of what we all have in common. This is what your consumer will remember, and what will endear them to you.

How to make it work for you and your customer

Find your hero. Is it the consumer? Great. Dos Equis' Most Interesting Man in the World immediately eclipsed the beer itself with his Internet meme status, and the product's popularity soared. Is it the product? Lego continue to assert their pop cultural relevance with hugely successful feature films and an ever-expanding videogame universe which, with sales of over 100 million units, is the 15th most successful franchise in the industry. Luckily you don't need a $60 million budget or hours of screen time to impress your story upon the world. All you need, to mould the way people talk and write about you, is to tell a unified story across your touchpoints.

Show us the mirror

Create a character (or set of characters) to define your brand. Give them an overarching narrative that has room to unfold over months or years, and so long as the content you release in those years sits under that arc, it will feel consistent and unified, and give you regular opportunities to direct your own narrative. Nobody's going to pay the same attention to your brand story that they do to a major movie franchise. But that doesn't mean it matters any less. If you (and your agencies) know your brand story inside out, you can make sure the content you create has its logical place in the overarching narrative. That makes you relevant, accessible and memorable, and it will build a reputation for human affinity that puts you head and shoulders above the brands without one.
https://medium.com/nowtrending/defining-brand-storytelling-using-the-heros-journey-c8e5514f8757
['Curtis Batterbee']
2017-04-06 16:54:15.872000+00:00
['Branding', 'Movies', 'Marketing', 'Dan Harmon', 'Storytelling']
Blinkist Review 2020 & Get Blinkist Premium for free
Are you an avid reader? Do you wish you could read all of your favorite books? For avid readers, there is no better feeling than getting through as many favorite books as they can. Sadly, there's a time constraint: people in today's workaholic lives have a hard time finding the time to read their lengthy favorites. But it doesn't have to be like this anymore. There's a solution to this problem: an app called "BLINKIST". Blinkist satisfies your craving to read your favorite books in just 15 minutes. No more sitting on a couch for hours to finish just one book. Sounds amazing, right, folks? However, you might be wondering, "Is it worth paying for a Blinkist subscription?" Well, in that case, we will cover all the bases in this Blinkist review so that you can decide for yourself whether you want it or not.

Blinkist Review: WHAT IS BLINKIST?

In short, Blinkist is an app, founded in 2012, that promises to serve up book summaries in under 15 minutes of your time. To make nonfiction book summaries readable in under 15 minutes, Blinkist presents books in a format it calls "blinks" (a "book in blinks"), where each blink carries a key insight from the book. In the age of AI and machine learning, Blinkist still prefers humans over algorithms: it has its own team of editors who read, shorten and distill the books into blinks without compromising quality. That shows how quality-conscious Blinkist is. Blinkist has more than 2,500 nonfiction book summaries in its library, more than enough to keep you occupied.

How much does Blinkist cost?

Blinkist comes in both free and premium plans, and the premium plans come in two categories: Premium Yearly and Premium Monthly. The Premium Yearly plan is billed at $79.99 annually, which works out to only $6.67 per month, while the Premium Monthly plan is billed at $12.99 per month.
Subscribing to the Premium Yearly plan gives you a 7-day free trial, so you can explore Blinkist with zero risk and opt out of the Premium subscription at any time. The Free plan, however, basically offers one editor-picked book for you to read or listen to daily, and nothing more. So, at a price of up to $12.99 per month, another question arises: "Is it worth paying for a Blinkist subscription?" Let's look into the details. [And if you suspect the subscription price Blinkist is asking is too high, scroll down for a head-to-head price and collection comparison of Blinkist and its alternatives.]

Blinkist Review: Is it worth paying for a Blinkist subscription?

Blinkist Premium can cost as little as $6.67 and as much as $12.99 per month. That's quite an amount of money to pay for a service if it isn't used to its potential or doesn't stand up to its asking price. But Blinkist does justify the price when you look at the premium features on offer:

- Unlimited access to the app: With more than 2,500 books in the library (and growing), there are no restrictions on how many titles you can access, read or listen to per day. Read and listen to as many as you desire and crave.
- Hands-free audio: The premium plans let you listen to the books' audio hands-free. No more hassle.
- Highlighting: How much does it hurt when you love a sentence in a book and can't mark it for future reference? Quite a lot, right? Worry no more, because Blinkist Premium lets you highlight your ideas and bookmark them for later.
- Offline access: A premium feature that lets you save or download books to your personal library so you can read or listen at any time, at your ease.
- Sync across devices and Evernote: Exactly what a premium plan should offer; Blinkist syncs your highlights across devices and to Evernote.
- Kindle support: Kindles are the best machines to read your books on, and Blinkist clearly understands that: it lets readers send their reads to their Kindles.

However, beyond the premium features, whether the app is worth paying for really depends on how you would use it. If any of the following describes you, Blinkist is probably a lot more worth paying for:

- A heavy commuter: We all know how boring a commute can be, and it can't always be filled by replaying the same old songs. That's where Blinkist comes into play to make the time, and the paying, worth it. With its book summaries you can really gain knowledge and insight into topics you are curious about.
- A multitasker: How good and awesome would it be to learn while you mow the lawn, do the dishes or work out at the gym? If you are the multitasking kind and love to use every spare second to learn, Blinkist is a must-have for you.
- A synopsis reader: Some books, despite being #1 sellers, just don't resonate with your type, and reading 100 pages purely on the strength of a "#1 seller" badge can be nothing but absolute time-killing. So how about reading a solid synopsis before you read the whole book and repent later for wasting the time? Perfect, right? That's where Blinkist is perfectly worth it for you, saving your time before you lose it to some nonsense "#1 seller" book.
- Curious and a conversation-winner: The more curious you are, the more knowledge you gain, and the more knowledge you have, the more conversations you win. Blinkist has the perfect composition and dosage of knowledge you need to win any conversation.
Blinkist's blinks contain insights and information you can use to hold your own in most conversations.

What categories of books does Blinkist offer? With more than 2,500 titles (and new ones added every week), Blinkist covers almost every nonfiction category. Just a few months ago there were 19 categories in total; as of now, Blinkist offers 27 nonfiction categories:

Entrepreneurship & Small Business
Politics
Marketing & Sales
Science
Health, Fitness & Nutrition
Personal Growth & Self-Improvement
Economics
History
Communication Skills
Corporate Culture
Management & Leadership
Motivation & Inspiration
Money & Investments
Psychology
Productivity & Time Management
Sex & Relationships
Technology & the Future
Mindfulness & Happiness
Parenting
Religion & Spirituality
Creativity
Education
Nature & Environment
Career & Success
Biography & Memoir
Philosophy
Society & Culture

Blinkist Review: What are its Pros & Cons?

Pros:
Less reading, more learning.
Makes use of spare time while commuting or at the gym.
Audiobook summaries to listen to rather than staring at a screen and reading.
Downloadable audiobooks.
Syncing across devices and to Evernote.
Easy access to bookmarks and highlights.
A large collection to read from.
High-quality blinks.
A choice of narration voices for audiobooks.
Premium plans for larger teams.
Competitive subscription pricing compared with similar Blinkist alternatives.
Responsive support.

Cons:
You lose the original feel of a hardcover paper book.
The blinks don't cover every book, and some books are better read raw and uncut.
The information can sometimes get overwhelming and confusing, as blinks in similar categories tend to blend into each other.
Premium Monthly is insanely expensive, roughly twice the effective monthly cost of the Premium Yearly plan.
Blinkist Review: Blinkist vs the Alternatives

From the comparison above between Blinkist and its alternatives, it's loud and clear: Blinkist is, undoubtedly, the #1 book-summary service among the alternatives. Only getAbstract and Soundview come close to Blinkist in collection size, but Blinkist wins out thanks to its more affordable plans and pricing.

How to get Blinkist Premium for free? Who doesn't love free things? Everyone does, right? We have talked at length and reviewed Blinkist thoroughly. Though the subscription costs just $7 per month, the equivalent of one or two cups of coffee, it never hurts to get something like Blinkist Premium for free. So how amazing would it be if you could win real Blinkist Premium for free? And no, we are not promoting unethical ways to get Blinkist Premium for free.

So, how do you get Blinkist Premium free?
Sign up to Blinkist (if you are new and don't have a Blinkist account) and go to your account.
If you are using the web version, tap "You" and select "Invite Friends".
Copy the referral link and share it on your social media accounts, or ask your friends to sign up via your referral link.
For every successfully referred signup, both you and your friend get a 7-day free trial.

There is currently no limit on how many friends you can refer, which means you can keep Blinkist Premium for as long as new friends and family keep signing up via your referral link.

Blinkist Review: TL;DR

This post itself is around 1,600 words, so it seems to need a summary of its own. Here is the TL;DR, a summary of this review of a book-summarizing product. Blinkist is the #1 book-summarizing product, there is no denying it. Yes, it has some drawbacks, but that is natural. In fact, Blinkist outdoes every competitor in terms of price and collection size.
For the cost of a cup of coffee (or for nothing, if you follow the referral method above to get Blinkist Premium free), you get an immense collection of books to learn from. You will never be bored or let your time go unproductive while commuting or standing in a queue, and your knowledge and book-reading cravings will be fulfilled. And if you still feel you need a more in-depth Blinkist review, go ahead: scroll to the top and start reading about whether Blinkist is worth paying for or not!
https://medium.com/prabidhi-info/blinkist-review-get-blinkist-premium-free-trial-7047760b046b
['Mahesh Shrestha']
2020-09-20 10:33:07.954000+00:00
['Book Review', 'Productivity', 'Blinkist', 'Reading', 'Books']
Stop Worrying and Just F*cking Do It!
Some call it procrastination, others call it overthinking. You can call it what you want, but the fact of the matter is you're not taking action, which means you're not moving forward. This morning, as I went about my daily routines, getting everyone ready, dropping the kids off at school and day-care and then heading off to work myself, there was one question at the forefront of my thoughts: "What shall I write about today?" Perhaps your day even started off in a similar way? As the morning progressed, I did what I had to do at work, but in the back of my mind the same question remained. I contemplated various ideas, sketching out a blueprint in my head of how they might play out. What could I open the piece with, how might I bulk out the middle, and of course what conclusion would I arrive at to close it all off? A few hours passed with me dissecting a number of different topics in this way, hoping for an 'aha' moment, but nothing seemed to appeal to me. "Maybe a quick look through my drafts list would be the catalyst I need to get off the mark?" I was looking for something that sparked my interest, that really jumped out at me, a topic that had me excited to expand upon while at the same time bringing some value to the reader. But again, nothing seemed to pull me in. It was beginning to look like one of those days where you just can't get started. When every time you try to get the ball rolling, that flow state is just nowhere to be seen. Do you know what I mean? Then it begins to play on your mind; you try to figure out why you're finding it so hard to pull the trigger today. Perhaps you didn't get a good night's sleep, or maybe you've been feeling a little under the weather lately. Whatever the reason, getting started seems almost impossible. Things just don't feel 'right'. Frustration starts creeping in. Your ideas get worse and worse. Now you're clutching at straws.
Hoping for something, anything, that will help you break free from this stagnant state. And then, voila, a breakthrough occurs when you suddenly remember: things don't have to be perfect in order for you to get started. Stop Worrying and Just F*cking Do It! All too often we sit around just waiting for something to happen, searching for that perfect moment to begin and get after it. I know this because it happens to me, a lot. All the tools to get going could be right there in front of you, but still you'll find an excuse to put things off, just for a little bit longer. "I don't know where to begin." "I just have to do a little more research." "There's no point in starting now, I'll wait until tomorrow." "Once I do _____, _____ and _____, then I can start _____." (Fill in the blanks.) No matter what you're trying to achieve, there will always be a 'reason' (an excuse) why you can't get started right away. But really, what's the worst that can happen? You start a new project, piece of writing, business venture, whatever, and you soon discover things aren't going exactly as you'd expected. So what? There is nothing stopping you from changing the plan, making adjustments, or just pulling the plug altogether and starting again on something new. But at least you will have tried and learned what to do, or what not to do, for next time. So it's time to stop overthinking and start taking action. There is nothing to be afraid of. And as Dan Pena, The 50 Billion Dollar Man, says: "JUST FUCKING DO IT!"
https://medium.com/the-dream-verse/stop-worrying-and-just-f-cking-do-it-8e28e708cdc1
['Ryan Justin']
2019-11-13 10:36:02.884000+00:00
['Inspiration', 'Entrepreneurship', 'Personal Development', 'Self Improvement', 'Motivation']
From Solutions to Problems
One thing to remember is that "Big A" Agile doesn't create great products alone. When I say to build a framework, I am certainly not suggesting that you simply adopt Scrum and be done with it. Agility, if anything, has led to a breakdown in how we approach problems. Somewhere along the way we stopped caring about discovery and started focusing on delivery, because "working software" is better than nothing. Maybe some of us never cared about discovery to begin with. The "working software is better than nothing" principle is true, but also inherently flawed, because it fails to mention the critical work before a single line of code is written. The key takeaway here is that "working software" is only valuable if it's solving a problem for someone. Finding the balance between the right amount of discovery and delivery is both an art and a science. Personally, I doubt copying a single framework "by the book" works in most organizations. I think it's important to draw on principles and philosophies from different schools of thought throughout your product discovery and delivery process. You may research and test using Design Thinking, Lean Startup, Scrum, or XP, but in the end you will likely end up with elements of each of these in your framework. Next, we'll walk through the high-level stages of iterative discovery.

Understanding is always a good first step

So, you have a structure; now what? Successful problem solving comes down to empathy and framing, and this is often a good place to start. You may already have some level of understanding of where to focus your efforts. But if not, what are the inputs or events that will trigger the discovery exercise? Typically, these vary based on your organization's maturity and the channels available for your customers to voice their feedback. Here are some good places to begin:

Trending requests from customers and prospects to your Sales and Success teams.
Ideas sourced from a user group, community, or customer forum.
Common RFx questions or requirements from prospects.
Help-desk and support-ticket data about common challenges.
Post-mortem analysis of lost prospects and/or churned customers.
Continuous feedback via surveys such as NPS.

With information like this available, you can begin to focus your discovery. A non-trivial amount of time should be spent in front of customers and users, listening to their challenges and framing them in a way that's consumable for your stakeholders. If you come from a circle that deals with traditional "projects", you may think this sounds synonymous with big upfront planning. You're both right and wrong. This isn't a typical "plan", but failing to plan is often planning to fail. There should be some strategic thought put into the problems you choose to tackle, and this data simply helps with your prioritization.

Lost in Translation

Keeping in mind the initial channels of input covered above, you may need to do some translation. Because our user community isn't exposed to our discovery process 100% of the time, customers and other stakeholders may have become accustomed to making demands. That's why we often hear things like "build X" or "design a feature that does Y" rather than "help me solve challenge Z". This is also because customers are great at imagining what they think they want, but really bad at knowing what they actually need. Herein lies the incredibly important step of problem validation. I won't get into the tactical details, but problem validation can be summed up as translating inputs like "build X" into well-defined, clearly articulated problems, which are then validated with the same people who suggested them. And I can't stress enough the importance of actual human interaction with your user base through customer interviews, not simply sending out a mass survey. Take a moment to reflect on your own language as well; this is also important.
The terminology used to describe your work, and to describe how you work, can heavily influence the success of the work itself. You'll want to ensure that when you are customer-facing and trying to validate a problem, the message you're sharing resonates across various audiences. I've found writing a glossary to be a useful exercise. You can use it as a tool for translating your product vernacular into simple terms that your customers and other stakeholders will understand, a lingua franca for problem-solving, if you will.

There are many ways to generate a good idea

We have a well-defined problem that's been validated with a segment of the market. So how might we help alleviate this problem? We're at a level of understanding where we can confidently begin to dial in on potential solutions. And that exact question is actually a great one to pose when kicking off your ideation phase. Regardless of whether or not you have access to a design team to back you up, I find there are two very different and equally good ways to come up with ideas for solutions, and you should aim to exercise a combination of both. The first is done completely alone, or with a very small group. In this method, you can focus deeply on what you already know about the problem and write down the various ideas you come up with. The second method is to leverage a larger, more diverse set of stakeholders to collaboratively bring ideas to the table. This method can take the shape of a facilitated workshop or an informal discussion between colleagues. Either way, different perspectives will shed light on things you may not have picked up in your discovery, and others' previous experiences will generally strengthen the group's ideas. It also lets you see what assumptions are made by those who are not intimately familiar with the problem space.
Ideation is the more creative and fluid phase of the overall discovery journey, so I don't believe there is necessarily a "right or wrong way" to approach it. Nor do I think it matters which method is used first, as long as you're not doing all the work yourself. The input from others will help validate your ideas, and their ideas will challenge yours in a healthy way. This may even lead you to circle back to a previous stage. Ideation can, and sometimes should, be an iterative sub-process. Finally, it is good to keep a finger on the pulse of your discovery process. Even during a creative phase like ideation, it doesn't hurt to keep asking yourself a few other key questions throughout, to keep from veering too far off target.
https://medium.com/swlh/from-solutions-to-problems-5c5bc09a4e1c
['Ryan S.']
2020-07-14 23:39:39.004000+00:00
['Design', 'Startup', 'Technology', 'Product Management', 'UX']
How I used Algebra to solve a Data Science problem about Insurance Incentives
Tracyrenee · Dec 24

Just yesterday I was watching a YouTube video created by Krish Naik, a YouTube influencer who also has a Kaggle account. He made a video about how important it is for people seeking to enter the field of data science to take part in programming competitions, because it helps them develop logical, organisational skills in programming, which is something potential employers like to see. Data science competitions are varied, so something can be learned from each one that is entered. In addition, as skills are acquired, the code for these competitions can always be modified to improve accuracy and move up the leaderboard. With this in mind, I have selected a competition question from Analytics Vidhya concerning developing an incentive plan for salespersons and determining whether an insurance policy will be renewed. The datasets for this competition question can be found here: McKinsey Analytics Online Hackathon (analyticsvidhya.com). The problem statement for this question reads as follows: "Your client is an Insurance company and they need your help in building a model to predict the propensity to pay the renewal premium and build an incentive plan for its agents to maximise the net revenue (i.e. renewals minus incentives given to collect the renewals) collected from the policies post their issuance. You have information about past transactions from the policy holders along with their demographics. The client has provided aggregated historical transactional data like the number of premiums delayed by 3/6/12 months across all the products, the number of premiums paid, the customer sourcing channel and customer demographics like age, monthly income and area type.
In addition to the information above, the client has provided the following relationships: the expected effort in hours put in by an agent for the incentives provided; and the expected increase in the chances of renewal, given the effort from the agent. Given this information, the client wants you to predict the propensity of renewal collection and create an incentive plan for agents (at the policy level) to maximise the net revenue from these policies.

Equation for the effort-incentives curve: Y = 10*(1-exp(-X/400))
Equation for the % improvement in renewal prob vs effort curve: Y = 20*(1-exp(-X/5))"

I opened an .ipynb file on Google Colab, a very versatile Jupyter Notebook environment that I can use from virtually any computer with an internet connection and Google access. Because the files for this competition question were so large, I had to save them to the Google Drive that houses all of my files; saving them there means they can be used from any computer that allows Google access. Because many Python libraries are already installed on Google Colab, I only had to import two of the main Python libraries, pandas and numpy. I also read in the datasets I had saved to my Google Drive by copying each file's path and pasting it into the relevant line of code. I then checked for null values; in this instance, three columns in the train and test datasets contained null values that needed to be imputed. Because the null values were in numeric columns, I replaced all of them with the median value of each column. I created a new column called "age_in_years" by dividing "age_in_days" by 365.25, the number of days in a year; I felt converting the age to years would help with the computations. I then created a new column called "incentives", which I used to calculate the incentives for the insurance salespersons.
I did not find the formulas provided by the insurance company very helpful, because they did not define X and Y, making it difficult to determine what the incentive should be. I therefore researched on the internet what a sales incentive for insurance should be and learned that it can be anywhere between 2% and 8%. I then used trial and error to select the best rate, which is 4.5% in this post, but it could be any optimum value between 2% and 8%. It is important to create the "incentives" column at the beginning of the program, because the incentive will determine how many hours of work an insurance salesperson is willing to put in to get the sale. A higher incentive indicates the salesperson will be willing to put in more work than a lower one. In addition, if the salesperson already has a heavy workload, he is likely to prioritise his work and put a higher incentive before a lower one. Because of this, the incentive will have an effect on whether or not the customer renews the policy. I used seaborn to produce a graph of the target, train.renewal, and found a class imbalance in this dataset: putting the target values on a counter showed that class 0 comprises only 6.259% of the column. I used matplotlib to produce graphs of all the numeric columns in the dataset, which gives a picture of how the independent variables affect the dependent variable, "renewal". I then ordinal-encoded the two categorical columns in the dataset, because most models, especially in the sklearn library, will only train and predict on numeric data. Once the datasets had been fully preprocessed, I defined the X, y and X_test variables. I created a variable, test_id, which contains the data from test.id and is used at the end of the program when the dataframe with predictions is created. The y variable, the target, is composed of train.renewal and contains binary data of either 0 or 1.
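The preprocessing described above (median imputation of numeric nulls, converting age from days to years, a flat incentive rate chosen by trial and error, and ordinal encoding of the categorical columns) could be sketched roughly as below. The helper function, the `premium` column name, and the assumption that the incentive is the chosen rate applied to the premium are all mine; the article does not show its exact code.

```python
import pandas as pd

def preprocess(df, incentive_rate=0.045, premium_col="premium"):
    """Hypothetical sketch of the article's preprocessing steps."""
    df = df.copy()
    # Impute nulls in numeric columns with each column's median.
    num_cols = df.select_dtypes("number").columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())
    # Convert age from days to years (365.25 days per year).
    df["age_in_years"] = df["age_in_days"] / 365.25
    # Flat incentive rate, chosen by trial and error between 2% and 8%.
    df["incentives"] = df[premium_col] * incentive_rate
    # Ordinal-encode categorical columns so sklearn models accept them.
    for col in df.select_dtypes(include=["object", "category"]).columns:
        df[col] = df[col].astype("category").cat.codes
    return df
```

A 4.5% rate on a 1,000-unit premium would produce an incentive of 45, for example; varying `incentive_rate` reproduces the trial-and-error search the article describes.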
X is a dataframe composed of the train dataset with the following columns dropped: "renewal", "age_in_days", and "id". X_test is a dataframe composed of the test dataset with the columns "renewal" and "age_in_days" dropped. I also scaled X and X_test using sklearn's StandardScaler() function; it is important to scale the data because it improves the accuracy of the predictions. Because I like visual representations of the data, I created a two-dimensional graph of the target variables as they appear in the computer's memory. As can be seen, the 0's are intermingled with the 1's, and this is going to affect the accuracy of the predictions. I used sklearn's train_test_split() to break the X dataset up into training and validation sets. I set the validation set to 10% of the X dataset because I wanted as much trainable data as possible, hoping to improve the accuracy of the model. I defined the model as sklearn's LinearSVC(), because I have had good results with it in the past when dealing with class imbalances. I achieved an accuracy of 82%, but when I changed the value of the incentive the accuracy varied; the parameters will need to be retuned whenever the insurance salespersons' incentives change. I predicted on the validation set and attained an accuracy of 82%. I put the predictions on a confusion matrix and found that 186 0's and 1,255 1's had been misidentified. It is therefore very important to put any classification problem on a confusion matrix to determine how many examples are correct. To visualise the accuracy of the model, I created a two-dimensional graph that depicts the correct examples in purple and the incorrect ones in yellow. I then predicted on the test set.
I created a variable, incentives, and placed test.incentives in it, because this data is needed when the submission dataframe is created. I prepared my submission by creating a dataframe that included test_id (created at the beginning of the program), the model's predictions, and the incentives. I converted the submission dataframe to a .csv file, ready to be downloaded. I then downloaded the submission .csv file, put it on Analytics Vidhya's Solution Checker and checked my scores. The score obtained depends on the incentives, so if the incentives are changed the score will change. With this particular competition question I was only allowed to check my predictions eleven times a day, and I had exhausted all of my submissions for the day. I would highly suggest you try this code out and see for yourself how the incentive affects the predictions and the score. The code for this program can be found in its entirety in my personal GitHub account, the web link being found here: Misc-Predictions/AV_Hack_Insurance.ipynb at main · TracyRenee61/Misc-Predictions (github.com)
https://medium.com/ai-in-plain-english/how-i-used-algebra-to-solve-a-data-science-problem-about-insurance-incentives-cda33c72c95d
[]
2020-12-25 10:52:49.906000+00:00
['Data Science', 'Python', 'Artificial Intelligence', 'Linearsvc', 'Machine Learning']
7 Key Data Integration Trends to Watch out in 2020
Businesses around the world are advocating for connected customer experiences, one of the key objectives of Digital Transformation (DX). Integration plays a vital role in an enterprise's digital transformation journey by connecting various data sources, applications and devices; seamless connectivity between the different components of the business ecosystem is an integral part of an enterprise's technology landscape. The last decade saw the emergence of cloud-based applications and demonstrated their utility to the world; the year 2020 and the decade following it promise to deliver on the idea of a connected world, with data integration at the core of its operations. This blog looks at the key data integration trends shaping the upcoming year 2020:

• The Rise of Hybrid Integration Platforms (HIP)
• APIs at the Centerstage of Business Performance
• Push Towards Consistent and Connected Customer Experiences
• Secure B2B Integration/EDI Integration
• Real-Time Integration to Power Business Needs
• Proliferation of IoT
• Empowering Users for Enterprise Synergies

Let's look at each of these in detail.

Data Integration Trend 1 — The Rise of Hybrid Integration Platforms (HIP)

Since the evolution of IT, data has been confined as an in-house asset because that gives greater control and accountability to the enterprise. However, the confluence of digital technologies and a multifold increase in big data has led to decentralized data management. At any given time, businesses are juggling various digital transformation initiatives, and traditional legacy integration solutions are not adaptive enough to cater to the need to connect multiple endpoints hosted on-premises or in the cloud. Such situations need a specialized hybrid integration platform (HIP), one that integrates well with the existing underlying infrastructure and can embrace the cloud ecosystem equally.
As per Gartner, by 2022 at least 65% of large organizations will have implemented a HIP to power their digital transformation. And why not? After all, a HIP provides all the necessary tools for users to develop, deploy and govern the customized integration workflows their use cases require.

Data Integration Trend 2 — APIs at the Centerstage of Business Performance

APIs (Application Programming Interfaces) have been around for years as a fundamental part of any software. Lately, businesses around the world have realized the power of APIs, which let them position themselves as strategic partners rather than merely solution providers. When it comes to digitization, customers are looking for a shorter sales cycle and faster integration of SaaS applications into their existing infrastructure. APIs are the lifeline of such digital transformation initiatives, as they effectively address the connectivity needs of the modern business ecosystem. APIs are changing the way software is developed and brought to market. More and more enterprises are exposing their product APIs to third parties to build compatible, readily available integration solutions, bringing people, businesses and things together.

Data Integration Trend 3 — Push Towards Consistent and Connected Customer Experiences

The customer is king, and always has been. In a digital world, there are multiple touchpoints through which a customer interacts with an enterprise and its products and services. Integration comes to the rescue of IT leaders when it comes to offering a consistent experience to customers and making them feel part of the business ecosystem. Multiple applications need to be integrated to provide a connected experience; in other words, multi-channel integration. Multi-channel integration is of paramount importance, as it makes or breaks customer loyalty towards enterprise businesses.
In fact, a connected customer experience can also help businesses gain crucial insights into the customer journey, from awareness to decision-making and from expectations to needs, and thereby simplify their sales cycle and enhance the various touchpoints for a truly unified customer experience. After all, customer delight is enterprise delight!

Data Integration Trend 4 — Secure B2B Integration/EDI Integration

EDI (Electronic Data Interchange) standards have paved the way for B2B transactions and have enabled business between trading partners for years. The need to connect digitally with business partners has gained momentum, and that's why business-to-business (B2B) integration technologies are increasing in demand. B2B integration requires a sophisticated solution that can handle each enterprise's protocols, governance, security and business rules. Each enterprise has its own ERP system and unique standard data formats for documents such as purchase orders, invoices and shipping documents. In fact, EDI formats are often customized by enterprises with additional or modified data fields for their specific needs. The eventual goal of B2B integration is to seamlessly integrate these disparate sources for exchanging business transactions. Quick partner onboarding, comprehensive visibility into the supply chain and reliability are some of the major benefits of B2B integration.

Data Integration Trend 5 — Real-Time Integration to Power Business Needs

The advent of technology has made the world a smaller place. The rise of e-commerce, social media and wearables has changed the way we live and communicate with our surroundings. The one thing that remains common is data. Data is the most strategic asset of any organization, and it has led to the emergence of analytics-driven organizations.
The transition to the cloud demands real-time business intelligence and business updates, which cannot be handled by traditional methods of ETL and point-to-point integration. Businesses like e-commerce in particular, which demand real-time data synchronization across various points of sale (desktop PCs, laptops and more), need a robust integration platform that can deal with heterogeneous data sources while addressing other strategic issues: pushing data from source to destination without altering its meaning, dealing with data inconsistencies and data-value conflicts, securing the data, and auditing the integration data.

Data Integration Trend 6 — Proliferation of IoT

IoT is here to stay. According to McKinsey, IoT could have an annual economic impact of $3.9 trillion to $11.1 trillion by 2025 across many different settings, including factories, cities, retail environments, and the human body. The rise of IoT can be attributed to improved infrastructure connectivity and the presence of real-time business data. The world of IoT has flooded enterprises with applications and devices that need to be synced in real time. Enterprises will rely on data integration solutions to face the proliferation of IoT in their day-to-day operations. They will need to deploy specialized integration platforms to support complex integration scenarios and implement multiple workflows and IoT projects. Moreover, this will help provide connected experiences to end customers by offering a single view and connecting cloud systems, SaaS apps, and data.

Data Integration Trend 7 — Empowering Users for Enterprise Synergies

Since the beginning of the 21st century, IT teams have been monumental in shaping the business ecosystem and bringing it up to speed with every technological breakthrough.
Now that business horizons are expanding due to the movement towards SaaS applications, there is a consistent need to work on the connectivity issues between legacy on-prem and cloud-based applications. Deploying merely an IT team and expecting it to deliver can lead to implementation delays and errors. The need of the hour is an integration platform that empowers even a line-of-business (LoB) user to build integrations with ease and confidence. A no-code integration platform with a graphical drag-and-drop interface is needed so that anyone can build integrations at any point, anywhere. This would create employee synergies, unlocking productivity and efficiency within the enterprise.

Conclusion

Enterprise connectivity will take centre stage; Gartner predicts that through 2020, integration work will account for 50% of the time and cost of building a digital platform. Businesses will work out how to connect cloud and enterprise applications, systems and data with agility, low cost and high quality, and the customer will be at the centre of all these efforts as enterprises continue innovating and delivering. DXchange Integration Cloud is here to take on all your integration needs. Book a demo today!
https://medium.com/dxchange-io/7-key-data-integration-trends-to-watch-out-in-2020-68bd8176e8ed
[]
2019-12-18 10:52:26.279000+00:00
['Big Data', 'Cloud Computing', 'Trends', 'Ipaas', 'Integration']
How Codifying Can Help You To Communicate Effectively
How Codifying Can Help You To Communicate Effectively Simple but useful techniques that can help you communicate effectively. Photo by You X Ventures on Unsplash. Table of Contents Introduction Ways of working and other unique factors significantly impact the way communication happens within an organization. My goal here is to provide you with some tools that you can use immediately as an individual, without the need to change anything in your team or organization. These tools are meant to help you communicate effectively. Other solutions might include operational changes, which I will not cover here. Why is this important to me? Communicating effectively and consistently is an essential leadership skill. Many times in my career, I have received constructive feedback about becoming more effective in my communication. I see myself as a constant work in progress, especially in this area. I have experienced Brooks’s law over and over again in my career [1], while at the same time enjoying being a part of hyper-growth organizations. From Brooks’s law: Communication overhead increases as the number of people increases. What to expect in this post (and some terminology) To resolve communication issues, a good grasp of your workplace jargon and domain knowledge is not enough. I will highlight six different frameworks, models, or techniques that help in solving these communication issues. “Frameworks, models, or practices” is a mouthful, so I’m going to encapsulate these into ways of “codifying” that enable effective communication. These ways of codifying have been around for some time; we need to know how and when to utilize them. There are many other ways to address communication problems [2]. This blog post will focus on codifying techniques only. I will cover what I have found to be effective in my experience as a sender and as a receiver. Sender and Receiver.
For simplicity, the Sender will refer to someone who delivers the message in written form, while the communicator delivers the message verbally. The Receiver is the recipient of the message. The message can be in verbal or written form. [Back to top] Communication at work There are common communication issues that can be costly in any organization. Their root causes are often there for good reasons in the first place. Contributing factors Transparency and open communication. These are desirable traits in a healthy organization. For the most part, however, we can end up being constantly bombarded with information. We then tend to get distracted, which reduces our ability to focus on the task at hand. The Sender ends up delivering a message that is not well thought out. The Receiver will either misunderstand the message or miss its key points because it is not easy to understand. Transparency and open communication Global organizational structure, i.e., remote work, distributed teams, and working across multiple timezones. These are great for globally operating organizations that value flexible working environments and diversity of ideas. However, this limits our communication time and medium with some of our colleagues, depending on the timezone overlaps or, even worse, no overlaps. There are times when our choices for delivering a message are limited to Slack, email, internal blog posts, or other asynchronous means. Face-to-face communication is also compromised, which can make building rapport or delivering sensitive personal conversations more challenging [3]. Global organization structure mindmap. Individual communication styles. We all have our personal communication styles depending on the situation, whether verbal or written. When you read someone’s writing, especially someone you’ve worked with for an extended period, you can almost hear them speaking in your head.
Varying communication styles can sometimes cause miscommunication in diverse teams. Some folks tend to beat around the bush; others tend to be direct, while others prefer to be succinct. From words, phrases, sentences, and paragraphs to emails and blog posts, all of these can get lost in translation. We want to let our colleagues be themselves if we want to nurture a diverse workforce. However, there should be a balance between giving way to individual styles and aligning with the company’s values; as with many things, mileage may vary. Communication styles may vary depending on individual characteristics or situations. [Back to top] Common situations The factors mentioned above contribute to the situations below. You tried to rally your team to achieve Goal A, but they ended up aiming for Goal B. You gave feedback to your teammate or colleague, and you both ended up in an unproductive argument or, even worse, a damaged relationship. You are working on a project with many stakeholders, where requirements keep coming from different directions. After your performance review meeting with your manager, you felt like you could have done a better job of appraising your contributions. Or your manager could have done a better job of keeping track of your progress. You had to make a decision that affects a broader group of people and stakeholders. Inputs and requests are coming from multiple stakeholders. You find yourself in a “too many cooks spoil the broth” situation. You are about to write an email to your team, but you do not know where or how to start because there is a tremendous amount of information. Do some of these situations sound familiar to you? This is where codifying can help. [Back to top] How codifying can help Codifying provides structure Codifying removes the initial barrier that a sender faces when writing an email or a document or preparing a presentation.
When communicating verbally, codifying helps the communicator organize their thoughts by following a structure they are already familiar with. The communicator only needs to find where the idea fits in the structure. Structure gives way to predictability. Predictability makes the message easier to follow for the Receiver. If your document is divided into subsections with headings, or if your email is broken down into bullet points, the readers know what to expect when they skim, scan, or read thoroughly. When you are sharing an update during your meeting, pausing in between topics in addition to giving a clue about what you are covering next will make it easier for the listeners to follow. Predictability allows clarity to take place. Knowing what to expect in the message helps make it clearer. The familiarity of how the message flows will allow you to focus entirely on the idea being conveyed. On the other hand, imagine receiving a message that is not adequately structured and hence unpredictable. You will need to scan back and forth all over the document, or pause and think about what you just heard during the meeting, which could lead to losing focus. Clarity provides a common understanding of the message, which benefits both the Sender and the Receiver. Now that we know how codifying can help us communicate effectively, let’s go through some examples below. I will share some examples along with other similar techniques that you can research on your own. Codifying your message [Back to top] RASCI Matrix (Responsible, Accountable, Supporting, Consulted, Informed) Also called the responsibility assignment matrix. Usage When we write a project plan, review a design document, or conduct a product kickoff meeting, it is essential to clarify who the people involved are and what their responsibilities are. Providing a RASCI Matrix, especially at the beginning of a project, will help everyone involved know what is expected of them.
If this is not identified early on, the project tends to end up with overlapping or unclear responsibilities. The situation could then become “too many cooks spoil the broth.” Other times it can be the opposite, like the “bystander effect,” where those who are supposed to be involved trust that someone else will be held responsible, so they do nothing [4]. Example You are the engineering lead and manager of a team that owns the payment and checkout services of your organization. One of your goals this quarter is to support a new third-party payment provider. Payment and checkout services are depended upon by multiple teams within your organization. They are also under the close eye of your senior leadership and finance teams because they directly impact revenue. There is also a third-party solutions engineering team that you will need to work closely with to go live with the integration. Understanding the roles of the stakeholders involved is critical to the success of this project, so you have decided to agree with everyone on the roles and responsibilities involved. Here is what your RASCI matrix will look like. Responsible — You and your team (Software Engineers) are expected to complete the task, project, or epic and ensure that it is delivered on time. Accountable — You and your Product Manager will be the main points of contact with your leadership team and other teams’ engineering leads and product managers. These are the people who will be held accountable by the leadership team or project sponsor. Supporting — The third-party solutions engineering team. They will provide input on how to complete the task and can potentially help the Responsible party complete the job. Consulted — Other teams that depend on the payment services in your organization. They solely provide input on how to complete the task or what to watch out for, without necessarily helping to complete the job. Informed — Senior leadership and the finance department.
They need to be updated on the progress. Further reading DACI Matrix (Decision, Approver, Contributors, Informed) and other types of matrices. Useful for decision-making that involves multiple stakeholders [5]. RASCI example used for product engineering teams. [Back to top] STAR Format (Situation, Task, Action, Result) Usage Most of us know about the STAR format being useful in interviews. However, STAR is also helpful in preparing promotion packets, celebrating success, conducting performance reviews, or pretty much any time you need to give examples that serve as evidence for the message being conveyed. From my experience as a manager and a direct report, preparing for performance reviews can take more of your time than it needs to. Even if you know very well what you have contributed during the performance cycle, getting started on noting your contributions down or sharing them with your manager can feel like a chore. Celebrating success with your team is also best delivered with specificity as to what was achieved. Using a framework like STAR can help provide a full picture of the example situations that the Sender would like to share with the Receiver. Example Piggybacking on the same scenario as in RASCI — you are the engineering lead and manager of a team that owns the payment and checkout services of your organization. Your team has now achieved this quarter’s goal ahead of time, which was to integrate with a third-party payments service provider. You want to show appreciation by sending an email to your team and all stakeholders to celebrate the success. You’d like to ensure that you are not short-changing your team; therefore, accurately representing what they have achieved to all the stakeholders is essential. Also, being specific about what the team has achieved comes across as more appreciative than just sending a generic “Congratulations with a Thank You” message.
Situation — The goal is to support a new third-party payment service provider by the end of Q2. Challenges the team faced include working across different timezones, the many groups affected by the payment service changes, multiple stakeholders, and the criticality of the payment service to the business. Task — Update the payment and checkout services software so that it supports the new payment service provider. Work closely with the third-party solution engineers to build the integration with our services. Action — Despite the challenges faced, the team managed to coordinate with the third-party engineers early on, prepared a project plan, worked closely with the product owner to come up with the user stories, then created the subtasks needed to accomplish every sprint. The team also conducted design reviews to spot any flaws in the design. Deployment was responsibly executed in phases with proper feature flags. Result — The goal was completed before the end of the quarter, which drew positive feedback from senior leadership. Revenue numbers correlated with the new payment integration have started to climb. Further reading SCQA Format (Situation, Complication, Question, Answer). Similar to the STAR format, commonly used in storytelling through emails [6]. STAR Format. [Back to top] SMART Goal (Specific, Measurable, Attainable, Relevant, Time-based) Usage This is probably the simplest and most commonly used technique on the list. It can be used for any goal: an individual plan agreed between you and your manager, or a product goal set with your team. Example You are a mid-level software engineer working for a tech startup, approaching your third year. The first half of the year has concluded; you’ve been exceeding expectations in the last two review cycles. Now is the time for you and your manager to agree on what you want to achieve in the second half of the year as part of your career growth plan.
Part of your goal is to get promoted to senior software engineer. Here’s a simplified example of a SMART goal that you can work on with your manager: Your goal is to get promoted to senior software engineer by the end of the year. Our engineering competency framework states that you are expected to successfully lead and deliver a technical project. The goal can be achieved by directing the technical design and implementation of the payment service integration project. The integration should be rolled out in production, serving at least 80% of our live traffic. According to our roadmap and user stories, the payment service integration project is achievable within four months. Supporting this payment service provider is part of the company’s strategy for this year. Let’s dissect how this is a SMART goal: Specific — Get promoted to senior software engineer by leading the technical design and implementation. Measurable — The integration should be rolled out in production, serving at least 80% of our live traffic. Attainable — According to our roadmap and user stories, the payment service integration project is achievable within four months. Relevant — Supporting this payment service provider is part of the company’s strategy for this year, and the engineering competency framework states that you are expected to successfully lead and deliver a technical project. Time-based — Achievable within four months. Further reading CLEAR Goal (Collaborative, Limited, Emotional, Appreciable, Refinable) is another goal-setting principle that you can use [7]. SMART Goal example mind map. [Back to top] SBI Feedback (Situation, Behaviour, Impact) Usage Poorly structured feedback, no matter how well-intended it may be, can cause misunderstanding. If we are not careful, our feedback could be perceived as blame. Different people give and receive feedback in different ways.
A framework like SBI allows you to focus on the situation and how it affected you when providing your feedback. This lets the feedback receiver know what scenario you are referring to and how their behavior in that situation has impacted you. Example You are an engineering lead and manager overseeing a team of engineers. In one of your team meetings, you notice that a senior engineer on your team kept interrupting the product manager while they were sharing their plans about possible changes to the roadmap. The behavior made the product manager visibly uncomfortable. Though we can assume that the senior engineer meant no harm, because they could simply be passionate about the product, the impact on the team meeting was evident. Concerned about your senior engineer’s behavior, you have decided to provide the following feedback: During this week’s team meeting, when the product manager was sharing possible changes to the roadmap, you came across as if you were repeatedly interrupting their presentation. Though I believe this was not the intention, this made everyone in the room feel uncomfortable, which resulted in an unproductive team meeting. Here’s a breakdown of how this is SBI feedback: Situation — During this week’s team meeting, when the product manager was sharing possible changes to the roadmap, Behavior — you came across as if you were repeatedly interrupting their presentation. Impact — Though I believe that this was not the intention, this made everyone in the room feel uncomfortable, which resulted in an unproductive team meeting. You’ll notice that the feedback is tied to a specific event, which reinforces that you are concerned about a particular situation while not insinuating that it is a permanent part of the Receiver’s identity. It also clearly indicates that you are first seeking to understand, rather than accusing the Receiver of acting deliberately with bad intentions.
The stated impact of the behavior provides context about the possible future consequences if the feedback is not taken seriously. Further reading SBI-BI Feedback (Situation, Behaviour, Impact — Suggested Behaviour and Impact). This is the same as SBI, but you also include a suggested behavior with its favorable impact in your feedback [8]. SBI Feedback mind map. [Back to top] Conclusion Codifying is not a silver bullet that will guarantee effective communication. It is a set of tools that we can pull out of our toolbox to help us communicate effectively. The codifying techniques I shared above barely scratch the surface. These are only the four most common techniques that I have used, among many others. I’d encourage you to check out the other methods that you might need. [Back to top] Check out my posts about software architecture:
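For engineering teams, a RASCI matrix can even live alongside the project as structured data. Below is a minimal Python sketch, based on the payment-integration example above; the role assignments and the validate helper are hypothetical illustrations, not part of the original framework:

```python
# Hypothetical RASCI matrix for the payment-integration example,
# keyed by task, with one list of parties per RASCI role.
rasci = {
    "payment-provider-integration": {
        "Responsible": ["Payments Engineering Team"],
        "Accountable": ["Engineering Lead", "Product Manager"],
        "Supporting": ["Third-Party Solutions Engineers"],
        "Consulted": ["Dependent Teams"],
        "Informed": ["Senior Leadership", "Finance"],
    }
}

def validate(matrix):
    """Flag tasks missing a Responsible or Accountable party —
    the two roles whose absence causes the 'bystander effect'."""
    problems = []
    for task, roles in matrix.items():
        if not roles.get("Responsible"):
            problems.append(f"{task}: no Responsible party")
        if not roles.get("Accountable"):
            problems.append(f"{task}: no Accountable party")
    return problems

print(validate(rasci))  # an empty list means every task has clear ownership
```

Encoding the matrix this way makes it reviewable in the same pull-request workflow as the code it governs, so ownership gaps surface early rather than mid-project.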
https://medium.com/swlh/codify-to-communicate-effectively-422a2b8afc7c
['Ardy Dedase']
2020-09-10 00:51:32.520000+00:00
['Communication', 'Management', 'Leadership', 'Startup', 'Productivity']
Top 6 Data Science Programming Languages for 2019
Data Science has become one of the most popular technologies of the 21st century. With high demand for Data Scientists in industry, people need the right skills to become proficient in this field. Besides mathematical skills, programming expertise is required. But before gaining that expertise, an aspiring Data Scientist must make the right decision about the type of programming language required for the job. In this article, we will go through some of the programming languages required to become a proficient Data Scientist. Top 6 Data Science Programming Languages Introduction to Data Science Programming forms the backbone of software development. Data Science is an agglomeration of several fields, including Computer Science. It involves the use of scientific processes and methods to analyze and draw conclusions from data. Specific programming languages designed for this role carry out these methods. While most languages cater to software development, programming for Data Science differs in that it helps the user pre-process, analyze, and generate predictions from data. These data-centric programming languages are able to carry out the algorithms suited to the specifics of Data Science. Therefore, to become a proficient Data Scientist, you must master one of the following data science programming languages. Best Data Science Programming Languages Here is the list of top data science programming languages with their importance and a detailed description — 1. Python Python is an easy-to-use, interpreted, high-level programming language. It is a versatile language with a vast array of libraries for multiple roles. It has emerged as one of the most popular choices for Data Science owing to its gentler learning curve and useful libraries. Python’s code readability also makes it a popular choice for Data Science.
Since a Data Scientist tackles complex problems, it is ideal to have a language that is easy to understand. Python makes it easier for the user to implement solutions while following the standards of the required algorithms. Latest Features of Python Python supports a wide variety of libraries, and various stages of problem-solving in Data Science use custom libraries. Solving a Data Science problem involves data preprocessing, analysis, visualization, prediction, and data preservation. To carry out these steps, Python has dedicated libraries such as Pandas, NumPy, Matplotlib, SciPy, scikit-learn, etc. Furthermore, advanced Python libraries such as TensorFlow, Keras, and PyTorch provide Deep Learning tools for Data Scientists. 2. R For statistically oriented tasks, R is the perfect language, though aspiring Data Scientists may face a steeper learning curve compared to Python. R is specifically dedicated to statistical analysis and is therefore very popular among statisticians. If you want a deep dive into data analytics and statistics, then R is the language of choice. The only drawback of R is that it is not a general-purpose programming language, which means it is rarely used for tasks other than statistical programming. With over 10,000 packages in CRAN, its open-source repository, R caters to all statistical applications. Another strong suit of R is its ability to handle complex linear algebra, which makes R ideal not just for statistical analysis but also for neural networks. Another important feature of R is its visualization library ggplot2. There are also packages like tidyverse and sparklyr, the latter providing an Apache Spark interface for R. R-based environments like RStudio have made it easier to connect to databases; the RMySQL package provides native connectivity between R and MySQL. All these features make R an ideal choice for hard-core data scientists. 3.
SQL Referred to as the ‘meat and potatoes of Data Science’, SQL is one of the most important skills a Data Scientist must possess. SQL, or Structured Query Language, is the database language for retrieving data from organized data sources called relational databases. In Data Science, SQL is used for updating, querying, and manipulating databases. As a Data Scientist, knowing how to retrieve data is the most important part of the job. SQL is the ‘sidearm’ of Data Scientists, meaning it provides limited capabilities but is crucial for specific roles. It has a variety of implementations like MySQL, SQLite, PostgreSQL, etc. To be a proficient Data Scientist, it is necessary to extract and wrangle data from databases, and for this, knowledge of SQL is a must. SQL is also a highly readable language, owing to its declarative syntax. For example, SELECT name FROM users WHERE salary > 20000 is very intuitive. 4. Scala Scala is a general-purpose programming language that runs on the JVM and interoperates fully with Java, combining the features of object-oriented and functional programming. You can use Scala in conjunction with Spark, a big data platform, which makes Scala an ideal programming language when dealing with large volumes of data. As a Data Scientist, one must be comfortable enough with a programming language to sculpt data into any form required, and Scala is an efficient language made for exactly this role. One of Scala’s most important features is its ability to facilitate parallel processing on a large scale. However, Scala has a steep learning curve, and we do not recommend it for beginners. In the end, if your preference as a data scientist is dealing with large volumes of data, then Scala + Spark is your best option. Start Learning Scala and Spark with Industry Veterans 5.
Julia Julia is a recently developed programming language best suited for scientific computing. It is popular for being simple like Python while offering the lightning-fast performance of C. This has made Julia an ideal language for areas requiring complex mathematical operations. As a Data Scientist, you will work on problems requiring complex mathematics, and Julia is capable of solving such problems at very high speed. While Julia faced some problems in its early stable releases owing to its recent development, it is now widely recognized as a language for Artificial Intelligence. Flux, a machine learning library for Julia, supports advanced AI processes. A large number of banks and consultancy services use Julia for risk analytics. 6. SAS Like R, you can use SAS for statistical analysis. The difference is that SAS is not open-source like R; however, it is one of the oldest languages designed for statistics. The developers of the SAS language built their own software suite for advanced analytics, predictive modeling, and business intelligence. SAS is highly reliable and widely trusted by professionals and analysts. Companies looking for a stable and secure platform use SAS for their analytical requirements. While SAS may be closed-source software, it offers a wide range of libraries and packages for statistical analysis and machine learning. SAS has an excellent support system, meaning that your organization can rely on the tool without a doubt. However, SAS is falling behind with the advent of advanced, open-source software, and it is difficult and very expensive to add to SAS the more advanced tools and features that modern programming languages provide. So, these were some of the programming languages for a data scientist. Summary Data Science is a dynamic field with ever-growing technologies and tools. Since Data Science is a vast field, you must select a specific problem to tackle.
For this, you should select the programming language best suited to it. The programming languages mentioned above focus on several key areas of Data Science, and one must always be willing to experiment with new languages based on requirements. Still, if you have any queries regarding data science programming languages, feel free to ask in the comment section.
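To make the SQL example from the article concrete, here is a minimal, self-contained Python sketch that runs that exact query through the standard-library sqlite3 module; the users table and its rows are hypothetical sample data:

```python
import sqlite3

# Build a throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO users (name, salary) VALUES (?, ?)",
    [("Alice", 25000), ("Bob", 18000), ("Carol", 32000)],
)

# The declarative query quoted in the article: readable almost as plain English.
rows = conn.execute("SELECT name FROM users WHERE salary > 20000").fetchall()
names = sorted(name for (name,) in rows)  # sort, since SQL row order is unspecified
print(names)  # → ['Alice', 'Carol']
conn.close()
```

Because SQLite ships with Python, this is a convenient way to practice SQL before moving to a server-based implementation like MySQL or PostgreSQL.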
https://medium.com/datadriveninvestor/top-6-data-science-programming-languages-for-2019-39ba1b6819a8
['Aakash Kumar']
2019-04-09 03:50:42.801000+00:00
['Sql', 'Artificial Intelligence', 'Python', 'Data Science', 'Machine Learning']
How Catastrophes Change Your Anxiety
“You have dug your soul out of the dark, you have fought to be here; do not go back to what buried you.” – Bianca Sparacino I’ve battled anxiety since I was a child. When I was young, regular life was enough to send me into a panic. Being late, forgetting a schoolbook, a teacher looking at me oddly. Anything and everything was fodder for my mind to start kicking the crap out of me. Over the years, I’ve dealt with and survived the worst that life has to offer: abuse, financial problems, homelessness, the death of my long-time partner. While going through those things, the day-to-day problems that used to plague me became background noise. When something really bad was going on, like my partner’s illness leading to his death last year, day-to-day stresses were like pouring acid on a raw, exposed nerve. I was a mess for pretty much all of 2018. Huge catastrophic problems may mute your day-to-day anxiety. When you are dealing with a loved one dying, you forget you even have a stove, let alone that you used to worry about whether you left it on when you went out. Yesterday was the 10-month anniversary of my boyfriend’s death. My anxiety has still not returned to normal. I’m still anxious. But now I am anxious about big existential issues. Two things help me get through it. Breathe This is my most fundamental tool for dealing with my anxiety. I have to remember to breathe. If I lose control of my breathing and let myself start to hyperventilate, it’s too late. I’m going to have a panic attack. When I feel myself losing control, I’ve ingrained a habit of telling myself to breathe. I try to block out whatever words my brain is trying to run on repeat and just say breathe. Over and over. I focus all my attention on that one word. If my brain starts running its freak-out record again, I drown it out with the word breathe. Until I can get the mental loop to stop, I cannot get myself to physically breathe.
Once I get the panicked words to stop and only hear my own voice saying breathe, then I follow up with the action. When I was younger, I’d try to force the action over the words that were playing in my mind. That never worked for me. I am a very auditory person and also a writer. I would focus on the words that my brain was feeding me, and the action of breathing could not break through them. So, I chose to fight words with words. One word. Breathe. Perspective The reason I was able to be panicked by seemingly ordinary life stresses when I was younger is that I lacked perspective. Everything seemed so awful at the time because I did not know how much worse it could get. Now I know. It can get worse. A whole lot worse. Worse than I ever could have imagined. Having that perspective helps me to not panic about things I know will be okay. They’ve happened before and it hasn’t been a catastrophe. They can happen again and not be a catastrophe. Catastrophes happen. I will save my panic for them.
https://heatherashman.medium.com/how-catastrophes-change-your-anxiety-6515225b828f
['Heather Ashman']
2019-09-07 22:59:47.808000+00:00
['Self-awareness', 'Mental Health', 'Perspective', 'Anxiety', 'Grief']
Avoiding Organizational Debt
Leaders who can’t make tough decisions cause teams to accumulate “organizational debt.” Steve Blank, who first coined the term, described how “all the compromises made to ‘just get it done’ in the early stages of a startup…can turn a growing company into a chaotic nightmare.” But a lot of these compromises are less about the pursuit of lean productivity and more about avoiding conflict. Like the notion of “technical debt,” which is the accumulation of old code and short-term solutions that collectively burden the performance of a digital product over time, organizational debt is the accumulation of changes that leaders should have made but didn’t. The consequence of delayed optimization adds up over time. Image by O.R.Orozco — 99U The consequences of this kind of organizational debt were abundantly clear during my tenure at Adobe and my experiences working with other companies. In large companies that pride themselves on having a friendly culture and comfortable work environment, leaders are liable to refrain from causing a ruckus. Sometimes leaders opt to isolate or transfer under-performers and bad actors to other projects and teams rather than deal with the difficulty of firing them. Oftentimes, when it’s necessary to reorganize a team, leaders are dissuaded by the time it takes to plan, coordinate with HR, and communicate the change (especially if the communication involves upsetting someone). As a result, the most common decision is to not make a decision yet. Organizational debt accrues. Especially when companies are successful, the repercussions of moving people around or changing team structure and practices are amplified. The old adage “don’t fix it if it ain’t broke” becomes the law of the land. The incremental optimizations that leaders should be making never happen, and the company’s organizational debt accrues. Eventually, the mountain of organizational debt compromises the team’s operations and product.
Progress slows as people become misaligned, and motivation dwindles as bureaucracy sets in. And then a nimble start-up (or better-led competitor) outpaces you and wins. What to do? Small companies with a culture of honesty and a commitment to continuous improvement have an advantage. You should always be optimizing how you work. My friend Aaron Dignan from The Ready shared some great ideas for eliminating organizational debt in big companies, including launching a “bounty program,” where, like a bounty program to catch technical bugs, “any employee that encounters a policy or process that is hindering their ability to deliver value to the customer can submit the policy/process (and a recommendation) to the program website.” Keep an eye out for the symptoms of organizational debt. When you find yourself waiting for changes that seem obvious, speak up. When you implement a process, be sure that it isn’t an escape door from taking action. A great process advantages conviction — when people you trust on your team know what needs to be done, they should be empowered to do it. — Follow along @scottbelsky // http://scottbelsky.com // and sign up for the free ~monthly digest to see new content first.
https://medium.com/positiveslope/avoiding-organizational-debt-3e47760803a0
['Scott Belsky']
2017-01-21 23:43:02.922000+00:00
['Leadership', 'Startup', 'Project Management', 'Entrepreneurship', 'Management']
Everyone loves a great story.
Everyone loves a great story. And behind every great story, there’s an even better storyteller. Welcome to our ever-evolving community. The Narrative is the groundbreaking, independent publication that is unique in its ability to create a platform for cutting-edge, influential writers. With a forward-thinking approach, we address an eclectic mix of topics from the perspective of educated, culturally aware individuals from across the globe. Our publication places a strong emphasis on inclusivity and diversity, and we welcome contributions from all voices. Do you want to write for The Narrative? The Narrative is currently open for new submissions. To become an active contributor to our publication, please comment below with 2–3 sentences stating why you’re the perfect fit. Please note that as of 11/23/19, The Narrative has transitioned into a community-based pub where every writer is made an editor. This means that you have the privilege to self-publish posts whenever you please, without waiting for our approval. Why self-publish? As Medium’s user base keeps growing, I understand that it may become increasingly difficult to get eyeballs on your articles. This is why I decided to re-brand the publication and use a model that has been working for a lot of active users. When you self-publish, other writers will instantly receive a notification for your article. This will not only help you gain a few new followers, but it will also provide more exposure for your writing. Things to remember. As an editor of The Narrative, editing the posts of other authors without their permission is frowned upon. Please be mindful of this when contributing to the publication. The Narrative does not tolerate any hate speech or plagiarism. If you violate these terms, we will immediately revoke your participation in the publication. Supporting other writers is highly encouraged.
To keep the publication a positive space, we recommend you read the work of others, highlighting and commenting as much as you can. Thank you for your interest in becoming a part of The Narrative. We look forward to hearing your stories.
https://medium.com/narrative/seeking-storytellers-for-the-narrative-9a75e23abbb0
['Katy Velvet']
2019-12-12 08:07:18.704000+00:00
['Storytelling', 'Writing', 'Community', 'Life', 'Inspiration']
Machine Learning with Apache Spark
Big data is part of our lives now, and most companies collecting data have to deal with big data in order to gain meaningful insights from it. While we know complex neural networks work beautifully and accurately on big data sets, at times they are not ideal: sometimes the prediction problem is complex, yet predictions still need to be fast and efficient. For that, we need a scalable machine learning solution. Apache Spark comes with SparkML, which has great inbuilt machine learning algorithms that are optimised for parallel processing and hence very time-efficient on big data. In this article, we will take a simple example of a SparkML pipeline for cleaning, processing and generating predictions on big data. We will take the weather data of JFK airport and try several inbuilt classifiers in SparkML. The data set contains columns like wind speed, humidity, station pressure, etc., and we will try to classify the wind direction based on the other inputs. Let's begin by cleaning the data set using Spark. Please note, I will leave a link to my GitHub repo for this code so you don't have to copy it from here; I will, however, explain the code in this article.

from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.functions import translate, col

# spark context and session
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession.builder.getOrCreate()

# create a dataframe, using the first row as field names
# and inferring a schema based on the contents
df = spark.read.option("header", "true").option("inferSchema", "true") \
    .csv('noaa-weather-data-jfk-airport/jfk_weather.csv')

# register a corresponding query table; we do this to keep the data
# in memory and run our operations on it
df.createOrReplaceTempView('df')

# clean the data, as some columns contain trailing characters.
# Columns with no trailing characters are cast straight to double
# (a data type like float); the rest are cleaned first.
df_cleaned = df \
    .withColumn("HOURLYWindSpeed", df.HOURLYWindSpeed.cast('double')) \
    .withColumn("HOURLYWindDirection", df.HOURLYWindDirection.cast('double')) \
    .withColumn("HOURLYStationPressure", translate(col("HOURLYStationPressure"), "s,", "")) \
    .withColumn("HOURLYPrecip", translate(col("HOURLYPrecip"), "s,", "")) \
    .withColumn("HOURLYRelativeHumidity", translate(col("HOURLYRelativeHumidity"), "*", "")) \
    .withColumn("HOURLYDRYBULBTEMPC", translate(col("HOURLYDRYBULBTEMPC"), "*", ""))

# the cleaned columns can now be cast to double as well
df_cleaned = df_cleaned \
    .withColumn("HOURLYStationPressure", df_cleaned.HOURLYStationPressure.cast('double')) \
    .withColumn("HOURLYPrecip", df_cleaned.HOURLYPrecip.cast('double')) \
    .withColumn("HOURLYRelativeHumidity", df_cleaned.HOURLYRelativeHumidity.cast('double')) \
    .withColumn("HOURLYDRYBULBTEMPC", df_cleaned.HOURLYDRYBULBTEMPC.cast('double'))

# filter for a clean data set with no nulls and wind speed not 0
df_filtered = df_cleaned.filter("""
    HOURLYWindSpeed <> 0
    and HOURLYWindSpeed IS NOT NULL
    and HOURLYWindDirection IS NOT NULL
    and HOURLYStationPressure IS NOT NULL
    and HOURLYPressureTendency IS NOT NULL
    and HOURLYPrecip IS NOT NULL
    and HOURLYRelativeHumidity IS NOT NULL
    and HOURLYDRYBULBTEMPC IS NOT NULL
""")

# save the cleaned data set to CSV (writing a header so the column
# names survive the round trip)
df_filtered.write.option("header", "true").csv('clean_df.csv')

The code above is my pre-processing script, which gives me a clean data frame to work with while trying different machine learning approaches.
After looking at the raw data, I made the following observations, which needed to be addressed in pre-processing: the columns were read as strings and needed to be converted to something readable by the algorithms; certain columns had trailing characters like ‘s’ and ‘*’; and we had a significant number of null values, plus unusable values like 0 in the target column. The first part of the code deals with the trailing characters where needed and converts the other columns to ‘double’ type, which is like a float value. The second part of the code converts the cleaned columns to double type. The third part of the code filters out all the null values from the predictors and the 0 values from the target variable. Lastly, we save the file to a CSV as the clean data frame, which we will use for the machine learning operations. I have tried out a few classifiers in my main code to check which one works best; however, in this article I will only show an example of logistic regression. Again, no need to copy the code from here, as I will provide my GitHub repo link.
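Before moving on to the model code, the cleaning rules just described (strip stray characters, cast to a numeric type, drop rows with nulls or a zero wind speed) can be sketched in plain Python. This is a Spark-free illustration only: the `clean_row` helper is hypothetical, and in the actual script the work is done by Spark's `translate`, `cast` and `filter`.

```python
def clean_row(row):
    """Return a cleaned dict of floats, or None if the row is unusable."""
    cleaned = {}
    for name, value in row.items():
        if value is None:
            return None  # drop rows with missing values
        # mimic translate(col, "s,*", ""): delete stray trailing characters
        text = str(value).replace("s", "").replace(",", "").replace("*", "")
        try:
            cleaned[name] = float(text)  # the cast('double') equivalent
        except ValueError:
            return None
    if cleaned.get("HOURLYWindSpeed") == 0:
        return None  # mimic the HOURLYWindSpeed <> 0 filter
    return cleaned

rows = [
    {"HOURLYWindSpeed": "10", "HOURLYStationPressure": "29.92s"},
    {"HOURLYWindSpeed": "0", "HOURLYStationPressure": "30.01"},  # zero wind speed
    {"HOURLYWindSpeed": "5", "HOURLYStationPressure": None},     # missing value
]
clean = [r for r in (clean_row(row) for row in rows) if r is not None]
print(clean)  # only the first row survives
```

The same three rules (clean, cast, filter) apply whether the data fits in one dict or in a distributed data frame; Spark simply runs them in parallel across partitions.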
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
from pyspark.ml.feature import OneHotEncoder, VectorAssembler, Normalizer, Bucketizer
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.stat import Correlation
from pyspark.ml import Pipeline

# spark session (as in the pre-processing script)
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession.builder.getOrCreate()

# read our clean csv back in, keeping the column names
df_filtered = spark.read.option("header", "true").option("inferSchema", "true") \
    .csv('clean_df.csv')

# vector assembler: concatenate the feature columns into one vector
vectorAssembler = VectorAssembler(
    inputCols=["HOURLYWindSpeed", "HOURLYStationPressure"],
    outputCol="features")
df_pipeline = vectorAssembler.transform(df_filtered)

# checking correlations between the assembled features
Correlation.corr(df_pipeline, "features").head()[0].toArray()

# train test split
splits = df_filtered.randomSplit([0.8, 0.2])
df_train = splits[0]
df_test = splits[1]

# discretize the target with the Bucketizer, splitting the column
# into buckets covering [0, 180) and [180, infinity)
bucketizer = Bucketizer(
    splits=[0, 180, float('Inf')],
    inputCol="HOURLYWindDirection",
    outputCol="HOURLYWindDirectionBucketized")

# after the bucketizer we can do one-hot encoding (not used in the
# logistic regression pipeline below)
encoder = OneHotEncoder(
    inputCol="HOURLYWindDirectionBucketized",
    outputCol="HOURLYWindDirectionOHE")

# normalizer for the feature vector
normalizer = Normalizer(inputCol="features", outputCol="features_norm", p=1.0)

# function for accuracy calculation
def classification_metrics(prediction):
    mcEval = MulticlassClassificationEvaluator() \
        .setMetricName("accuracy") \
        .setPredictionCol("prediction") \
        .setLabelCol("HOURLYWindDirectionBucketized")
    accuracy = mcEval.evaluate(prediction)
    print("Accuracy on test data = %g" % accuracy)

# logistic regression: define the model
# (it reads the "features" column by default)
lr = LogisticRegression(labelCol="HOURLYWindDirectionBucketized", maxIter=10)

# new vector assembler for the classification features
vectorAssembler = VectorAssembler(
    inputCols=["HOURLYWindSpeed", "HOURLYDRYBULBTEMPC"],
    outputCol="features")

# new pipeline for logistic regression
pipeline = Pipeline(stages=[bucketizer, vectorAssembler, normalizer, lr])

# predictions
model = pipeline.fit(df_train)
prediction = model.transform(df_test)
classification_metrics(prediction)

The code above uses some common functionalities of SparkML which any data scientist working with Spark should know:

VectorAssembler: concatenates all the features into a single vector, which can then be passed to an estimator or ML algorithm.
Correlation: Spark provides a handy tool to check correlations for better feature engineering.
Bucketizer: bucketing is an approach for converting a continuous variable into a categorical one. We provide a range of buckets in splits, which lets us categorise the target and use classification algorithms.
OneHotEncoder: one-hot encoding is a process by which categorical variables are converted into a form that can be fed to ML algorithms to do a better job in prediction.
Pipeline: the Pipeline functionality in Spark lets you define a set of processing stages and the order in which they should be executed. Pipelines can also be saved and reused later, which makes them a great tool for scalability and portability.

In my code repository, I have used several other models, like random forests and gradient-boosted trees. I have also tried a regression problem, predicting the wind speed on the same data set. Feel free to reach out in case you have queries. Thanks! Link to my GitHub repo: https://github.com/manikmal/spark_ml
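To make the Bucketizer and OneHotEncoder steps concrete, here is a tiny Spark-free sketch of what they compute for a single column. The [0, 180, infinity) splits mirror the HOURLYWindDirection buckets used above; the `bucketize` and `one_hot` helpers are illustrative stand-ins, not the SparkML API.

```python
import bisect

# same splits as the Bucketizer above: buckets [0, 180) and [180, inf)
splits = [0.0, 180.0, float("inf")]

def bucketize(value, splits):
    """Index of the half-open bucket [splits[i], splits[i+1]) holding value."""
    return bisect.bisect_right(splits, value) - 1

def one_hot(index, num_buckets):
    """Dense one-hot vector for a bucket index."""
    return [1.0 if i == index else 0.0 for i in range(num_buckets)]

directions = [45.0, 270.0, 180.0]
buckets = [bucketize(d, splits) for d in directions]
print(buckets)  # [0, 1, 1] -- 180 falls in the upper bucket
print([one_hot(b, 2) for b in buckets])
```

Spark's Bucketizer applies exactly this kind of half-open-interval lookup to every row in parallel, and OneHotEncoder expands the resulting bucket index into a (sparse) indicator vector for the learner.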
https://medium.com/analytics-vidhya/machine-learning-with-apache-spark-1e2c0724f0a5
['Manik Malhotra']
2020-11-04 12:31:15.103000+00:00
['Machine Learning', 'Python', 'Spark', 'Sparkml', 'Big Data']
My Vegan Family Needs You to Quit Hoarding All the Tofu
I haven’t had tofu in 3 weeks. No tofu scram; no tofu teriyaki; no sandwiches filled with fried, nutritional-yeast-coated tofu slabs. My local grocery stores have plenty of toilet paper, but there’s not a tub of tofu in sight. Aside from people dying, and the possible long-term effects of social isolation on our children, I would say lack of tofu is my #3 problem. Okay, that’s not true; my list of worries is epic right now. But at a time when there’s so little we can control, my little vegan family wants the comforts of tofu: its chewy texture, its reliable nourishment, and above all just the normality it represents to us. Every morning, I witness my husband’s stress level climb one more notch as he opens the fridge to the sleepy realization that, no, he still can’t cook his usual scramble breakfast. And every day my 5-year-old daughter asks when we’re going to get more, and I have to tell her again that there’s no way of knowing. In some ways, it’s a small thing. But it’s also a daily reminder of how fast the world is changing and how little control we have over anything. Like so many people around the world, we’re under a Stay at Home order. Even before the statewide lockdown began, my daughter and I were sick, so we self-quarantined. As a result, my family’s only gotten groceries three times this month, and each time, the tofu shelves have been completely cleaned out. I live in a small farming town two hours from Seattle. I don’t know any other vegans in my town, so I doubt vegans are the ones hoarding the tofu here. Besides the missing tofu, the plant-based refrigerated protein shelf is well-stocked, like the one in the “EVERYBODY PANICKED…EXCEPT THE VEGANS” meme. All the fake meat is still there, just not the tofu: no firm, no extra-firm, not even silken. Meme using a photo by DeAnne Moore, from after Hurricane Harvey, not the coronavirus pandemic. Luckily, I was prepared for this. 
I may not have tofu, but my kitchen is otherwise stocked, because as a vegan, I’m used to fending for myself. I learned long ago, unless I provided my own food, I’d often go hungry. So now I’ve always got a LUNA Bar in my purse and 20 pounds of red lentils in my kitchen. Marla Rose feels me; she writes for Tenderly: Vegans of a certain, um, vintage, have an anxiety borne through experience that at any time, there will not be food available. Hunger pangs at weddings, dry granola bars on road trips, plates of plain baked potatoes at the office’s annual meal at a steakhouse: They’ve all left us a little nervous about going hungry and left scars on our fragile psyches. The upshot is that many of us maintain pretty well-stocked pantries (and purses, and glove boxes…) just out of habit. We’re not hoarders, mind you, but we are prepared. And thank goodness, because my grocery store is 100 percent out of dried beans as well. Seriously, now y’all are willing to eat beans? Also missing from my grocery store shelves: wheat flour. All of it. My house was already stocked with five pounds each of both white and whole wheat pastry flours, because if a vegan wants pizza, cookies, cake, or cinnamon rolls, we’re probably going to need to bake it ourselves. As a result, most vegans are well-practiced bakers. The rest of you? I can’t help but wonder if the reason my store’s out of nooch is half the flour-hoarders thought nutritional yeast must be that bread-baking “yeast” everybody’s been talking about. Nope. But you’ll love it on popcorn. At a time when there’s so little we can control, my little vegan family wants the comforts of tofu: its chewy texture, its reliable nourishment, and above all just the normality it represents to us. A week into my tofu-less existence, I texted my vegan friend in Seattle to complain. 
She said her grocery was out of all refrigerated tofu too, so she settled for some shelf-stable Mori-Nu silken tofu (which, by the way, is completely sold out on its website). “Better than nothing!” she said. I really miss tofu, but I don’t know if I’m desperate enough yet for Mori-Nu. Like, maybe if I want to bake a pumpkin pie? Tofu or not, my family’s going to be fine. Because of our constant preparedness and the wonder of food stamps, we already had ten kinds of dried beans, seven kinds of flour, and six kinds of nuts stocked in our kitchen before any of this coronavirus panic started. And now we’ve got plenty of produce too, because I keep accidentally feeling the fruit and then realizing we’re in a time of, “You touch it, you buy it.” Look, if the coronavirus pandemic is somehow getting omnivores to branch out and eat more plant-based proteins, I’m here for it. Seriously, eat that tofu. Eat those beans. I’ll be more than fine over here with my Soy Curls and Gardein Fishless Filets. I’d give up tofu forever if it meant just one more person would go vegan. But… it probably doesn’t mean that. So seriously, if you’re hoarding the tofu, chill out. Get a couple tubs. Then save some for me.
https://medium.com/tenderlymag/thanks-for-eating-tofu-but-save-some-for-me-a972937bd5ea
['Darcy Reeder']
2020-05-12 02:57:56.347000+00:00
['Vegan', 'Lifestyle', 'Veganism', 'Coronavirus', 'Food']
Keeping Your Startup Team Motivated
Keeping Your Startup Team Motivated Strategies for Startup Founders from the Dreamit Community One of the most important roles for a founder is keeping employees engaged. Study after study shows that engaged workforces outperform their competition. But every company is different and there is no one-size-fits-all approach to optimize for engagement. As a founder, you’ll have to craft a bespoke approach, but here are some methods that have proved to work for our portfolio companies AND for other startups. #1 — Communicate Your Mission from the Get-Go From the first moment you interact with a potential employee, you should be communicating your mission. “Culture starts in hiring,” states Kofi Kankam, founder of college admissions startup Admit.me. Kankam uses his own personal story in the hiring process and has built a team for his startup with people who appreciate the value of college. Many of his hires are first generation college students and some are first generation Americans. “We speak a lot about how we are changing people’s lives. We help people get into school and to change their life trajectory.” To show the tangible results of their mission, Kankam prints out pictures of the students they work with and hangs them in their office. You can also do things like showing your employees emails from the people who you help or have your customers come in to office. “Your mission gives you a framework for making decisions.” — Kofi Kankam This type of mission gives meaning to work which in turn gives employees a sense of ownership in the company. Many founders mistakenly believe that crafting a strong mission statement is enough, but the mission statement is just the first step of communicating your mission. You should constantly be reminding employees of the massive transformative purpose (MTP) of your startup, or the big problem or issue you are solving that will have a positive impact on society. 
This sense of purpose is apparent even to outsiders at the most successful organizations (think about SpaceX, TED, Tesla). To figure out the MTP for your own startup, you have to ask: 1.) Who is your startup impacting? 2.) What problem are you trying to solve? For Simon Lorenz of healthcare messaging startup Klara, this MTP means building the central communication system for healthcare. Simon writes at the end of every offer letter that he knows the person might be able to earn a higher salary at another company, but he describes the impact of the problem that the company is solving for patients to convince them to join his team. “This is fundamental for any startup,” states Lorenz. He also emphasizes the need to remind employees about the mission all the time. “If you don’t talk about it constantly, then it’s not there,” said Lorenz. #2 — Emphasize Teamwork When employees feel like they are part of a team, they feel like they are doing something greater than themselves. This feeling will not only power them through rough patches during the day but will give employees the endurance to get through the rough patches that startups inevitably face in their early years. Startup founders have begun to change the way employees are compensated to incentivize collective rather than individual outcomes. (If you want to see an example of the opposite tactic, i.e. what not to do, see what happened at Sears when the company created a hyper-competitive individualized incentive program.) To increase teamwork, you first need to build trust. Group activities are one way to build trust. Increased transparency about what everyone is doing is another. If your startup employs fewer than 15 people, you should be so tight-knit and aware of what your coworkers are doing that you’d be able to pick up the slack if something happened to any one of them.
One startup coined the term “bus factor” to describe the importance of team members being able to quickly pick up slack if one employee leaves unexpectedly (or, as the name suggests, gets hit by a bus). “When the members of a startup team are a closely knit group with shared ambitions, they have an innate respect for each other and are not keen on letting each other down. This helps in increasing the accountability of everyone’s work because peer pressure factors into the equation and motivates everyone to work harder for the team’s success. Even on days you don’t feel particularly driven to work, your concern for the team overpowers the lack of motivation and you focus on doing your work industriously.” See our recent story on creating community within your startup for further examples on team building. #3 — Offer Sufficient and Transparent Compensation Compensation is inseparable from recruiting and thus an indispensable motivating factor for team members. In an early-stage startup, founders might spend as much as 40–50% of their time searching for and convincing the right people to join the team. Inadequate compensation can be a major demotivating factor, but a lack of transparency around compensation can be even worse. You should have conversations once or twice a year around the topic of compensation. “There is nothing wrong with commission-based compensation, but if you’re only getting started, sales may be slow for a while. Make sure that you are filling in the gaps; otherwise, your team members may leave to pursue better (read: higher paying) opportunities with more established companies. Provide a sign-on bonus for the first 30 days, for example, or offer a weekly base salary to everyone who meets certain goals for their sales attempts.” See this Guide to Compensation at startups for more information on creating a fair, transparent system.
#4 — Meet with Your Team Often If your startup team feels as if they’ve been tossed out into the ocean to sink or swim, they’ll quickly lose the motivation they need to help you build your company. For this reason, it’s important that you meet with them often — perhaps even daily at first — to discuss some of the challenges the startup faces. When you actively reassure and motivate people, and when you address their concerns legitimately, they’ll keep fighting for your success. When you are meeting with your team, don’t be late for meetings and don’t reschedule. It implies to employees that you don’t care that much about them. When you’re meeting with your startup team, make sure that you’re keeping them in the loop about the current state of affairs and the future of your company. If you’re doing very well, share that with them. If things have started to take a bit of a nosedive, share that with them, as well. The goal is to show them the impact that they’re having on the company and give them a chance to have a say in things that are happening. If you do choose to share problems with your team (which you should do), wait until you have a roadmap to work through those problems. #5 — Be Transparent and Show Empathy As you scale, you have to be more transparent. If you can’t make payroll, you do NOT want employees to figure this out on their own. If your employee knows when you are being challenged, they may feel some uncertainty, but in the good times it will feel a lot better. If they are aware of your problems, they might come up with solutions that you did not think of before. If you are on the road or apart from your team, you should consider using video to communicate with the team regularly. Email is not always personal. When you speak with them, you must show empathy. In the height of the financial crisis, Dreamit CEO Avi Savar still worked as founder and CEO of Big Fuel. At the height of the crisis, the company experienced a cash crunch. 
“We had contingency plan upon contingency plan. Finally, we told the entire staff we did not want to let anyone go but we knew we had to make some cuts.” Instead of hiding the cash crunch, he proposed a 20% pay cut across all employees to get through the trying time, rather than laying off employees. He and his cofounder took a $1 a year salary. The team sacrificed collectively and did not complain, and his employees showed up in his office later and thanked him. #6 — Give Feedback (Positive and Negative) There is no one tool that will solve the problem of feedback. You have to figure out your own methodology. One important thing is meeting with everyone one on one and making sure you are asking them a few questions on a regular basis. Your intent as a manager should be to take away the obstacles in front of them. When giving feedback remember to discuss things that are fact-based, talk about how it had an effect on others, and give a recommendation on how they can improve that behavior going forward. Then offer your assistance going forward. The praise is public, the critique is private. — Kofi Kankam Further Reading: On Emphasizing Teamwork Energizing Your Coworkers Importance of Mission Full Dreamit Podcast on Motivating Your Startup Team Giving Good Feedback at Startups in Forbes
https://medium.com/dreamit-perspectives/keeping-your-startup-team-motivated-413d3dec6e59
['Charles Lacalle']
2016-12-15 21:37:58.840000+00:00
['Entrepreneurship', 'Startup']
How to Move a Mountain
I love this quote because it captures so much in such an aphoristic statement. To move a mountain may be a miracle, but it’s a miracle that comes through incredibly hard and persistent labor and (self) belief. It’s a lesson on habits. It’s a lesson in persistence. What I also find so inspiring about this book is that it gives you hope that you can and will succeed in finding your path to happiness if you stick with it. You have to try and you have to keep trying. It will happen. You can live a virtuous and contented life. The Analects by Confucius may be the most influential book of all time. As Leys states in his introduction to the Penguin edition: “No book in the entire history of the world has exerted, over a longer period of time, a greater influence on a larger number of people than this slim volume.” As the philosophy of Plato and Aristotle emerged during a period of conflict between Greek and Persian power, so too did Confucius (and Sun Tzu) emerge during the “Warring States” period of Chinese history, from roughly 475–221 BCE, which overlaps with the emergence of the Greek philosophers. Confucius lived and taught in the 6th century BCE. To put things in perspective, that’s when Buddha and Zoroaster were active, and 10 years after Confucius dies, Socrates is born. One of Confucius’ most revolutionary ideas was redefining the term junzi, meaning noble person or person of status, as anyone who was educated and moral.
https://medium.com/big-self-society/how-to-move-a-mountain-68d99c8173a0
['Chad Prevost']
2020-12-03 12:08:26.546000+00:00
['Self Improvement', 'Authors', 'Books', 'Psychology', 'Two Minute Takeaway']
The Best Thing for Your Productivity? Get Uncomfortable.
I moved to Virginia Beach, a city where I don’t know a soul besides my boyfriend and his mother. Under normal life circumstances, I would never pick this city as a destination I desired to call home. But it’s 2020, so hellooooo new normal. A decade ago, I moved to Los Angeles for my career in the entertainment industry. I swore I would never leave; I was wildly determined to “make it” in LA. I told my family they’d have to cart me off in a straitjacket to get me over state lines. Flash forward to October 1st of this year: I shipped my belongings across the country and my ass willingly walked onto a plane. My career has never been better. And it’s not because I’m some badass. It’s because I’m so damn uncomfortable. I have tossed everything in my life that made me feel safe out the window. The place I called “home” for the last decade is gone. The “normal” of my life doesn’t exist anymore. My perspective has completely shifted. The mental barriers I had set on what I thought to be true in life have dissipated. What I believe I’m capable of has expanded and the possibilities in my life seem more limitless. We avoid getting uncomfortable because… it’s not sexy or fun. It’s coupled with a lot of emotions, soul searching and change. Humans resist change, yet change is the only thing that’s certain in life besides dying. We try to create as much routine as possible and fill our schedules up with plans to predict what is going to happen. If we know what’s going to happen, we can expect how we’re going to feel, and we can try to avoid pain at all costs. When trying to be more productive, you take on the challenging task of looking at yourself in the mirror. And being honest as fuck with yourself about where you’re at in life and with your goals. It’s more “comfortable” to keep using the same tactics towards your goals, hanging out with the same people, going through the same routine of life.
And then sitting in the “woe is me” victim club, complaining about how hard you’re working and still not achieving any results. Instead of analyzing what’s working, what needs to change, and then shaking up your life. Forcing yourself to get fucking uncomfortable and changing how you’re working towards your dreams. Look at your life. What routines are you so ingrained in? Are they serving you? Or do you feel like a newborn baby that’s swaddled into an infant burrito, feeling all snug and safe? It’s time to bust out of that blanket, baby, and climb some mountains.
https://medium.com/curious/the-best-thing-for-your-productivity-get-uncomfortable-6893e4310b69
['Maddie Mcguire']
2020-11-30 22:18:58.892000+00:00
['Self-awareness', 'Self Improvement', 'Productivity', 'Goals', 'Change']
Meet The Startup That Convinced Hundreds To Drop Out of College.
“When you go out into the world, there’s no structure. … A job doesn’t give you a syllabus.” -Dale Stephens, Founder of UnCollege. The Bureau of Labor Statistics reported in 2013 that of the 30 jobs projected to grow at the fastest rate over the next decade in the United States, only five typically require a bachelor’s degree. To fill this gap, the opt-out industry has swept in, promising students a new credential to secure in-demand jobs in less time, with little damage to their wallet. These educators have a diverse set of experiential learning techniques, usually promoting personalized instruction and a focus on teaching digital skills like online marketing, coding, and data analysis. Though their exact programs differ, all of the companies lambast the traditional college education as incapable of producing productive adults. A post on Praxis’s website tells readers to “Throw Away Your Resume,” and describes how little companies care about the standard achievements of college students (leadership positions, internships, studying abroad). While this may seem counterintuitive to the average student, Praxis claims that few of them realize “that hundreds of others applying [to jobs] have the same, or similar, resume.” “They’ve all jumped through the hoops, played by the same rules, interned here and there. They’ve all been told their entire lives that they’re the best and brightest — their resumes speak for themselves, they think.” For opt-outs, colleges herd students into an antiquated, uniform way of looking for work that doesn’t highlight their individual skills or personality. Despite their careerist rhetoric, opt-out companies believe deeply in the importance of education; they just think that students learn best when they direct their own studies. That’s why many of them don’t provide courses. Instead, they ask incoming participants what interests them. After they compile a list, they’re hooked up with experts in their fields.
Praxis offers one-on-one video chats with experts in psychology, philosophy, and literature. UnCollege pairs its participants to work with a charity of their choice and a personalized career coach. Peter Thiel’s foundation tells students “how you spend your two years in the Fellowship is up to you — we’re here to help, but we won’t get in the way.” On his personal website, Derek asks skeptical students to consider a thought experiment. It’s designed to rope them into the idea that their current education, in terms of course material, is largely pointless. He calls it The Dean’s Test. Imagine you are an incoming freshman at a prestigious school. On your first day of orientation, the Dean comes to the microphone to make a special announcement. “Degrees will no longer be awarded,” he says with a smile. “School is about learning,” he goes on. “Tuition remains unchanged and students will be able to pay to take classes in their interests and desired fields like normal. The entire world will follow this policy as well.” He concludes: Do you think most students would still remain students? Would you yourself remain a student? Would your parents still tell you to stay in school? It’s an effective message, especially in the era of sky-high tuition. For most of its existence, college was a place for young adults to nurture their intellectual curiosity. If a company recruiter had walked onto a campus a hundred years ago, they’d have been driven out of the halls for trying to corrupt higher education with the nasty business of commerce. Nowadays, anyone who is willing to pay tens of thousands of dollars for the privilege of learning probably isn’t too worried about their degree getting them a job. For the majority of students, however, the cost of college needs to have a clear monetary return. That’s why Derek began to market Praxis’s fairly steep $12,000 enrollment fee as a form of “net-positive” tuition.
Praxis guarantees members a paid gig at a startup after the program ends, and claims that companies are not only willing to pay its 18–21-year-old newbies $15 an hour to learn but are actually eager to do so. After their six-month apprenticeship ends, students leave with a net profit of $2,000, a swath of professional connections, and actionable experience in their chosen field. It’s not hard to see why Derek sometimes struggles with convincing prospective members about the authenticity of his program — in 2018 the average young person left college with $30,000 in debt and took eight times as long as a Praxis participant to do so. Though Derek’s “Dean’s Test” seems to make enough sense, common knowledge suggests that you should go to college. Common knowledge also suggests that if you do go and drop out, it’s probably not because you’re the next Bill Gates or Mark Zuckerberg, but because you’re a loser. Of all the criticism that the opt-outs have had to hear, the “Zuckerberg Fallacy” reigns as the most irritating. Its logic goes like this: College is necessary to succeed unless you’re a boy-genius who is already nurturing a billion-dollar startup in your dorm room. If you’re like the average college student (poor, aimless, and confused), you should probably stay because you have nothing waiting for you if you leave. Proponents of the Zuckerberg Fallacy point to the popular statistic that college graduates, on average, enjoy more than $700,000 in additional lifetime earnings compared to nongraduates. The vast majority of America’s 30 million college dropouts are, statistically speaking, more likely than graduates to be unemployed, poor, and in default on their debt. This statistic poses a fundamental challenge to the opt-outs’ faith in individual effort. To rebut it, they’ve swarmed the internet. “Claiming the data proves college helps you make more money is like claiming basketball helps you grow taller,” Derek wrote on his personal blog.
In other words, smarter people attend college at higher rates than the less intellectually endowed. But the most effective rebuttal, opt-outs have found, is one about opportunity costs. If you buy the idea that students could be building a career for four years instead of studying, then this often-cited statistic fails to account for the potential earnings of someone with a four-year head start. But arguments like these, whether they are true or not, sometimes draw even more anger and criticism toward programs like Praxis. What value, if any, do they find in a college’s supposedly exclusive career resources, like campus recruiters or access to a global network of alumni?
https://lukejacobs.medium.com/meet-the-startup-that-convinced-hundreds-to-drop-out-of-college-43080705bf1a
['Luke Jacobs']
2019-03-22 23:28:07.394000+00:00
['College', 'Startup', 'Work', 'Education', 'Entrepreneurship']
Therapists Face Different Challenges with Telehealth
“In a moment, saving a bird” Photo by Pierre Bamin on Unsplash The pandemic has forced medical professionals, including mental health professionals, to rethink how they do their jobs. I called my client promptly at 10 am for an appointment we had scheduled. No answer. I called a few more times. About 10:10, I got this text: “In a minute, saving a bird.” Ok, then. I like birds. I wonder what kind of bird? I pictured her putting it back in a nest, or in a box in a warm place, or? I waited till about 10:30, then got another text: “Do you have time later today?” Well, of course I have time, but… isn’t it my time? Already designated for relaxing with my husband. Or taking a walk or starting dinner? Or just being quiet and alone? Boundaries get blurred Since early March, after a scare with a client who had been exposed to the virus, I have been doing therapy from home. Of course, this is safer for me and my family. I am in my 70s and live with a man in his 70s who has several risk factors as well. The thought of sitting in a small room across from people who may not be as careful as I am, talking and gesturing and laughing and crying (and how do you do therapy with a mask, for Pete’s sake?), sounded like a setup for catching and spreading the virus. I was freaked out in March. I went home, set up an office there, and began using the telephone and internet to connect with my clients. My main goal was to keep talking with my people, to console, encourage, and steady them as we all went through this pandemic for a few months. The few months turned into six with no end in sight. I didn’t want to keep paying rent and utilities for an unused office, so I decided to give it up. Now I work from home permanently. I set up some office hours, but it’s hard to stick to them, because of birds. And other things.
So I tell my client, yes, we could talk at 2:30 this afternoon. That’s within my stated office hours. Even though I had hoped to be finished before then, I can deal with it. That’s what I’m here for. Some time before 2:30 she calls to cancel, saying she’s too tired and asking whether we can talk tomorrow morning. Tomorrow is Friday, my day off. No, I’m not doing that. My usual policy with people who miss appointments for non-emergency reasons is to have a talk about the value of my time and the need to prioritize their treatment. If this were a client with private insurance, I could charge my standard $75 missed-appointment fee. That is not possible in this case. So I take the loss of income, because of a bird. But I like birds. Creative excuses When I worked at the mental health practice of the local hospital system, the staff kept a running list of creative excuses clients gave for missing their appointments. Here is a sampling of “I missed my appointment because”:
My dog just had puppies.
I’m stuck on the rotary and can’t get off.
I’m expecting my period.
I might be getting depressed.
I’m too anxious (the only reasonable one).
My kids won’t stop whining.
To which I could add: I’m saving a bird. Now that clients don’t have to drive to see me, all the excuses involving flat tires, keys locked in the car, money for gas, and the danger of driving in ice and snow in winter are invalid. Instead, there are different excuses, like these: I need to get my husband/child breakfast, the internet just went down and Joey has to go online for school, my wife can’t find the car keys, etc. And there are many more interruptions than in my pleasant, private office. I listen to children fighting and ruining their parent’s train of thought; I tolerate cats swishing grandly past the screen, tails high (and close-ups of cat butts). There are the occasional tantrums of younger children, endless arguments with older ones, requests for help from unknowing, and sometimes undressed, spouses.
On top of that, there are situations like the long-awaited plumber arriving just as we are about to start. Or a virtual meeting with a doctor rearranged for our appointment time. Of course, if my clients were organized and assertive, with good time management skills and clear priorities, meeting with me might be unnecessary. But I know they need help with these things. This is a new age for all of us, and most of us are trying our best to stay afloat, help our fellows, and figure out creative ways to solve problems. It is harder to keep my own boundaries Another problem is that, mixed in with my clients’ personal lives, now a part of each session, is my own life. And it’s harder to keep them separate. I have an office, but it’s upstairs. Sometimes I inadvertently answer a call from an unknown number, thinking it might be a repairman I’ve called, and there I am in the middle of frying okra, trying to sound professional to a potential client. My private life would be interrupted constantly if I didn’t at least try to screen calls. Even so, I am often caught apologizing to someone I’ve never met for my frustrated or too-familiar tone, and attempting to explain. I am not quite the same person as I was in the office. Maybe this is a good thing; I’m not sure yet. I find myself talking to people during their commute to work and wanting to send them a summary of our session so they remember the important points. This is probably a useful change. I am sending links to relevant online articles to clients instead of giving them handouts, saving a lot of paper, I guess. If people don’t use the internet, not always a “given” with my clients, I snail-mail things to them. That’s ok; in fact, I think we have all been exercising unused brain cells to solve some of the new problems we face. I’ve noticed that I tend to think about clients more often during the day, sending them useful links to things I read or pictures I’ve taken.
I occasionally call them at odd hours to check on them if they have been having a hard time. When I was a therapist in an office, I rarely thought of clients when I went home. I know my work is worming its way into the very core of my personal life. Is this a “boundary issue” or just how it is in a pandemic? Who knows? To sum up:
I am not quite the same person at home as I was in the office.
My clients and I know more about each other’s private lives than we did before.
It is harder to keep my personal life separate from work.
During sessions, it is harder to avoid distractions on both sides.
The pandemic has become a routine part of our check-in, sometimes part of our goals.
I feel kinder, softened toward my clients facing the same pandemic restrictions and challenges as I do. I break “the rules” more often.
I would like to hear how doctors and nurses and other caregivers perceive the difference created by the pandemic in their relationships to patients or clients. Though these people may not be working from home, there must be changes in how they perceive and accomplish their work. I know one thing. This is a new age for all of us, and most of us are trying our best to stay afloat, help our fellows, and figure out creative ways to solve problems.
https://medium.com/curious/therapists-face-different-challenges-with-telehealth-1f2062b247c1
['Jean Anne Feldeisen']
2020-09-20 09:21:00.906000+00:00
['Self-awareness', 'Work From Home', 'Work Life Balance', 'Boundaries', 'Mental Health']
What to Do When Your Pitch Gets Stolen
In the 1985 young adult novel “I’ll Take Manhattan,” 15-year-old Amanda gets burned when she shares an idea with a powerful person who profits off it without her permission. Amanda quickly learns that she was naive to share her ideas with someone in her industry. But if you’re a freelancer pitching a story to a publication, it’s not naive to share your idea — it’s simply how the industry works. And sadly, just as Amanda learned, getting taken advantage of can seem like just part of the business. “You pitch a pub, they turn down your story, and then something really, really similar runs at the same publication in short order,” freelance writer Britni de la Cretaz explained. “If you pitch an idea to a publication and they like the idea, but they don’t want you to write it, they might assign that idea to another writer,” National Writers’ Union vice president David Hill said. “It’s something that we hear about a lot, and it’s something that I personally deal with a lot as a freelancer myself. And we all scream bloody murder when it happens, but there’s very little we can do about it.” “I’m so angry right now,” writer Nylah Burton wrote on Twitter in August. “I pitched five ideas (detailed pitches I worked on for a while) to an online publication and they turned them down. No biggie. That’s the business. This publication has now published 3/5 of my pitches. All written by other authors.” As Hill suggests, Burton’s experience is by no means unique. Freelance writer Britni de la Cretaz described an almost identical scenario when she was applying for a job with a publication. An organizer for the IWW’s Freelance Journalists Union, who spoke anonymously about this phenomenon, called it “something quite a few people have brought to our attention,” adding, “It’s difficult to address … because the evidence to suggest that something is outright ‘stolen’ is difficult to put together.” This uncertainty, Burton said, is part of the frustration. 
“I still don’t really know what happened,” Burton wrote in September via Twitter. “It could have been a coincidence, or part of the editorial process that I just don’t understand because I’m a new writer.” Theft — or coincidence? These coincidences, freelance journalist and editor S.I. Rosenbaum explained, do happen — and they can look very different outside the newsroom than within it. “I had an experience where a friend pitched me a story …, and I told her ‘no,’” recalled Rosenbaum, whose career as a reporter and editor included stints at the Providence Journal, the St. Petersburg Times, the Boston Phoenix and Boston Magazine. “Meanwhile, unbeknownst to me, another staff writer was pitching a similar story to another editor, and we ran it. “There was no way I could prove to her that we didn’t take her pitch away from her,” Rosenbaum said, adding, “What probably would have been better practice would have been to say, ‘Let me see if anyone else wants it.’” Keeping those lines of communication open is just what Jesse Hirsch, managing editor at The New Food Economy, said he tries to do. Hirsch described a recent scenario in which a freelancer pitched a story very similar to one a staff writer was already working on. “That felt awkward, because I had to say, ‘Look, I swear we aren’t stealing this from you, we already have someone working on this,’ and trust he would believe me,” Hirsch said. But Hirsch acknowledged that “the line can be fuzzy.” “We recently got a pitch on cell-based seafood that was fairly general, kind of a summary of where the technology stands now and how soon we can expect it on shelves,” Hirsch said. “We chose not to take it, but that is a pretty expansive topic, and I wouldn’t say that pitch prevents us from writing about it in the future.” “Our ideas are maybe not as unique as we think they are,” writer Britni de la Cretaz agreed. As much as we may believe an idea has been stolen from us, she said, “I think it’s really hard to prove. 
You pitch a pub, they turn down your story, and then something really, really similar runs at the same publication in short order.” Protecting an idea So what can freelance writers do when they feel their ideas have been stolen? The IWW FJU has discussed the idea of “banking” pitches — creating a database for freelancers to store and register their pitches before sending them out. This model, NWU Vice President David Hill points out, is similar to the Writers Guild of America’s Script Registration service, which allows script writers to “register” a script or treatment with the Guild before sending it out. But, Hill said, “That works in their world because they have some industry density” — something both the NWU and the IWW FJU are trying to establish. Some freelancers, like Nylah Burton, are fighting back individually — either by speaking out generally about the practice, or by naming and shaming publications or editors who have engaged in this practice. “Freelancers need to know where they’re safe and where they aren’t,” S.I. Rosenbaum said. “And there should be repercussions from somebody stealing an idea or exploiting a freelancer.” ‘It can destroy careers’ But Britni de la Cretaz warns that speaking out in this fashion can be risky — especially without solid proof. “I’m all for speaking truth to power, and standing up for the little guy, but when it comes to this particular thing, it’s a damaging allegation for both parties,” de la Cretaz said, adding, “It can destroy careers.” Her advice? “Before you take to Twitter and accuse a publication of theft, reach out privately and try to see if you can figure out what happened.” Doing so in one instance netted de la Cretaz a finder’s fee of $200 — far short of the thousands of dollars she could have earned by penning the $2-per-word story she had pitched, but still, in some ways, better than nothing. 
If the publication doesn’t respond, or if their response is unsatisfactory, both the NWU and the FJU stand ready to intervene on behalf of freelancers. “We have a whole division that handles grievances,” Hill explained. “We’ll reach out to publications and try to mediate.” Know when to hold ’em Barring that, Hill suggested that writers can take steps to make their pitches less ripe for poaching. “The best thing for us to do is, when we think of a good idea, don’t run off and pitch it half-cocked,” Hill argued. “Do a little bit of work so that the pitch is something unique. Have a source, have a piece of information, something that the publication doesn’t have.” But de la Cretaz warned that this can be a fine line to tread. “If you’re too specific in your initial pitch, and you give away all the info that somebody would need to write their own story, I think that’s a risk,” de la Cretaz said. For S.I. Rosenbaum, this delicate balance is one of the frustrations of freelancing. “As a staffer, you can say, ‘I want to do something about this broad topic,’ and have the editor say, ‘OK, go do it,’” Rosenbaum said. “As a freelancer, you’re essentially an unemployed person showing up to ask questions, right up until you get the contract.” In the teen novel “I’ll Take Manhattan,” Amanda’s mix-up is easily smoothed over. But for those of us who are still in the trenches, finding our way in the media landscape can require us to be strong, determined and cunning. “Use your wiles,” advises Rosenbaum, adding, “That’s all we’ve got.”
https://emily-farmer.medium.com/what-to-do-when-your-pitch-gets-stolen-edc82b5fdde5
['Emily Popek']
2019-10-10 00:55:17.300000+00:00
['Unions', 'Writing', 'Freelance', 'Journalism', 'Pitching']
Building Real-time Streaming Applications Using .NET Core and Kafka
By: Srinivasa Gummadidala In the world of microservices, .NET Core has gained popularity because it combines the power of the original .NET Framework with the ability to run on any platform. The purpose of this post is to illustrate how to use the Confluent .NET Core libraries to build streaming applications with Kafka. As you might have already heard, Kafka is currently the most popular platform for distributed messaging or streaming data. The key capabilities of Kafka are:
- Publish and subscribe to streams of records
- Store streams of records in a fault-tolerant way
- Process streams of records in real time
- Replay streams of records as well as process them in real time
In the Kafka world, a producer sends messages (records) to a Kafka node (broker), these messages are stored in a topic, and consumers subscribe to the topic to receive new messages. Kafka is popular among developers because it is easy to pick up and provides a powerful event streaming platform complete with just four APIs: Producer, Consumer, Streams, and Connect. Let’s take a simple use case of an e-commerce company. Assume we are building a simple “Order management” API to sell products like “Unicorn Whistles.” Our objective here is to build fast and scalable backend APIs that can take more order requests and quickly process or trigger other workflows to speed up the delivery process. To address scaling individual apps and other performance-related key metrics, let’s assume that we have decided to build the two critical components below:
- An Order API (RESTful) that takes users’ orders and responds immediately with some acknowledgment info.
- A background service that actually processes these order requests.
Prerequisites:
- VS Code or another .NET code editor
- .NET Core 2.1
- Docker
- A Kafka installation with topics set up
- The kafkacat command-line tool
To install Kafka locally using Docker, please follow the instructions here.
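As a companion to the setup step above, here is a hedged sketch of creating the two topics this article uses later ("OrderRequests" for incoming orders and "readytoship" for processed ones). It assumes a broker is already reachable on localhost:9092 and that the `kafka-topics` tool from the Kafka distribution is on your PATH; the partition and replication values are illustrative, and older Kafka versions take `--zookeeper localhost:2181` instead of `--bootstrap-server`.

```shell
# Create the two topics used in this walkthrough (single local broker assumed).
kafka-topics --bootstrap-server localhost:9092 --create \
  --topic OrderRequests --partitions 3 --replication-factor 1
kafka-topics --bootstrap-server localhost:9092 --create \
  --topic readytoship --partitions 3 --replication-factor 1

# Quick smoke test with kafkacat: list broker and topic metadata.
kafkacat -L -b localhost:9092
```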
In our scenario, the Order API takes users’ order requests and stores them in a topic called “OrderRequests.” So this Order API is going to be one of the producers for the topic “OrderRequests.” Let’s build the Order API as a .NET Core Web API and add some code to produce messages to the Kafka topic. Step 1: Create a base template for the web API. Create a directory with the name “OrderHandler” and run the commands below.
dotnet new webapi -n Api
dotnet new xunit -n Test
dotnet add Test/ reference Api/
Project Structure Step 2: Create OrderController and implement a POST handler to receive users’ order requests. Note: Here, I have created a new POCO class called OrderRequest to represent the user’s order request. Refer to my GitHub repository for the complete code. Step 3: Install the Kafka package “Confluent.Kafka”:
dotnet add package Confluent.Kafka
With this, you get access to the producer and consumer APIs and their dependency classes. Step 4: Use Confluent’s “Producer” class to connect and produce messages to Kafka. How do we open the connection to Kafka brokers? Creating a producer class instance with the recommended settings will maintain a connection with all the brokers in the cluster. You should generally avoid creating multiple Consumer or Producer instances in your application.
new Producer<string, string>(this._config)
How do we produce messages to topics? Use the async ProduceAsync(…) method to write messages to one or more topics and await the result. Note: Make sure that you import the Confluent.Kafka and Confluent.Kafka.Serialization namespaces, because you need them to access the Kafka APIs. To do things the OOP way and maintain clean code, wrap the above code inside a new wrapper class (here I named it “ProduceWrapper”). Refer to my GitHub repo. Make sure that you invoke the producer wrapper code from your controller code as below. Issue the “dotnet run” command to start the web server.
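As a concrete sketch of Steps 3 and 4, a wrapper class along the lines described above might look roughly like this. This is illustrative, not the author’s exact ProduceWrapper: it uses the pre-1.0 Confluent.Kafka API that the article’s namespaces imply (`Producer<TKey, TValue>` plus `Confluent.Kafka.Serialization`), and the class name, method name, and broker address are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Confluent.Kafka;
using Confluent.Kafka.Serialization;

// Illustrative producer wrapper (0.x Confluent.Kafka API).
public class ProducerWrapper : IDisposable
{
    private readonly Producer<string, string> _producer;

    public ProducerWrapper(string brokerList)
    {
        var config = new Dictionary<string, object>
        {
            { "bootstrap.servers", brokerList } // e.g. "localhost:9092"
        };
        // One long-lived instance maintains connections to all brokers;
        // avoid creating a new Producer per request.
        _producer = new Producer<string, string>(
            config,
            new StringSerializer(Encoding.UTF8),
            new StringSerializer(Encoding.UTF8));
    }

    public async Task WriteMessageAsync(string topic, string key, string value)
    {
        // Awaiting the delivery report surfaces broker errors to the caller.
        var report = await _producer.ProduceAsync(topic, key, value);
        Console.WriteLine($"Delivered to {report.Topic} [{report.Partition}] @ {report.Offset}");
    }

    public void Dispose() => _producer.Dispose();
}
```

A controller’s POST handler would then serialize the incoming OrderRequest (e.g. to JSON) and call `WriteMessageAsync("OrderRequests", orderId, json)` before returning an acknowledgment. Note that in Confluent.Kafka 1.x and later the same idea is expressed with `ProducerBuilder<string, string>` and `Message<string, string>` instead.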
Trigger user order requests from Postman or any REST client, then verify that our requests were produced to Kafka’s “OrderRequests” topic using the “kafkacat” command-line utility. Next, use Confluent’s “Consumer” class to connect and consume messages from a Kafka topic. How do we open the Kafka connection and consume messages? Create a consumer class instance with the recommended settings and subscribe to the topic.
var consumer = new Consumer<string,string>(this._consumerConfig);
consumer.Subscribe(topicName);
Use the “Consume()” method to start reading messages from the topic. ConsumerConfig settings:
GroupId: Records will be load balanced between consumer instances with the same group id.
AutoOffsetReset: Kafka lets you consume topic records in any order; you can consume from the beginning or the latest, reset to a particular position, and so on.
EnableAutoCommit: Kafka uses a concept called OFFSET to track each consumer’s position in the entire topic consumption. When this setting is on, your consumer instance commits its offset to Kafka every time you fetch a record.
As we have already implemented a POST handler to capture user order requests in the “orderrequests” Kafka topic, we now need to build something to process these records and write to the “readytoship” Kafka topic. And this should happen in real time, meaning as soon as records arrive in the “orderrequests” topic. We can create a background service that streams records from the “orderrequests” topic. Step 1: Create a new .NET hosted service called “ProcessOrdersService”. Step 2: Use the Confluent library’s Consumer class to read messages from the topic. .NET Core lets you host background services inside a web host or a generic host; please refer to this article.
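Putting the consumer pieces above together, a minimal consume loop might look like this, again using the 0.x-era API the article shows (`new Consumer<...>` and a `Consume(out message, timeout)` call). The wrapper class name, group id, and broker address are placeholder values, not part of the author’s repository.

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using Confluent.Kafka;
using Confluent.Kafka.Serialization;

// Illustrative consumer wrapper (0.x Confluent.Kafka API),
// showing the three ConsumerConfig settings discussed above.
public class ConsumerWrapper : IDisposable
{
    private readonly Consumer<string, string> _consumer;

    public ConsumerWrapper(string brokerList, string topicName)
    {
        var config = new Dictionary<string, object>
        {
            { "bootstrap.servers", brokerList },
            { "group.id", "order-processor" },   // instances sharing this id split the partitions
            { "auto.offset.reset", "earliest" }, // start from the beginning on first run
            { "enable.auto.commit", true }       // commit offsets automatically after fetches
        };
        _consumer = new Consumer<string, string>(
            config,
            new StringDeserializer(Encoding.UTF8),
            new StringDeserializer(Encoding.UTF8));
        _consumer.Subscribe(topicName);
    }

    // Blocks up to the timeout; returns null when no record arrived in time.
    public string ReadMessage()
    {
        if (_consumer.Consume(out Message<string, string> msg, TimeSpan.FromMilliseconds(100)))
        {
            return msg.Value;
        }
        return null;
    }

    public void Dispose() => _consumer.Dispose();
}
```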
Since we already have a web host (created for the Web API), keeping our hosted service inside the Web API project is sufficient to run this service. The “dotnet run” command will start both the web server and this background service. Project Structure .NET Core HostedService Verify that messages are produced to the “readytoship” topic using the kafkacat utility. Using Confluent’s .NET Producer and Consumer APIs is simple and straightforward, which makes it easy to adopt them for real microservices/streaming apps. For detailed code, please refer to my GitHub repository. Srini is an Agile Transformation Engineer at TribalScale, based out of the Boston office, and a .NET web developer focused on micro-service-first architecture design. TribalScale is a global innovation firm that helps enterprises adapt and thrive in the digital era. We transform teams and processes, build best-in-class digital products, and create disruptive startups. Learn more about us on our website. Connect with us on Twitter, LinkedIn & Facebook!
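The hosted-service half can be sketched as a BackgroundService, which has shipped with Microsoft.Extensions.Hosting since .NET Core 2.1. The consume/process/produce calls are elided as comments because they would reuse wrapper classes like the article’s ProduceWrapper; only the service skeleton and its registration are shown.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Skeleton of ProcessOrdersService, hosted inside the existing web host.
public class ProcessOrdersService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // 1. Consume the next record from the "orderrequests" topic.
            // 2. Process the order.
            // 3. Produce the result to the "readytoship" topic.
            await Task.Delay(100, stoppingToken); // placeholder pacing for the sketch
        }
    }
}

// Registered in Startup.ConfigureServices, so "dotnet run" starts both
// the web server and this background loop:
//     services.AddHostedService<ProcessOrdersService>();
```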
https://medium.com/tribalscale/building-real-time-streaming-applications-using-net-core-and-kafka-e97b6ecb2ef1
['Tribalscale Inc.']
2019-03-18 17:24:34.493000+00:00
['Kafka', 'Net Core', 'Microservices', 'Development', 'Thought Leadership']
Is Authenticity Being Abused as a Marketing Ploy?
Genuine narratives don’t need empty buzzwords, do they? Photo by explorenation # on Unsplash Why are humans becoming brands, complete with taglines? As our interactions become increasingly monetized, everything gets commodified, including people. In a country where celebrity is the only measure of success, there’s no shortage of wannabes vying for our eyeballs and validation. To attract attention is the only goal, and it has to be achieved by any means necessary. Now that we can all avail ourselves of the megaphone that is the internet at the touch of a screen, anything goes. Applied to writing, this can lead to peculiar copy as earnestness, integrity, and humility occasionally lose out to grabby fingers. Instead of using the individual experience to humanize universal issues, this kind of copy hijacks clicks and bucks with overwrought pathos to bring readers to their knees. Whether in annoyance at such transparent greed or from being overwhelmed with empathy for a fellow human’s suffering, we’re forced to react. Often, it’s difficult to parse the emotions that arise when reading this material. Does it monetize human misery instead of inviting reflection? Or is it a genuine, and thus credible, attempt to transcend difficulties and survive?
https://asingularstory.medium.com/is-authenticity-another-marketing-ploy-777520e7a9a
['A Singular Story']
2020-12-14 12:50:32.334000+00:00
['Language', 'Internet', 'Writing', 'Psychology', 'Culture']
Nostalgia and Regret
The Field Guide to Feelings. Change is inevitable: Lessons from a whaling ship and relationships. Image by jaypofromvox, Wikimedia Near where I grew up in Connecticut, there’s an old whaling ship you can visit, the Charles W. Morgan, and learn what it was like to go to sea to hunt whales. I used to go there a lot for that feeling I get when I imagine living back in history. I was hoping for a good strong shot of nostalgia. In most cases, nostalgia is a sentiment for a period or place with happy personal associations. The way most use the term, I can’t be nostalgic about anything from the 1840s, since I never lived back then. The sixties, seventies, eighties, nineties, and oughts are within my living memory, so that’s what I should be nostalgic about. Sometimes I am. But, although I have never hunted whales, my association with whaling is so strong, due to reading Moby Dick fourteen times, that I sometimes feel I might as well have gone to sea on a whaler. One time, as I stood on the well-oiled quarterdeck of the Charles W. Morgan, nostalgia was just beginning to hit when the tour guide said something that ruined it for me. The guide said the ship had been totally renovated since it was acquired by the museum, not all at once, but bit by bit, till it seemed that barely one rope or plank of the original vessel remained. That being the case, I wondered, was it still the same ship? The answer, to my way of thinking, was no. I was reckless enough to ask the tour guide that very question. The rest of the people in the group screwed up their faces at me as the guide patiently replied, “Yes, sir. Her appearance has been restored to what it was during most of her active career.” I understood the tour guide to mean this: if they had not renovated the entire ship, everything would have deteriorated to the point where it would be unrecognizable.
We would be standing on the very same dead trees, under the very same sails, and surrounded by the same rigging, but those components would be so changed as to render the Charles W. Morgan a different ship, even though no one had altered a thing. In other words, change is inevitable. It’s impossible to keep things the way they were, so the best we can do is reconstruct them. The Charles W. Morgan could never remain the same ship that sailed in the 1840s. For that matter, I’m not the same person who used to visit that ship, for not only have the cells of my body continuously died off and been exchanged for new ones, but my mind and behavior have also changed. For instance, once, when I was a teenager, my father wanted me to play golf with him. I said I’d rather spend that time with my girlfriend, but he got his way. I played golf, but for the entire eighteen holes, I punished him by not saying a single word. Whenever he would speak, even if all he said was that I made a good shot, I would glare back in a way every teenager perfects. Remembering this, I’m embarrassed to have been such a moody, petulant teenager. What was wrong with me? What would I give to be able to play golf with my father again? What could I have been thinking? I remember that round of golf with regret. Regret is nostalgia with a painful feeling. Nostalgia and regret look very different, but they are the same mental operation, arriving at different feelings. I think about whaling with longing and I think about the golf outing with disappointment, but in both cases, I reconstruct an event. I could just as easily regret whaling if I reconstructed it differently. Those whalers did, after all, hunt many species of whales almost to extinction. Could I also be nostalgic about the golf outing, despite glaring at my father when all he wanted was time with his son?
I would need to start by understanding my state of mind during adolescence; I must reconstruct the way it was during that time, much like the restorers did with the Charles W. Morgan. I would activate a historical imagination, the ability to see the world of the past with the information and the sensibilities I had available at that time. To understand, I really must go back to the thoughts I could have had and detach myself from everything I’ve learned since then. When I was a teenager, I believed my parents were overprotective. I wanted to go on adventures, just as my father had when, at age 17, he joined the Navy to fight in World War II. I used to daydream of being born a hundred years ago, running away from home, and shipping out on a whaler. My fantasy had me returning years later, a respected harpooneer, bronzed and muscular, flush with the earnings from my voyage. Understanding my mind as a teenager takes the same effort as it would to understand the people who hunted whales almost to extinction. It’s easy to be critical of people of the past, but you don’t earn the right until you imagine living the life they genuinely lived. Diligent scholarship into the realities of the whaling industry reveals I had a romanticized version of life on those ships. Voyages were long, tedious, and dangerous. The crews of those vessels were desperate, marginalized, violent men. Almost all returned poorer than when they left, having been exploited by unscrupulous shipowners and captains. Mine was a dream of an adolescent, facilitated by a well-scrubbed, family-friendly, Disneyized museum piece sitting in the harbor at Mystic, Connecticut. It’s one thing to meticulously reconstruct the ropes and planking of a ship like the Charles W. Morgan. To know what it was really like, I would need the tour guide to grab a rope or a marlinspike and beat me till I climbed the rigging in a hurricane.
Then I’d be cast adrift in the middle of an ocean, forced to row miles, and throw darts into a beast that could kill me with one swipe of his tail. Then I would need to peel the animal’s skin off, light a fire on a wooden ship, and smell the stench of the boiled blubber. I would conclude the tour with a meal of hardtack and salt pork with weevils and spend not only the night, but the next four years of nights, sleeping on a wooden bunk on a doused and pitching ship. Then, I’d understand the men who hunted magnificent animals to near extinction. Only then could I have earned the right to be critical of their choices. To get that pleasurable feeling of nostalgia, it’s necessary to practice selective recall. To get the painful feeling of regret, it’s also necessary to be selective of what you remember. Both feelings are dependent on sampling of only the parts that support the resulting value judgement. Nostalgia and Regret in Therapy On a single day, I heard similar stories from two different clients. One was told with nostalgia, the other with regret. Both had been sexually abused by their stepfathers before they were eight. One became a cop, tough as nails, and wasn’t happy about being ordered by her commander to see me. People in therapy are whiney, weak, and looking for excuses, she said. She was fine and would be on her way. I said hold on, now that you’re here, tell me how you became so strong. Maybe then I’d be able to help my other clients be as resilient as you. She said she had been abused. She regretted being so weak and vulnerable, to allow it to happen. Now, she refuses to take crap from anyone. She can’t get close to anyone either. She couldn’t even get along with any of her partners on patrol. The second client had been abused, too, but talked about it with longing. She loved her stepfather, and she said, he loved her. 
She became the kind of person cops are sent to the home to protect from domestic violence, only to end up staying with the one that mistreats them; the kind the first client despised. Far from being tough as nails, the second client consistently made excuses for other people’s bad behavior. In both cases, the women thought they had the true story of their abuse. I hadn’t been there to witness it, but knowing what I know about the Charles W. Morgan and my own golf outing, it’s safe to say they both made very selective reconstructions of the past. Leave them alone, you might say; they both are at peace with being abused, each in her own way. What’s the point of dredging up old memories and challenging conclusions that wrap things up for them so well? It’ll only make them feel worse. It is true that, in treating people who have been traumatized, therapists often make them feel worse before they feel better. It’s no fun to look at things you’d rather not look at. I never go there without a full warning that it’s going to hurt. But the reason for doing it with these two women was this: their present-day troubles were based on incomplete and misleading restorations of the past. I had them both bring me pictures of themselves from a period just before the abuse occurred. We looked at them together. What I saw were sweet, defenseless girls. There was no way the cop at that age could have been strong enough to prevent being abused. There was no way the other woman’s affection could have been perceived as a sexual invitation. They both had been children. It’s easy, once you are grown, to forget that you were a child and what it had been like. It requires historical imagination. Even when both women handed me their pictures, they had forgotten. They looked at the child they once were and saw adults. They didn’t see what I saw. Eventually, they were both able to reconstruct themselves as children, not by seeing the pictures, but by seeing me see the pictures. 
They had to see the whole picture before they could see themselves. The point of seeing yourself is the same as knowing the truth of the whaling industry or understanding the half-baked thoughts of an adolescent. It helps us get in contact with the truth. The past is never as simple as our nostalgia or regret would have us believe.
https://medium.com/change-becomes-you/nostalgia-and-regret-15907a0c12c8
['Keith R Wilson']
2020-12-23 19:02:48.173000+00:00
['Mental Health', 'Self', 'Regret', 'Psychology', 'Nostalgia']
Feed file Generation from data stored in HDFS and exposed using Hive
Pre-requisite: Data is already stored in HDFS and exposed to the end users through Hive tables.

Note: We could also use MapReduce or Spark to generate the feed files, but that involves extensive coding, and the resulting code requires more maintenance than Hive. Since the data is already exposed as Hive tables, we can use Hive directly to generate the feed files.

In Hive, there are multiple ways to generate a text or CSV feed file from the available data:

1. Using the hive -e "query" option
2. Using the hdfs cat option
3. My personal favorite (which I used in my use case): using the INSERT OVERWRITE LOCAL DIRECTORY option

Let's discuss them one by one, with examples, pros, and cons.

1. Using the hive -e "query" option

hive -e "HIVE_QUERY" > /FEED_LOCATION/sample_feed.txt

The -e option is used to run SQL from the command line, and the result of the query is redirected to the output file, as can be seen in the below snippet. Contents of the feed can be seen in the below snippet. This command can be run from the command line, so there is no need for the user to log in to the Hive CLI, and the output file is generated on the client node.

Pros: Simple to use, and column headers can be included in the feed.

Cons: Output files are generated with a tab delimiter, and converting the tabs to commas can take a lot of time when generating huge files. There is also no provision for enclosing attributes in quotes.

2. Using the hdfs cat option

hadoop fs -cat hdfs://servername/user/hive/warehouse/databasename/table_csv_export_data/* > ~/output.csv

We can issue this command from the client node, and the output file is generated locally, as can be seen from the below snippet.

Pros: Simple, with a comma delimiter in the CSV output.

Cons: Column headers can't be generated, and no transformation or filtering can be performed while generating the feed.
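The tab-to-comma conversion mentioned in the cons of the first approach can itself be scripted. A minimal sketch with placeholder data; in practice, sample_feed.txt would be the redirected output of hive -e, not the printf stand-in used here:

```shell
# Stand-in for the tab-delimited output of:  hive -e "HIVE_QUERY" > sample_feed.txt
printf '1\tJohn\t100\n2\tJane\t200\n' > sample_feed.txt

# Convert the default tab delimiter to a comma delimiter
tr '\t' ',' < sample_feed.txt > sample_feed.csv

cat sample_feed.csv
```

Note that this is still an extra pass over the file, which is why the conversion can take a long time on huge feeds, as mentioned above.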
3. Using the INSERT OVERWRITE LOCAL DIRECTORY option

INSERT OVERWRITE LOCAL DIRECTORY 'FEED_LOCATION'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
SELECT * FROM TABLE;

OVERWRITE - This keyword overwrites everything present in FEED_LOCATION with the file generated by this command.

LOCAL - This keyword is optional. If the output file is expected to be generated in an HDFS directory, omit the LOCAL keyword; if the file is expected in a local directory, include it.

A sample query can be seen in the below snippet. The output feed file can be seen in the below snippet.

Pros:
- Any transformations or filters can be applied to the data stored in HDFS while generating the output file.
- The output file can be generated either in an HDFS directory or in a local directory, depending on the use case.
- Not all output files need to be generated with a comma delimiter; we can define a custom delimiting character while generating the output file.
- There is no extra overhead of redirecting the result to an output file; we just define the output location.

Cons:
- The file is generated with the name "000000_0", so an additional step may be required to rename the file.

Additional tips to keep in mind before generating the feed:

Hive compresses the data by default when it loads it into a table. As we need to generate uncompressed data, it is important to set the Hive parameter below:

SET hive.exec.compress.output=false;

You can always speed up generation of the feed by using the Tez engine on top of Hive:

SET hive.execution.engine=tez;

In case the attributes of the feed are required to be enclosed in quotes, you can use the OpenCSVSerde, as can be seen from the below command:

INSERT OVERWRITE LOCAL DIRECTORY 'FEED_LOCATION'
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ('separatorChar'=',', 'quoteChar'='"')
STORED AS TEXTFILE
SELECT * FROM TABLE;
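The rename step mentioned in the cons can likewise be scripted. A minimal sketch; the /tmp/feed_location path is a placeholder, and the printf line merely simulates the 000000_0 file Hive would write into the feed directory:

```shell
# Placeholder for FEED_LOCATION; simulate the file Hive generates there
mkdir -p /tmp/feed_location
printf 'a,b,c\n' > /tmp/feed_location/000000_0

# Rename Hive's fixed output name to the required feed name
mv /tmp/feed_location/000000_0 /tmp/feed_location/sample_feed.csv

ls /tmp/feed_location
```

If the query produces multiple reducer outputs (000000_0, 000001_0, ...), the same idea extends to concatenating them with cat before the rename.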
https://medium.com/analytics-vidhya/feed-file-generation-from-data-stored-in-hdfs-and-exposed-using-hive-b19f68f2b3a1
['Sumit Goel']
2020-12-20 15:03:58.081000+00:00
['Big Data', 'Hdfs', 'Data Lake', 'Data Generation', 'Hive']
The Fire From Within
Photo by John T

The Fire From Within

a poem about the passion, success and failure

.

dancing around a smoke sent free
lifted to rafters expanding
until eyes cannot see
what nose smells between walls unfolded

.

stairs snake up steps of potential
trying to outrun older flames
with a passion born from knowledge
it will end in light gray ashes

.

leap it’s the only chance I have
before the weight of those hanging
on limbs of my tired body
bring me down like an autumn rain

.

on my back I lay deserted
with liquid sheets and concrete bed
cupping eyefuls of rainwater
stabbing at my skin from heaven

.

blistered hands lay unmoved at sides
evidence of fire burning
like a good letter stayed with me
long after thin paper faded

.
https://haikulovebites.medium.com/the-fire-from-within-b5d9e42a12a0
['Haiku Love Bites']
2020-01-15 14:21:17.670000+00:00
['Poetry', 'Self-awareness', 'Motivation', 'Life Lessons', 'Life']
I Hate Being a Writer in New York City
The thing is, once that fear started to settle in, my writing collectively started to take a turn for the worse. I started to care less about making anything happen through writing. I started to put less effort into writing and I nearly lost interest in it completely. I felt inadequate, unmotivated, and irrelevant. Plus, the struggle to not compare myself to others was a losing battle in and of itself. Witnessing the overflow of new content that was published each day created this constant pressure for me to keep publishing [anything] to the point where the quality of my writing began to suffer due to the mass quantities I was producing. I started to hate calling myself a Writer. I was ashamed to even consider myself one. And the thing that I hated most was being a Writer in New York City. I hated knowing that there were so many others out there, just in this city alone, who were so much more talented, more intellectual, more creative, more focused, more driven than me. I hated it so much that I made myself become a statistic — I became another one of the many people who called it quits before things really started happening… Though, I wouldn’t say I’m down and out quite yet — after all, I did write this article. Instead, I’d say I’m taking it slow. I’m taking it down a few notches and I’m trying to find that raw passion that once drove me to writing in the first place. Maybe if I keep digging for that, then something good will come from it. Maybe.
https://lindseyruns.medium.com/i-hate-being-a-writer-in-new-york-city-25db3511b68c
['Lindsey', 'Lazarte']
2019-07-23 00:40:20.926000+00:00
['Personal Development', 'Writing', 'Life Lessons', 'Self', 'Productivity']
Why you should build a differentiated startup, not a disruptive unicorn
Why you should build a differentiated startup, not a disruptive unicorn It’s a smart time to be a flamingo. Turn the extra room in your home into a hotel to pay your rent. Use your car as a side gig. Get to know millions of people you’ve never met — all on a single platform. The digital age has been a golden era of disruption. Here’s what we didn’t anticipate: societal disruption so massive it surprised even the disruptors themselves. In 2020, we’ve learned what happens when the world outside becomes so prominently disrupted that we must return to home as haven, instead of income generator. We saw a platform meant to wholly democratize friendship disrupt the process of democracy. We witnessed when services fueling a gig economy were suddenly the only economy — and yet made it impossible to pay the rent on gigs alone. Can it be that we have suddenly entered an age in which disruption as business has run its course? According to Canadian ecologist Crawford Stanley Holling, the moment we find ourselves in isn’t just natural — it’s key to realizing the change disruption ignites. One of the conceptual founders of ecological economics, Holling offers disruption as a single phase in “Panarchy,” or the cycle of transformation. Within this model, disruption is not an end goal, but part of a shift toward the next part of the transformational cycle: reorganization. And, with it, roles like activation, pathfinding, facilitation, enhancement and connection. In a world in which proving to be socially and environmentally net positive has suddenly become necessary vs nice to have, a growing movement of startups are being built to carry out these generative roles. But that’s not all: they’re slowly, yet surely, usurping and outperforming those meant for quick, disruptive scale — giving investors reason for pause in their pursuit of disruptors alone. To better understand their approaches, I studied dozens of organizations and startups with track records of driving reorganization. 
Here are four strategic ways they’re modifying typical startup behavior in order to effectively build the new normal (and you can too!):

1. Replace win-lose scenarios with infinite games

Today’s tech giants maintain domination through disrupting the OODA loop, or “observe-orient-decide-act” cycle, of their opponents. More maneuver warfare than market insight, this approach is rooted in war theory that says winning depends not on force, but on the interruption of the enemy’s existing system. “The way to win in a battle according to military science is to know the rhythms of the specific opponents, and use rhythms that your opponents do not expect,” Japanese samurai and philosopher Miyamoto Musashi explains. While it may seem like unicorns are masters of the OODA effect, their disruptive approach typically only focuses on disrupting the system of their competitive market. In the meantime, all of their success is predicated on preserving the patterns of another system: venture capital. The fact that unicorns exist to sustain bigger, better rounds of capital, enormous amounts of equity, revenue over user-growth, accelerated timelines and significant spend makes them uniquely vulnerable when the scalability-focused system they’re built to support is disrupted — say, for instance, by a massive global force that forces them to preserve users over massive profit. To navigate change in multiple systems, consider multiple, or “nested,” systems that affect one another — making success more about navigating complexity than a win-lose scenario, suggests Robert Ricigliano, a systems and complexity coach at The Omidyar Group. Unlike the winner-takes-all Silicon Valley culture, Ricigliano suggests leaders think of success as the ability to learn and adapt in an infinite game, with progress being focused on small wins that align toward an ultimate, shared goal.

2. Seek tension vs singularity

It takes an infinite chain of trade-offs to achieve a unilateral goal.
Take Uber, for example: a massively disruptive, but terrible, business. In a world in desperate need of solutions to complex problems, unilateral value is no longer enough. Instead, solutions must integrate dogmatic and even opposing forces, viewpoints, needs or observations. “A point of tension more often than not informs an insight,” explains Landor JPAC president Nick Foley. Great, in theory. But what does it look like to practically engage with the kind of multifaceted political, social and economic tension we’re seeing today? At IDEO CoLab, a platform built to drive collaborative impact, directors Holly Bybee and Lauren Yarmuth suggest four key strategies. First, create space for tension by getting curious about it — ask “what is happening?” instead of immediately asking why. Second, give it some kind of name, even if it’s just — “I’m feeling xyz tension.” “Mapping out seemingly conflicting interests or goals is a great tool for synthesis and identifying opportunities for design,” the designers explain. Third: begin. Collaborating or making in the midst of tension or unanswered questions creates room for unexpected outcomes. Lastly, focus on opportunity for synthesis, versus what keeps elements apart.

3. Prioritize truth over time on site

It wasn’t necessarily intentional that unicorns became a unilaterally focused business. But it wasn’t intentional that they did not, either. According to Dr. Astrid Scholz, co-founder of Zebras Unite, it’s the kind of thing that happens in a world where what exists is created by and for an exceedingly select breed of people. “The evidence against toxic ‘unicorn’ company culture has only mounted,” says Scholz, whose organization supports a community of entrepreneurs, investors and allies dedicated to building companies that balance both profit and purpose.
“Exposés on sexual harassment; damning studies revealing how funders ask biased questions and speak differently about male and female founders; lack of access to capital for anyone who isn’t white and male.” It’s becoming increasingly obvious that investors are missing out. Companies with a female founder outperformed those with all-male founding teams by 63 percent. Similarly, the Small Business Administration found that investing in women-led businesses improved the performance of venture firms. But it’s more than that. It’s a function of the very elements that inform the design of how we live and thus, inform how we behave. “The current technology and venture capital structure is broken,” write Zebras Unite, a community of entrepreneurs, investors and allies dedicated to building companies that balance both profit and purpose. “It rewards quantity over quality, consumption over creation, quick exits over sustainable growth, and shareholder profit over shared prosperity. When VC firms prize time on site over truth, a lucky few may profit, but civil society suffers.” The leaders of Zebras Unite further point out that when scaling disruption is the only priority — what we are actually creating is a path through which to scale disruptive behavior. “When shareholder return trumps collective well-being, democracy itself is threatened. The reality is that business models breed behavior, and at scale, that behavior can lead to far-reaching, sometimes destructive outcomes.” 4. Look to regulation and responsibility vs rogue innovation To create a more viable future, Doteveryone, a leading think tank advocating for responsible tech, suggests starting with the concerns of talent within the supply chain, versus focusing solely on entrepreneurial vision. “Significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society,” the report’s writers share. 
“This is heightening the UK’s tech talent crisis and running up employers’ recruitment and retention bills. Organisations and teams that can understand and meet their teams’ demands to work responsibly will have a new competitive advantage.” Regulation and guidance kill innovation, according to Silicon Valley’s elite. But this research shows that regulation and guidance are actually essential ingredients for talent management, retention and motivation. In it, workers show an appetite for action in the following three areas:

- guidance and skills to help navigate new dilemmas
- more responsible leadership
- clear government regulation so they can innovate with awareness

To integrate talent as a key stakeholder, the report suggests that businesses take the following steps:

- Implement transparent processes for staff to raise ethical and moral concerns in a supportive environment
- Invest in training and resources that help workers understand and anticipate the social impact of their work
- Use industry-wide standards and support the responsible innovation standard being developed by the BSI — 78% of workers favour such a framework
- Engage with the government to share best practice and support the development of technology-literate policymaking and regulation
- Rethink professional development, so workers in emerging fields can draw on a wider skills and knowledge base — not just their own ingenuity and resources

What these insights give us is a future of industry fit to truth. One that applauds the infinite nature of our small wins, engages a world in constant tension, moves toward inherent complexity and works with, instead of against, regulation and responsibility rooted in protecting talent.
https://medium.com/rare-animals/why-you-should-build-a-differentiated-startup-not-a-disruptive-unicorn-45b3d15abe2
['Shanley Harruthoonyan']
2020-09-14 01:35:00.578000+00:00
['Social Enterprise', 'Startup Lessons', 'Startup', 'Entrepreneurship', 'Zebrasunite']
10 Ways to Make Extra Money as a Designer
Whether you’re laid off, unemployed, furloughed, or just want to make some extra money on the side as a designer, this article will give you ten ways to start making money from your design chops today. There is no magic to making money as a designer, but I’ve had personal experience with, or seen, all of the following methods be fruitful for anyone who gives them their best effort and is patient for results. As a college dropout, I needed to hustle to make a living. I’ve tried dozens of ways to make money online, everything from get-rich-quick schemes to building businesses from scratch. I ultimately found success through developing my design skills and using them to create content, build products, and sell services. In this post, I’ll review methods to earn money with design skills and share my personal experience with a few of these techniques. 1. Fiverr
https://uxdesign.cc/10-ways-to-earn-extra-money-as-a-designer-914229b9970e
['Danny Sapio']
2020-08-31 12:29:16.733000+00:00
['Entrepreneurship', 'Visual Design', 'Design', 'Graphic Design', 'Money']
The Ghost Island
It was supposed to be the perfect writing day. I was on an isolated island in the Great Lakes. Tourist season was over and it was mostly abandoned. The fall weather was working its way through the region. I was immersed in the stark northern country, which was the perfect backdrop to the stories I was working on. To make it even better, I had the cafe to myself. It was supposed to be the perfect day. Supposed to be. But now there was a woman lying unconscious on the floor. I should probably explain. There are few things better than the perfect writing place. I’ve searched my whole life for it. Back in the day, way before the publishing deal, I was just another community college student. I would spend all day writing in the crowded atrium connecting all the buildings. It was always loud. There were a million people coming and going every day. It was rough. So I was always looking for a little place to hole up in and write. I found a nook in one of the buildings. It was perfect. I’ve been chasing after the perfect place ever since. And this little cafe attached to an inn was the best yet. The furniture was handcrafted; everything in here was at least half a century old, handmade by the owner’s father. The fire was crackling nicely. I was awash in the place’s rustic charm. I was busy working when a young woman came in looking for the owner. We sat and chatted until the owner came back. However, when the owner saw the woman, she fainted. But you would faint too if you’d just seen your daughter, who drowned three years ago. I came back downstairs after taking Mary to one of the rooms. “Is she okay?” Beth asked.
“She’s fine,” I said, “She’s resting.” “I didn’t mean to scare her that bad.” “Well, she’s thought you were dead.” “I am dead.” “You know what I meant.” “I just thought no matter what she’d freak out, so I thought it was best to just get it over with.” “Why did you wait so long?” “Well, it took a while to stay like this.” “What do you mean?” “For the whole first year I barely existed. I would just appear randomly. I couldn’t control it at all. I must have scared at least a hundred people half to death.” “Interesting.” “I couldn’t see my mother like that. I didn’t want her thinking I was haunting her. Took forever to stay like this. Took even longer to be able to interact with the world. Now, for the most part, I can pass for a normal person.” “How’d you learn?” “Have you seen the movie Ghost?” “Seriously?” “No,” she laughed, “Trial and error, mostly.” “Is that how you were able to hold the coffee cup and move the chair?” “Yeah.” “How did you drink the coffee?” “I didn’t. I just pretended. I can’t really eat anything. It just falls away. Which is fine; I can’t taste it anyway.” “Bummer.” “Right?” She looked at me for a second. “You’re taking this awfully well.” “I’m a writer. I deal in weird stuff all the time.” “There’s a difference between writing about a ghost and seeing one.” “Well, you’re the expert.” Her eyes narrowed, “I’m not the first one you’ve seen, am I?” “No, just the prettiest.” “Awwww, if I could blush I would. So I’m not your first?” “First what?” “First ghost.” “No. I saw one once when I was younger.” “Do tell,” she said, leaning forward in the seat. “I was still in high school. My brother and I were exploring an old barn near our family’s cabin. It was way out in the woods on this abandoned farm. I saw a woman in the barn. It was so strange. She was just staring at me.” “Hmmmm.” “She wasn’t like you.” “Not as pretty?” “Well, no,” I said. “But like she couldn’t talk. She walked a little. And she was transparent.
You’d never mistake her for a real person.” “Did you mistake me for a living person?” “Obviously. I made coffee for a ghost. Though I did notice that the temperature dropped. I thought it was a draft.” “No, it was me. That’s something I can’t get rid of. It used to be a lot worse. I talked to one guy for an afternoon; he was shivering by the end.” “Haha, didn’t want to end the conversation, did he?” “Nope, he was willing to brave hypothermia, apparently.” “I don’t blame him.” “That was good.” “Thank you.” “You’re remarkably smooth when speaking to the paranormal.” “Perhaps I’m just smooth in general; you’d never know.” “Wouldn’t I? Maybe I’ve been keeping tabs on you.” “Oh, you were spying on me?” “Perhaps.” “In that case, you’re welcome.” “Oh jesus, you’re impossible,” she said, laughing. It was a nice laugh but hollow sounding, the laugh of someone who couldn’t experience joy anymore and was doing her best to fake it. “How long have you known my parents?” she asked. “Two years. Everything was booked when I traveled here. They had the only open room on the island. And I just kind of never left. They seemed happy for the company.” “I’m sure. They haven’t been too sad?” “They were still pretty down when I got here, which is understandable. But they’ve been doing better. They both work a lot. Too much, really.” “God, they should be relaxing. I guess they don’t have a choice with how slow everything is.” “They work to keep busy. The inn doesn’t need money.” “Oh really?” she said, looking around the empty cafe. “Can you keep a secret?” “You mean other than being a ghost?” “Yeah, dumb question, I guess.” “Everyone’s allowed one occasionally.” “Last year they were about to lose the place. Your parents wouldn’t take any money from me.” “Sounds like them.” “So they became the recipients of the Northern Michigan Small Business Hotel and Hospitality Grant.” “Ahhh, and does this association really exist?” “It does.
He’s sitting right across from you.” “Oh, now that’s clever.” “Though I think they caught on. That’s why I get free food. We keep the charade going.” “I’m sure they’re thankful.” “Well, I get to eat my weight in baked goods, so I call it even.” “Good. I’m glad to know they’re taken care of. It makes it easier.” “I’m sure seeing you will help.” “It might, but I might just make things worse. I’m stuck like this. They’ll get older and move on knowing their daughter is stuck here. No family. No kids. Just a whisper in the wind.” “Is there anything we can do?” “From what I can tell, spirits tend to linger if something happened to them when they died. There’s something keeping me here.” “As in?” “I’m pretty sure someone killed me.” “Pretty sure?” “That’s the thing. I remember most of my life, but the last few days are pretty blurry. Almost everything else came back, but I don’t remember anything about how it happened. I just remember the water.” “That makes sense. They found you in Lake Michigan.” “Did they ever say how I got there?” “No. Your parents said there was an investigation, but there was no evidence and nothing to go on, so it never went anywhere. But-” “But what?” “The rumor was your boyfriend had something to do with it.” “Oh my god, Kyle.” “Yeah.” “Is he still here?” “No. He left the island. He was shunned by everyone. They all assumed he had something to do with it.” “I haven’t been able to find him.” “You can find people?” “I can, like, sense their energy. Like I was able to find my parents. But Kyle was part of the fuzzy part. I couldn’t remember his name until you said something.” “That’s probably a sign.” “Maybe.” “Your parents didn’t like him.” “I know. That’s why they always kept a room open here for me.” Then it dawned on me, “The room I’m staying in. It was still open even in summer.” “Bingo.” “I’m sorry.” “No worries.” I got my notebook out.
“So what do you remember about the day?” “Not much,” she looked at the notebook, “You’re going to play detective now?” “I do write mysteries for a living.” “But have you solved one?” “One. Remember the ghost I mentioned.” “Yes.” “I did some digging later when I got older and first started writing. I put enough together that I wrote an article saying that her husband killed her. It must have helped because I’ve gone back to the barn and she’s no longer there. That article eventually became my first book.” “Michigan Winter. It did sound awfully real when I read it. So you think you can do it again?” she said, getting up. “I can try.” “I’m going to need more than that.” “I’ll figure it out. I’ll do it for Lawrence and Mary.” “But not for me?” “I hardly know you,” I said, smiling. “I know, but I’m very pretty; it usually makes guys do things for me,” she said, comically fluttering her eyelashes. “Yes, but they usually have something else in mind. I don’t think that’s an option for ghosts,” I said. “I know,” she said, “But you’d be surprised what I can still do.” She kissed me on the cheek. It was like the winter wind brushing against my face. “Where do we start?” she asked. “Your boyfriend’s house. That’s where you were living, right?” “Yes.” “Let’s go.” We walked the half mile to her old house. The fog was coming in off the lake, enveloping the small island. It added an additional spooky element to our travel. That and my ghost companion. We made it to her old house. It was a nice-sized old Victorian home down the road from any of the others. “I haven’t seen this in so long.” “No one comes here anymore. They were never able to sell the house.” “I know how I’m getting in, but what about you?” “Easy,” I said, giving the door a hard shove with my shoulder. It was so old that the wood gave way around the lock.
“I’m not gonna lie, that was kinda hot.”
“Whatever floats your boat.”
“That’s really not nice to say to someone who drowned.”
“Sorry.”
“Relax. If you don’t have a sense of humor about it, you’ll go crazy.”
“How do I know you’re not already crazy?”
“You’re the one talking to a ghost. You could be the one going crazy.”
“Fair enough.”
We walked through the old house. Everything was covered in three years’ worth of dust.
“Your parents took most of your stuff to their house,” I said.
“I’m not really looking for my stuff. I’m just trying to remember.”
She was looking at an old picture when we heard the stairs creak.
“Oh my god,” I said.
“What?” she asked.
“Beth, you’re not the only ghost on this island.”
https://medium.com/the-inkwell/the-ghost-island-2a3afbc86a3c
['Matthew Donnellon']
2020-11-27 05:06:20.058000+00:00
['Books', 'Writing', 'Relationships', 'Fiction', 'Short Story']
Stop Negativity Bias From Limiting Your Success
How to Make Negativity Bias Work for You

Growth mindset

A considerable number of the negatives one could be dwelling on will be about personal interactions. It’s a vast category indeed, but the list could include:

- Criticisms or feedback received from someone else
- An argument or disagreement with someone
- An action taken or not taken
- An interaction with someone that could have potentially gone better

Whatever took place, remember to approach it with a growth mindset. That is, play the scenario again in your head and figure out how to approach it better next time; look for iterative and continuous improvement. Then stop! Try not to think about it anymore, and simply move on.

Limit your exposure

Negativity bias, unfortunately, is not limited to your personal interactions. Any negative information that reaches you can have this effect. In this day and age, we are constantly bombarded by breaking news, and regrettably (as the age-old adage goes) if it bleeds, it leads. That is, taking advantage of negativity bias, news outlets strategically choose their headlines and topics to maximise the chances of grabbing your attention. If you find this is having a negative effect on you, consider limiting your consumption not only of news but also of social media, where news can easily be shared.

Relationships

Negativity bias is, of course, also relevant to relationships. Both personal and professional relationships can suffer because of it. It has been thought that for a successful marriage, there needs to be a 6:1 ratio of good to bad. In other words, make sure to highlight all the sweet and beautiful things that you observe: it takes six compliments to undo the one sarcastic comment you made in the heat of the moment. The same goes for professional relationships.
As noted in the Harvard Business Review article, the highest-performing teams gave a 6:1 ratio of positive to negative feedback to one another. For medium-performing teams the ratio was 2:1, and low-performing teams made more negative comments than positive ones. At the same time, when someone was seriously underperforming, direct negative feedback was found to be the most effective way to bring performance back to par.

Take a risk

Negativity bias is also responsible for what is known as loss aversion. That is, most people will play not to lose rather than play to win. In other words, there is once again an asymmetric relationship between the enjoyment we get out of a win and the pain we experience from a loss, which immediately explains why a lot of people are risk-averse in their decision-making. Knowing that loss aversion exists, however, should empower you to look into which risks you can safely take on for an extra boost.

Tell your story

Finally, with negativity bias affecting how people see results, it’s easy to imagine how your management team may not see things the same way you do. It takes one bad thing, one missed deadline, to completely overshadow all your hard work and continuous overperformance. So how do you deal with it? Easy! Make sure to tell your story. Capture all your successes and all the value you add. Clearly articulate the benefits you have brought about. Ensure that by the time you are done telling your story, the negatives are simply a parenthesis: explain what you have learnt from them and how you will avoid them next time.
https://medium.com/better-programming/stop-negativity-bias-from-limiting-your-success-a3398559d8b8
['Costas Andreou']
2020-05-29 14:00:08.895000+00:00
['Leadership', 'Productivity', 'Programming', 'Psychology', 'Management']
Crossing the Metaphors
It means nothing, yet everything came from it.

Photo by Paul Murphy on Unsplash

For a very long time, I struggled to write linearly on unruled pages. So my teachers and my parents combined their best efforts to keep me in line. They taught me to draw lighter straight lines with a ruler, at the top and the bottom, and my words learned to stay within their boundaries. With time, these preparatory lines grew lighter and soon faded away. A little later, I saw my dad write official letters at home on unruled sheets. This was the time before electronic mail. He had a nice trick he used to create vertical margins on the paper: he folded the page lightly on both sides and used these creases as a reference to write in his glorious handwriting. I was too young to make sense of the content, but in those letters he never swayed outside of the uninked margins, and the letters looked pretty. I grew up, and I carried these imaginary lines with me and laid my words upon them. They kept my words from falling or flying away while stacking neatly one over the other.
https://medium.com/afwp/crossing-the-metaphors-fc6c2130ae62
['Pratik Mishra']
2020-05-26 15:47:31.068000+00:00
['Storytelling', 'Motivation', 'It Happened To Me', 'Self', 'Life']
EDA — EXPLORATORY DATA ANALYSIS. DEEP DIVE INTO EDA
Till now I haven’t explained what EDA is, and for those who have come this far: CONGRATULATIONS!!! We will now start the EDA process. A heads-up before we dive in:

• There are no thumb rules for EDA, nor is there a designated template
• It depends entirely on your data, and the EDA will take its own course based on it
• Interrogate the data and it will speak

I’ll try my level best to get you started; consider this a kick start for any of your EDA.

Photo by Jouwen Wang on Unsplash

So, the million-dollar question: what the heck is EDA? In layman’s terms, it’s a process where you try to get meaningful insights from given data. EDA is a process that gives you confidence in your data and leaves you in a better position to build your model. The “D” (data) in EDA is broadly classified into structured and unstructured forms. Structured data is basically your tabular data, which carries literal meaning in its current form, like sales data, banking transactions etc. Unstructured data would be your video, image and audio files. We will focus on structured data in this post to understand the EDA process. Structured data is further divided as shown below. (Source — Intellspot.com)

I will be using Kaggle’s Beer Recipes data to move ahead with EDA. Please note that the data wouldn’t be the same as what we have on Kaggle. Also, try to keep visuals simple (2D) to understand; fancy stuff looks good, but is hard to grasp without explanation.

LET’S GET STARTED

a. Importing Python Libraries

# Importing required libraries.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# magic line to get visuals on the same notebook
%matplotlib inline

Use the code below only if you are using Google Colab. You will get a “Choose file” button with which you can upload the data.
from google.colab import files
upload = files.upload()

To check whether the file has been uploaded properly, run the code below. It will display all the files available in your current working directory.

!ls

b. Downloading the Data from Kaggle

If you are going to use this notebook more than once, it is preferable to delete the files you are going to download and decompress, to avoid errors and problems.

! rm -f beer-recipes.zip
! rm -f recipeData.csv
! rm -f styleData.csv

https://github.com/Kaggle/kaggle-api

To download, we use Kaggle’s API:

! kaggle datasets download -d jtrofe/beer-recipes

Unzip the downloaded file:

! unzip beer-recipes.zip

c. Loading the Data

Now let’s load the file into a pandas data frame using the code below. Please note that in this problem the data was provided in CSV format, hence we have used the “read_csv” method. However, in the real world you will be provided with different formats and will have to use a different method based on the file format. GOOGLE IT!!!!!!!!

raw_data = pd.read_csv("recipeData.csv", encoding='latin-1')

Until here, you have just loaded the raw data and have no idea what’s in this file. Let’s now peek into the data using the codes below. This will help you get the gist of how the data in the provided file looks. The number inside the brackets can be changed based on how many lines you want; by default, it will throw 5 lines. Head is for lines from the top and Tail is for lines from the bottom.

raw_data.head()
raw_data.tail(2)

# provides the no. of rows/columns
raw_data.shape
(73861, 23)

# Select categorical variables
raw_data.describe(include=np.object)

d. Checking Data Types

For now, looking at the first few lines, we can assume that all the columns are of a float (with decimals) nature. Columns in machine learning are called Features or Independent Variables, and the target column (the column we want to predict) is called the Label/Dependent Variable. Let’s confirm our assumptions.
The code below will tell you the data type of every field in your data.

raw_data.dtypes

BeerID            int64
Name             object
URL              object
Style            object
StyleID           int64
Size(L)         float64
OG              float64
FG              float64
ABV             float64
IBU             float64
Color           float64
BoilSize        float64
BoilTime          int64
BoilGravity     float64
Efficiency      float64
MashThickness   float64
SugarScale       object
BrewMethod       object
PitchRate       float64
PrimaryTemp     float64
PrimingMethod    object
PrimingAmount    object
UserId          float64
dtype: object

With these two lines of code you now have a brief idea of what your data is and its structure. Please note that the meaning of each column is domain-specific, and before starting EDA you should get hold of these meanings. Do a lot of research to understand the implications of these features.

Let’s now run a code to check certain statistics on each feature/column.

raw_data.describe()

With this one line of code you get most of the statistical inferences for your data:

• Count – number of items in that column
• Mean – average of that column
• Std – standard deviation
• Min – minimum value in that column
• Max – maximum value in that column
• 25% – 25% of your data is below this value
• 50% – 50% of your data is below this value
• 75% – 75% of your data is below this value

From this we can infer that not all features/columns have all the data: we have some missing values. And with the percentile numbers and the mean, we can gauge the central tendency of the data. Please note that describe will only provide data for quantitative columns; for categorical ones you need to mention include="all" in the brackets: raw_data.describe(include='all').

Now, we know this is a classification problem, where we need to predict the brew method based on the features. In the real world you normally get the entire set of data and divide it into Training, Validation and Test sets.
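The missing values that describe() hints at can be handled by imputation rather than by dropping rows, which is the route taken later in this article. Here is a minimal sketch of the mean/median/mode options; the toy column names are hypothetical, not taken from the beer dataset:

```python
import pandas as pd
import numpy as np

# Toy frame with gaps (hypothetical data, for illustration only)
df = pd.DataFrame({"gravity": [1.05, np.nan, 1.07, 1.06],
                   "style": ["IPA", "IPA", None, "Stout"]})

# Numeric column: fill with the median, which is robust to outliers
df["gravity"] = df["gravity"].fillna(df["gravity"].median())

# Categorical column: fill with the mode (most frequent value)
df["style"] = df["style"].fillna(df["style"].mode()[0])

print(df.isnull().sum().sum())  # 0 missing values remain
```

Whether to impute or drop depends on how much data you would lose by dropping and on whether the imputed value would distort the distribution.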
In this post we do EDA only on the training data to keep things simple. But if you make any changes to the training data, like renaming a column or converting a column’s values, make sure you make the same changes to the test data.

Now, let’s check our target variable: how many rows are “All Grain”, “extract”, “BIAB” or “Partial Mash”, and whether there are any more categories in the target variable.

sns.catplot(x="BrewMethod", kind="count", data=raw_data)
raw_data.BrewMethod.value_counts(1)*100

All Grain       67.277724
BIAB            16.268396
extract         11.678694
Partial Mash     4.775186
Name: BrewMethod, dtype: float64

It’s now evident that we have only four classes in the target variable (“All Grain”, “extract”, “BIAB” and “Partial Mash”); however, 67% of the data is “All Grain”, 16% “BIAB”, 11% “extract” and only 4% “Partial Mash”. It’s highly imbalanced data. When I say the data is imbalanced, it means one category is dominant, and when we build the model it is most likely to predict that category. Ideally, anything within an 80:20 proportion is not considered imbalanced, and in the real world you will often get imbalanced data. How to treat this imbalance is beyond the scope of this post.

e. Dropping Columns

At times you get data where not all columns are relevant. Though we don’t have any irrelevant column as of now, I’ll show you how to remove one. Below, axis=1 means columns and axis=0 means rows.

raw_data = raw_data.drop(["BeerID"], axis=1)

f. Changing Column Names

When you pull data from different systems, there is always a chance of some weird column names, so to change a column name we can use the code below; it is always best practice to use meaningful names. I’ll show you how to rename a column in this case.

raw_data = raw_data.rename(columns={"StyleID":"ID"})
raw_data.head()

g. Checking Duplicate Rows

Most of the time when we pull data, there is a high probability of duplicate entries, and identifying those at the start is beneficial.

duplicate_rows_raw = raw_data[raw_data.duplicated()]
print("Duplicate rows: ", duplicate_rows_raw.shape)

Duplicate rows: (0, 22)

duplicate_rows_raw

h. Checking Missing Values

A missing value in a field may be due to incorrect data entry, or because the data is not mandatory in that case. To get good model accuracy, we need to treat these missing values: either drop them or replace them with the mean, median or mode, depending on the scenario.

sns.heatmap(raw_data.isnull(), yticklabels=False, cbar=False, cmap='viridis')

# Drop columns we won't use
raw_data = raw_data.drop(["Name","URL","ID","PrimingMethod","PrimingAmount","UserId","MashThickness","PitchRate","PrimaryTemp"], axis=1)

# Finding the null values.
total = raw_data.isnull().sum().sort_values(ascending=False)
print(total)

BoilGravity    2990
Style           596
BrewMethod        0
SugarScale        0
Efficiency        0
BoilTime          0
BoilSize          0
Color             0
IBU               0
ABV               0
FG                0
OG                0
Size(L)           0
dtype: int64

# Drop null values
raw_data = raw_data.dropna()
total = raw_data.isnull().sum().sort_values(ascending=False)
print(total)

BrewMethod     0
SugarScale     0
Efficiency     0
BoilGravity    0
BoilTime       0
BoilSize       0
Color          0
IBU            0
ABV            0
FG             0
OG             0
Size(L)        0
Style          0
dtype: int64

raw_data.info()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 70517 entries, 0 to 73860
Data columns (total 13 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   Style        70517 non-null  object
 1   Size(L)      70517 non-null  float64
 2   OG           70517 non-null  float64
 3   FG           70517 non-null  float64
 4   ABV          70517 non-null  float64
 5   IBU          70517 non-null  float64
 6   Color        70517 non-null  float64
 7   BoilSize     70517 non-null  float64
 8   BoilTime     70517 non-null  int64
 9   BoilGravity  70517 non-null  float64
 10  Efficiency   70517 non-null  float64
 11  SugarScale   70517 non-null  object
 12  BrewMethod   70517 non-null  object
dtypes: float64(9), int64(1), object(3)
memory usage: 7.5+ MB

# Convert object columns to categorical
raw_data['SugarScale'] = raw_data['SugarScale'].astype('category')
raw_data['BrewMethod'] = raw_data['BrewMethod'].astype('category')

We can clearly observe that two columns (BoilGravity and Style) have missing values. Now it’s a call you have to make: should we impute the columns that have missing values, or replace them with some constant or random value? That is a different topic altogether, and I will come up with a post that tackles it. Please note that at times the above code will show zero missing values, yet there may be garbage values in the fields that Python considers valid entries; if that is the case, you need to meet the domain expert to check those values.

i. Detecting Outliers

Outliers are data points that are not in sync with the rest of the data. In this task we need to identify the outliers and check with the domain expert on how to treat them: are they due to error, or a one-off case? Treating outliers helps in simplifying the data. The code below shows visually where each variable stands in terms of outliers.

plt.figure(figsize=(5,5))
plt.title('Box Plot', fontsize=10)
raw_data.boxplot(vert=0)

It is quite evident that all the variables have outliers, and this will usually be the case in the real world; a few variables, however, contribute the bulk of them.

j. Scaling the Data

The process of standardizing corresponds to putting all the information on the same scale. We are going to use the sklearn library.

from sklearn import preprocessing

# select numeric columns
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
newdf = raw_data.select_dtypes(include=numerics)
newdf.head()

x = newdf.values  # returns a numpy array

min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(newdf)
data_scale = pd.DataFrame(x_scaled)
data_scale.columns = ['Size(L)','OG','FG','ABV','IBU','Color','BoilSize','BoilTime','BoilGravity','Efficiency']
data_scale

# Now check outliers
plt.figure(figsize=(5,5))
plt.title('Box Plot', fontsize=10)
data_scale.boxplot(vert=0)

k. Checking Distribution

Distribution is one of the methods to check how your variables are spread and whether we can treat them to simplify the data sets.

raw_data.hist(figsize=(15, 5), bins=50, xlabelsize=5, ylabelsize=5)

Looking at the above plot, it is evident that there is a bit of skewness in every variable, and a few of them are heavily right-skewed.
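The right skew visible in the histograms can also be quantified numerically with pandas’ skew(); here is a small self-contained sketch on synthetic data (not the beer columns), including a common log transform to tame the skew:

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)

# A roughly symmetric column and a heavily right-skewed one
df = pd.DataFrame({"symmetric": rng.normal(size=1000),
                   "right_skewed": rng.lognormal(size=1000)})

print(df.skew())  # near 0 for the symmetric column, clearly positive for the skewed one

# A log transform is one common way to reduce right skew
df["log_skewed"] = np.log1p(df["right_skewed"])
```

A skewness near zero suggests a roughly symmetric distribution; large positive values confirm what the histogram shows visually.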
data_scale.hist(figsize=(15, 5), bins=50, xlabelsize=5, ylabelsize=5)

As you can see, the distribution of the data with or without scaling is the same.

l. Finding Correlation

A major weightage of EDA is on correlations. Correlations are often really interesting, and are used to find the features that are highly correlated with our dependent variable, but not always.

# Get the n most frequent values
n = 5
list_top = raw_data['Style'].value_counts()[:n].index.tolist()
list_top

['American IPA', 'American Pale Ale', 'Saison', 'American Light Lager', 'American Amber Ale']

sns.catplot(x="Style", kind="count", data=raw_data)
raw_data.Style.value_counts(1)*100

American IPA            16.370521
American Pale Ale       10.391820
Saison                   3.584951
American Light Lager     3.229009
American Amber Ale       2.746855
...
Lichtenhainer            0.008509
Apple Wine               0.008509
Pyment (Grape Melomel)   0.005672
Traditional Perry        0.002836
French Cider             0.001418
Name: Style, Length: 175, dtype: float64

data = raw_data[raw_data['Style'].isin(list_top)]
sns.catplot(x="Style", kind="count", data=data)
data.Style.value_counts(1)*100

American IPA            45.069103
American Pale Ale       28.609354
Saison                   9.869603
American Light Lager     8.889670
American Amber Ale       7.562271
Name: Style, dtype: float64

# Select categorical variables
data.describe(include=np.object)

The heatmap below clearly indicates how the independent variables are correlated; if two independent variables are highly correlated, we can drop one of them, as a single one is sufficient.

# reduce dataset size
data_sample = data.sample(n=1000, random_state=1)

plt.figure(figsize=(12,7))
sns.heatmap(data_sample.corr(), annot=True, cmap='Blues')

# Drop the redundant / unneeded columns
data_sample = data_sample.drop(["OG","FG","Size(L)","SugarScale","BrewMethod","BoilTime","BoilGravity","Efficiency"], axis=1)
data_sample.info()

<class 'pandas.core.frame.DataFrame'>
Int64Index: 1000 entries, 25893 to 44807
Data columns (total 5 columns):
 #   Column    Non-Null Count  Dtype
---  ------    --------------  -----
 0   Style     1000 non-null   object
 1   ABV       1000 non-null   float64
 2   IBU       1000 non-null   float64
 3   Color     1000 non-null   float64
 4   BoilSize  1000 non-null   float64
dtypes: float64(4), object(1)
memory usage: 46.9+ KB

sns.pairplot(data_sample, hue="Style")

The image is not clearly visible here; you can check it on the Colab notebook I’ve shared at the end of this module. From this graph, we can clearly figure out the relation between two variables and how it impacts the output.
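The “drop one of each highly correlated pair” step can also be done programmatically instead of by eye. A hedged sketch on toy data follows; the 0.9 threshold is an arbitrary choice of mine, not a value from the article:

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=500)

# Toy frame: "b" is almost a copy of "a", while "c" is independent
df = pd.DataFrame({"a": a,
                   "b": a + rng.normal(scale=0.01, size=500),
                   "c": rng.normal(size=500)})

corr = df.corr().abs()
# Keep only the upper triangle so each pair is inspected once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]

df_reduced = df.drop(columns=to_drop)
print(to_drop)  # ['b']
```

This removes one member of every pair whose absolute correlation exceeds the threshold, which is the programmatic counterpart of reading the heatmap.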
Plotting the heatmap again on the reduced set of columns:

plt.figure(figsize=(12,7))
sns.heatmap(data_sample.corr(), annot=True, cmap='Blues')

The darker the blue, the higher the positive correlation, and vice versa; the features that remain show only modest correlation with one another.

Conclusion

If you have come this far, congratulations!!! You will have got the gist that there is no one-size-fits-all for EDA. In this article, I have tried to give you a basic idea of how to start, so that you don’t think too much about where and how to go about the EDA process. Out there, all datasets will have missing values, errors in the data, unbalanced data and biased data. EDA is the first step in tackling a data science project: learning what data we have and evaluating its validity. Before I sign off, do check out Pandas Profiling and SweetViz as alternatives to what I’ve done above.
https://medium.com/swlh/eda-beer-dataset-3651b923bcba
['Oscar Rojo']
2020-09-17 21:00:48.184000+00:00
['Eda', 'Python', 'Dataset', 'Beer', 'Exploratory Data Analysis']
Challenges of Working Independently
I quit my job a year ago to get started on a journey of my own. What seemed initially like an exciting path soon exposed its own share of challenges. One of the biggest challenges is accountability. Since our brains are hardwired to not let other people down, we end up working best when we are accountable to, say, our manager. Working independently, you lose that critical aspect of being accountable to someone. You are working for yourself and procrastination becomes too real.

“man lying on grass” by Sander Smeekes on Unsplash

I often end up spending hours just lying on my bed not accomplishing anything, thinking of how it would feel to have accomplished my goals. Deadlines become movable and goals are changeable. This creates a very dangerous cocktail of not getting anything done. Finding a way out of this is really important for any independent professional. Here are a few things that work for me.

The Five Second Rule

Mel Robbins’s book, The Five Second Rule, tells you to mentally count down 5, 4, 3, 2, 1 and move! Physically move every time an idea or inspiration hits you. The simple act of counting down prevents our brain from rationalizing and shutting down the idea. It takes a bit of effort to remember to do this every time, but whenever you do it, you realize the impact it has. Mel got the inspiration for this idea when she saw a rocket launch countdown on the television.

“photo of network satellite taking off” by SpaceX on Unsplash

Though effective, this only helps you get started. How do you make sure that when you have started something, you don’t abandon it midway and start something else? I found the following method useful for this.

The Five Minute Rule

Often when we want to do something, there is inertia within us that prevents us from committing to it. This inertia is overcome to some extent by the five-second rule, but to really get the momentum started, you can tell yourself, “I will only work on this for five minutes”.
Now look at the clock and remind yourself it is only for five minutes. Working on a task for five minutes with undivided attention sets your momentum, and the chances are you won’t even notice when the five minutes have passed, as you are engrossed in the task.

“person holding analog watch” by Jaelynn Castillo on Unsplash

Projects are often long and overwhelming, and we tend to become demotivated over time. In order to keep our motivation levels high, we need a constant reward, a feeling of achievement.

Small Wins

Dividing a big project into small, seemingly insignificant tasks and physically checking them off gives you that reward, that feel-good sensation, to motivate you in achieving the next task. For example, for programming projects, the simplest tasks of initializing an empty repository, naming the project, cloning it onto your local machine and writing your first commit can be as many checks on your task list to keep your motivation levels high.

“person writing bucket list on book” by Glenn Carstens-Peters on Unsplash

The five-second rule, the five-minute rule and small wins can help you achieve more every day and really give you that satisfaction and joy of working independently and actually being productive!

[Update] Here’s a mindmap of the article
https://medium.com/swlh/challenges-of-working-independently-82e8380c8ee4
['Abhinav Shrivastava']
2020-04-30 16:48:17.370000+00:00
['Work', 'Freelancing', 'Motivation', 'Lifestyle', 'Productivity']
How to Turn Your Anxiety Into Excitement
I’m trying to bask in the chaos. “Anxiety is the dizziness of freedom.” — Soren Kierkegaard It was a Monday of my senior year in college. I was sitting in COMM 101 — Public Speech. Apart from the whole public speaking thing, COMM 101 is a hilariously easy class. There were about 20 students. It’s a gen-ed composed mostly of freshmen. I was taking it for one of my last credits before I could finally apply for graduation. That Monday was the day of our first in-class speech, and I was sitting in the back corner of the room obsessively going over my notes. I was terrified. Public speaking is a huge fear of mine. As I studied, I pressed my hand on the paper that was my script and… shit. I left a sweat imprint that went through the paper straight to the table. I was anxious and uncomfortable, and I dreaded that speech. “Oh God, what’s going to happen when you freeze up and forget what to say?” This and other unhelpful thoughts raced through my brain. I couldn’t control what my brain was thinking and how it was making me feel. Let’s leave it at this: the speech did not go well. I stumbled through my words, I forgot an entire paragraph, and I could barely maintain eye contact with my script, much less the audience. Though no one else in the class gave a damn about how my speech went, I was embarrassed, humiliated, and irrationally annoyed with myself.
https://medium.com/curious/how-to-turn-your-anxiety-into-excitement-711a0e4b0b5
['Chris Wojcik']
2020-12-23 23:11:34.056000+00:00
['Self-awareness', 'Self Improvement', 'Anxiety', 'Mental Health', 'Meditation']
Start Using Pandas From the Command Line
If you work in the data analysis world, chances are you do a lot of data wrangling. If you use pandas in your data workflow, you’ve probably noticed that you often write the same bits of code. Although complex datasets or exploratory work call for Jupyter notebooks, other datasets need only simple processing, and going through the process of setting up an environment and creating a new notebook can be a little overwhelming. So you probably end up opening them in a spreadsheet. However, while spreadsheets are accommodating, they are difficult to automate and do not offer as many features as pandas. How do you take advantage of the features of pandas while keeping the flexibility of spreadsheets? By wrapping pandas functions in a command-line interface with chainable commands. A command-line interface, or CLI, allows us to quickly open a terminal and start typing in commands to execute some tasks. Chainable commands mean the result of one command is passed to another, which is particularly interesting for processing data. In this article, we will use Click to build a CLI. Click is a Python package for quickly building CLIs without having to parse command-line arguments with native Python libraries. We will first install a template Click project, taken from the Click documentation, that allows chaining commands. Then I will walk you through writing commands to read, filter, display, and write files using pandas under the hood. In the end, you will be able to write your own commands to fit your needs.
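As a rough sketch of what such a chainable CLI could look like: the command names, the query option, and the `ctx.obj` state-passing scheme below are my own assumptions for illustration, not the article’s final design. Click’s `group(chain=True)` lets several subcommands run in sequence, sharing a DataFrame through the context object:

```python
import click
import pandas as pd

@click.group(chain=True)
@click.pass_context
def cli(ctx):
    """Chainable pandas commands, e.g.: read data.csv query "a > 1" show"""
    ctx.obj = {"df": None}  # shared state handed along the chain

@cli.command("read")
@click.argument("path", type=click.Path(exists=True))
@click.pass_context
def read_cmd(ctx, path):
    """Load a CSV into the shared DataFrame."""
    ctx.obj["df"] = pd.read_csv(path)

@cli.command("query")
@click.argument("expr")
@click.pass_context
def query_cmd(ctx, expr):
    """Filter rows with a pandas query expression."""
    ctx.obj["df"] = ctx.obj["df"].query(expr)

@cli.command("show")
@click.option("-n", default=5, help="Number of rows to print.")
@click.pass_context
def show_cmd(ctx, n):
    """Print the first n rows of the current DataFrame."""
    click.echo(ctx.obj["df"].head(n).to_string())
```

Saved as, say, `pdcli.py` (a hypothetical filename), this would let you run `python pdcli.py read data.csv query "a > 1" show` in one shot, with each command handing its result to the next through the shared context object.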
https://medium.com/swlh/start-using-pandas-from-the-command-line-5dcae6b2ccca
[]
2020-12-24 14:28:03.068000+00:00
['Programming', 'Software Engineering', 'Data Science', 'Technology', 'Productivity']
3 Practical Steps to Turn Your Dreams Into Reality
Photo by Christopher Sardegna on Unsplash

If you are like most people, you have dreams of things you want to do someday. The harsh reality for most of us is that work and life get in the way of turning those dreams into reality. Or perhaps it’s the slick Apple Remote design that pulls you into Netflix more often than you’d like. Either way, these are real issues that we all struggle with. Here are some tips on how to create a roadmap to make your dreams a reality.

1. Formalize what your dream looks like.

One of the most obvious but insightful things I ever read was, “If you wish to build a chair, you first must know what a chair is.” I know it’s obvious, but most of us just think, I want to run my own business. Or, I want to be in better shape. Or, I want to travel. The issue is that we are not formalizing what those things mean. Here are two examples of transforming these generalist statements into formalized goals.

I want to run my own business. To make this happen you first need to identify what type of business you want to run. This will likely be related to your passions, to the things you return to over and over again. Identify those passions and identify what you don’t like about your current work. Use those guiding factors to help you solidify what the business you want to start is. If you are like me and don’t know its exact details, at least find the sector or type of business that you want to start. By narrowing things down you will slowly approach the target.

I want to be in better shape. This is one I think we have all told ourselves, and frankly it can seem daunting. The first step is again to define what “better shape” means. What would the result look like? Is it a six-pack? Do you want to run a marathon? Figure out what your main motivator is and settle on a result that can be achieved. Use specifics such as a number of pull-ups or a mile time.
Once you have identified and defined your dream in tangible, measurable terms, you are ready to move on to the next step. 2. Identify milestones. Most of our dreams are things that can’t just be accomplished in a weekend. For many, their dream will only come to fruition through months or years of work. It is because of this truth that we must break the dream into digestible portions. For me, I have created an outline with target dates for when I will meet those milestones. Once the milestones along the way are defined, e.g., securing investors in the business or purchasing high-quality running shoes, we break them down further. Depending on the complexity of your dream, you will need to continually compartmentalize and further narrow the goal. Once your milestone is broken down into something that can be accomplished in under a week (ideally in a day), you have hit the mark. For me, I have daily tasks I do to advance me towards my dream, and I have weekly tasks that I perform on the weekends. The journey of a thousand miles begins with one step. ~ Lao Tzu This is the fundamental idea of advancing towards anything. If the goal is daunting, how will you start? But if the goal requires constant effort in tiny spurts, you will eventually reach the destination. In the example of reaching a certain number of pull-ups, you simply need to start doing pull-ups every day. If you need to lose weight, slowly cut down on your eating. Start to flex your will by eating half as much as you normally would each day. As you gain control over your willpower, you will be able to cut an entire meal, and so on. 3. Find someone to hold you accountable. This is, in my humble opinion, the secret sauce to the successful achievement of your goals. Many people will set out on the tasks mentioned above, but few will follow through. The reasons for not following through are vast, but ultimately they stem from a lack of accountability.
What I’ve done to be held accountable is to join a group of people who attended the same boot camp as me (Praxis) and who meet weekly. They discuss their goals and define them. Then they publicly post them in our Slack channel, and every day people check in regarding their daily progress. Then at the end of the week, we publicly affirm or deny our successful completion of those goals. This isn’t simply social pressure to complete those goals — while there is definitely some pressure — it’s a group of people who actively recognize that they want to advance in many areas of their lives and take action to do so. Now that I’m checking in with them every day, I’m able to advance closer to my dreams with sustained effort. A group of people to keep you accountable doesn’t work for everyone, but you can certainly find a mentor or good friend who will hold you to the goals you set. That’s it. I know it seems oversimplified, but that’s because it is. Accomplishing your goals is simple. It just requires actively thinking about and defining what you want to accomplish, breaking it down into sustainable weekly or daily goals, and proactively choosing to be held accountable. The rest falls into place as you go.
https://medium.com/the-innovation/3-practical-steps-to-turn-your-dreams-into-reality-5e9f426ba47f
['Silas Mahner']
2020-08-09 19:22:27.337000+00:00
['Productivity', 'Accountability', 'Entrepreneurship', 'Goals', 'Dreams']
Analysis paralysis in product design
Analysis paralysis in product design Improve decision making in your product As product designers, we encounter the wonders of the human brain through design. The human brain is an incredibly complex organ, and it makes each person’s actions and behaviours unique. The UX designer has to understand how people think and behave in order to create meaningful, intuitive applications that bring a delightful experience to the user. In this article, we are going to examine a common problem associated with e-commerce applications. We will focus on the definition of analysis paralysis, how analysis paralysis works, what happens when the content itself causes analysis paralysis, and how to simplify the decision-making process. My article will mostly focus on e-commerce applications and human psychology to better explain how designers can tackle analysis paralysis in their designs. What is analysis paralysis? Analysis paralysis refers to a situation in which an individual or a group is not able to move forward with a decision because of overanalysing the data or overthinking the problem. Analysis paralysis is a common occurrence in investment decisions and in purchasing decisions within e-commerce applications. Why is analysis paralysis a problem? Humans like to make a profit and to make proper decisions on the problems they encounter. Due to analysis paralysis, people become unable to take critical actions, which creates lost options and missed chances of making large profits. How analysis paralysis works Analysis paralysis can occur in different interactions for a given user in a particular scenario. As humans, we should always consider our choices and the impact they create in our lives. Designers should know the difference between a healthy choice and a choice made through analysis paralysis. Choices made through sequential or logical thinking are the most common ones.
We start narrowing down all the possibilities that we have when interacting with the application. We remove all the items which are not logical to us when making decisions. A user caught in analysis paralysis feels that they are looking at multiple possible solutions they could take to achieve a goal. The users will start feeling overwhelmed by the choices that they have to make. There’s an overabundance of choice all around the web, from e-commerce stores with thousands of products to content generation machines pushing out new posts every day. While you can’t do anything to stop the flood of information or items going out to your visitors, you can design your interfaces in a way that makes the decision-making process easier to bear. What’s more, you can help them walk away feeling more confident with their choice, too. How to avoid analysis paralysis UX design is all about engaging users and allowing them to create meaningful and intuitive interactions. It allows users to attain a delightful experience in the interaction and a positive memory of the application while achieving their goal. 1. Limit the options to help users make decisions fast — Hick’s Law Hick’s Law helps UX designers make proper decisions in application design. It helps designers break down complex tasks into smaller steps. It also helps to use progressive onboarding to minimize cognitive load for new users. Hick’s Law is a simple idea that says that the more choices you present your users with, the longer it will take them to reach a decision. It’s common sense but often neglected in the rush to cram too much functionality into a website or application. As a designer, you will use Hick’s Law to examine how many functions you should offer at any part of your website and how this will affect your users’ overall approach to decision making. 2. Make the choices distinct to the users to reduce overthinking.
The users can get into trouble with the number of choices given, even when the choices are different from one another. It is a common fact that application users will start overthinking what to select or what to do when there are many options given by the application. If given similar options, users will start wondering which one to select over the others. It is similar to a scenario where Wikipedia had a search box with two buttons labelled “go” and “search”. “Go” took you straight to a page related to your search query, while “search” took you to a search results page. Wikipedia’s designers have since changed this behaviour, so it is no longer visible. The problem with subtle distinctions between choices is that they add complexity to the decision-making process, and that is something you need to avoid if at all possible. 3. Encourage users to make faster decisions. Another solution that designers could come up with is to help users make faster decisions. Humans tend to make slow and careful decisions when they have no confidence in their choices. However, there are ways of enabling people to make decisions faster, by making those choices feel like “no-brainer” decisions. Price is one way of achieving this. A low enough price will allow an impulse purchase involving little thought. Another way is to offer a fantastic return policy. Conclusion Analysis paralysis is one of the most dangerous barriers to conversion, and so we need to work hard to reduce it. As you finish this post, I recommend reviewing your analytics and checking any page with a high exit rate. You may well find that these pages contain a choice users are just not prepared to make. Fixing that choice could make all the difference in conversion.
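Hick’s Law, mentioned above, is usually written as T = a + b · log2(n + 1), where n is the number of equally likely choices. A minimal Python sketch of the idea (the constants a and b below are illustrative placeholders, not values fitted to any real interface):

```python
import math

def decision_time(n_choices, a=0.0, b=0.2):
    """Hick's Law: reaction time grows logarithmically with the number of
    equally likely choices. a and b are empirically fitted constants;
    the defaults here are illustrative, not measured."""
    return a + b * math.log2(n_choices + 1)

# Trimming a menu from 16 options to 4 cuts the modeled decision time
# noticeably, even though the relationship is logarithmic, not linear.
print(round(decision_time(16), 2))  # 0.82
print(round(decision_time(4), 2))   # 0.46
```

The logarithm is why cutting a menu from 16 options down to 4 roughly halves the modeled decision time rather than quartering it: each option removed matters less than the one before.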
https://uxplanet.org/analysis-paralysis-in-product-design-e3fda6e40cbf
['Muditha Batagoda']
2020-12-25 12:53:47.695000+00:00
['Design', 'User Experience', 'UX', 'Product', 'Psychology']
Things Are So Much Harder Than They Have To Be
Things Are So Much Harder Than They Have To Be Snapshot of a mixed-up mind Photo by James Pond on Unsplash As I begin this, it is 9:46 on Monday night. Normally, my highest creative and productivity potential happens in the early morning, before the muck of the day has soiled my mind. I have been trying to sleep for over an hour. But I keep grabbing my phone to check on random thoughts that cross my mind, from how easy it would be to make a quick batch of mini corn muffins to contemplating possible details surrounding the latest too-soon tragic celebrity death. I have writing assignments that are due in the next couple of days, things I plan to look at in the morning sometime before 7 a.m. That’s when the house starts to bustle and my mind starts its fog all over again. I do not know how much is straight anxiety and how much is my ADHD in overdrive. I should get off my bed and start writing things that have a higher potential for profit on the actual computer, rather than dinking on my phone.
https://medium.com/brave-inspired/things-are-so-much-harder-than-they-have-to-be-f0bb77e6c9f5
['Gretchen Lee Bourquin', 'Pom-Poet']
2020-07-14 04:18:19.965000+00:00
['Anxiety', 'Adhd', 'Mental Health', 'Productivity', 'This Happened To Me']
Meat Processing Was Never Safe, and Now It’s Worse
On April 26th, Tyson Foods published full-page advertisements in Sunday editions of The Post, the New York Times and the Arkansas Democrat-Gazette. “Anxiety, doubt, and the fear of the unknown are now our constant companions,” the copy, apparently written by CEO John Tyson himself, claims. Tyson warned that the “food supply chain is breaking,” and within days President Trump signed an executive order classifying meat processing facilities as “critical infrastructure,” requiring plants to remain open. Corporate messaging, however, is getting mixed. Less than two weeks after those advertisements ran, representatives from Tyson Foods reported a “positive” long-term outlook as their retail business benefits from a 20% bump, part of an industry-wide trend of rising consumer purchases as more people are making their meals at home. These beings, bred and raised for their flesh, had bleak futures to begin with, and yet people have somehow found a way to make their suffering even greater. Executives at Tyson might be feeling “positive,” but the mood is gravely different on their slaughterhouse lines, where tightly packed workstations make coronavirus transfer especially easy. “There is no social distance that is possible when you are either working on the slaughter line or in a processing assignment,” explained Paula Schelling, acting chairwoman for the food inspectors union in the American Federation of Government Employees. Preventive measures have come too late. As of June 12th, more than 24,000 coronavirus cases have been tied to meatpacking plants in the U.S., and at least 87 workers have died — and these outbreaks impact their surrounding communities as well. The issue is also not confined to North America, although the United States is certainly handling it worse than European countries. All open U.S. facilities are running at max capacity, but there’s still an estimated 25% reduction in beef and pork processing power. 
This decreased production means millions of animals raised to be killed and eaten will, instead, be killed and discarded; Delmarva Poultry Industry Inc. “depopulated” 2 million chickens in April, and current processing backlogs mean an estimated 700,000 pigs will be destroyed each week. A recent Intercept exposé reveals how Iowa Select Farms exterminates their pigs using the “ventilation shutdown” method, a process that slowly kills the animals by restricting airflow and steaming their barns. The intense heat and humidity within the pens slowly suffocate and roast the pigs to death. Any survivors are later killed with bolt guns. These beings, bred and raised for their flesh, had bleak futures to begin with, and yet people have somehow found a way to make their suffering even greater. While the fallout of a diminishing workforce has become increasingly clear, none of the most obvious business-oriented solutions seem particularly promising. Although frozen meat stores and in-store purchase limits might stave off a serious shortage, any undergraduate Econ major (or Marco Rubio) will tell you that steady demand plus decreased production is unsustainable. Even taking into consideration the significant drop in restaurant orders and institutional purchases, changes must be made to both processing and distribution if the meat industry wants to satisfy its customers. Line speeds could be quickened, but faster processing increases the chance of work-related injury. Longer shifts could compensate for limited staff, but what health risks might arise from extended exposure? I doubt additional workers could be packed into already crowded workstations, but, even with some creative spatial reasoning, this approach only exposes more people to dangerous conditions. If these meat processing facilities are ripe for coronavirus transmission, it does not look like companies can increase productivity without further harming their employees.
That being said, I don’t expect particularly ethical decision making from the business executives who exploit people, animals, and the environment for profit. Telling someone in a financially precarious situation to reject a job for the sake of their health is essentially asking them to jump from the frying pan into the fire. In what is already a notoriously dangerous field, COVID-19 has put more stress on those employed at meat processing facilities. Attesting to the many risks and few benefits of these positions, annual turnover is thought to be extremely high, figures I have seen estimated from 75 to 100%, with one particularly bad case approaching 400%. (Since most companies do not make this data public, researchers are generally unable to get a clear view of the situation.) Workers of color occupy almost half of all food production and processing jobs in the United States, more than 70% of our farmworkers are foreign-born, and a majority of slaughterhouse employees are either Black or Latinx, statistics that fit the historical trend of those from marginalized and underserved communities being saddled with the country’s least desirable, lowest paying jobs. As the virus continues to spread, employees must “choose” between protecting their health, the health of those around them, and getting a paycheck. Against the backdrop of a pandemic — one in which people of color and those from lower socioeconomic backgrounds are significantly more likely to die from related complications — what are their options? Telling someone in a financially precarious situation to reject a job for the sake of their health is essentially asking them to jump from the frying pan into the fire: sure, there’s a chance they will contract the coronavirus at work and die, but one’s survival prospects are similarly poor if they cannot buy food, afford rent, access adequate medical care, and tend to other essential needs. ‘No, you don’t want to do that. I don’t want to do that.
Nobody wants to do that. You’ll have bad dreams.’ Built upon a foundation of financial and emotional distress, the relationship between workers and management is dubiously consensual, and the death of Annie Grant, an employee at a Tyson poultry plant in Georgia, demonstrates how people can be coerced into dangerous situations by demanding bosses. When examined too closely, the illusion of choice begins to crack. Making matters worse, the psychological consequences of meat processing are more or less an open secret. In Every Twelve Seconds, Timothy Pachirat, an Associate Professor of Political Science at the University of Massachusetts Amherst, recounts his year of undercover work in a slaughterhouse. After expressing interest in working the “knocker” — the gun-like machine that quickly releases a metal bolt into the head of cattle, killing them in preparation for further dismemberment — a veteran of the knocking box advised, “No, you don’t want to do that. I don’t want to do that. Nobody wants to do that. You’ll have bad dreams.” We should never applaud people for tolerating their own mistreatment, or glorify oppressive work relations as some kind of heroic self-sacrifice, especially when we’re choosing between human life and ground beef. At a May 8th roundtable in West Des Moines, Iowa, Vice President Mike Pence praised food industry leaders. “I think this may well turn out to be your finest hour,” he said, “a time when an industry stepped up and met the moment, and at some personal risk to themselves.” We should never applaud people for tolerating their own mistreatment, or glorify oppressive work relations as some kind of heroic self-sacrifice, especially when we’re choosing between human life and ground beef. The CEOs of Hy-Vee, Kroger, Smithfield, and Tyson who attended that panel are not accepting “some personal risk to themselves,” they are risking their employees. 
I may be wrong, but I don’t think John Tyson spends forty hours a week working his processing lines. Her newfound hero status didn’t protect Annie Grant, and Pence’s affected show of empathy only rebrands systemic injustice with a patriotic twist. If the Vice President truly admired food chain workers, the Trump administration wouldn’t help absolve their employers of legal liability when they die on the job. They would make sure all slaughterhouse workers have regular bathroom access. They would ensure everyone has the proper gear, health benefits, paid sick leave, and hazard pay necessary to protect themselves and their families — not just now, but always. They would enforce disciplinary actions when managers pressure sick employees to return to work. Or, better yet, they would make it so no one is left with a job that fuels nightmares.
https://medium.com/tenderlymag/meat-processing-was-never-safe-and-now-its-worse-8a05316495c1
['Carlin Soos']
2020-06-26 19:34:34.383000+00:00
['Justice', 'Equality', 'Work', 'Vegan', 'Coronavirus']
Machine Learning Basics — For Beginners
Why should you learn Machine Learning? It’s no secret that machine learning and AI are among the most popular buzzwords of this decade, but is the hype justified? Let’s take a closer look at three areas in which Machine Learning has changed or will change our way of life: E-health: With the use of Machine Learning, we have in recent years seen several studies where a trained algorithm was capable of predicting what kind of disease a person was suffering from when doctors could not. The use of such technology in everyday healthcare is, however, only in its early stages, but in the future we will most likely see algorithms give a second opinion on a doctor’s judgment. Transportation: Transportation has for centuries been an important area that we as humans benefit from in our daily lives. It’s therefore no wonder that every generation has been trying to optimize and improve the way we transport ourselves from A to B. With the progress in Machine Learning, transportation is now ready to move into the next era, namely autonomous driving, where our cars, trains and airplanes can operate without human interaction. This will not only decrease the overall costs of transportation but most likely also decrease the number of road accidents. Entertainment: An area which has already been benefitting from ML for years is entertainment. You might be seeing it right now, as this article most likely got your attention through a newsfeed which recommended it to you based on tags and historical data. This can not only help creators keep you on their site, but it can also sort out all the content that you have no interest in, making it a win-win situation. Machine Learning Types So now that we have seen three areas that ML is influencing in a positive manner, it’s time to take a closer look at the different types of Machine Learning.
Within the area of Machine Learning we have the following three main types: Unsupervised Learning The first area that we’re diving into is unsupervised learning, a learning type where there is “no teacher”; in Machine Learning this means that the data we are using to train our algorithm has not been labeled. E.g., a labeled data entity would have the following specs: Length: 30, Height: 20, Type: Box Whereas an unlabeled data entity would be: Length: 30, Height: 20, Type: ? It’s therefore up to our algorithm to figure out what this entity is and where it belongs. This can, for example, be done using clustering algorithms. In a graph this would look like the following: In the above picture we see that all our data entities are circles, but they still have a length and height that differentiate them from each other. Our algorithm can then, based on its training, decide which type a new entity belongs to by looking at the length and height. Supervised Learning In supervised learning all our data entities have been labeled. In the below example we are trying to use linear regression to build an email spam algorithm. As we now know which emails are “good”, marked with a green plus, and which are “bad”, marked with a red circle, our algorithm will try to fit a linear model to differentiate them. So when we get a new email in our inbox, our algorithm will just have to see whether it is placed below or above the line. Reinforcement Learning Compared to the first two types, Reinforcement Learning is in a category of its own. The idea behind this approach is that we are continuously training our algorithms, in order to make them smarter through their interactions with the environment. Our algorithm is the Agent, and it makes Actions towards the Environment (Env) that it sees. These actions then change the State of the Agent.
Besides changing the State of the Agent, the Agent also receives either a positive or negative Reward in order for it to learn which Actions were right. For more, see my Machine Learning tutorial: https://youtu.be/E3l_aeGjkeI
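The Agent/Environment loop described above can be sketched in a few lines of Python. This is a toy illustration only: the number-line environment, the goal position, and the reward values are made up for the example, and the fixed policy stands in for a learned one.

```python
class Environment:
    """Toy environment: the agent walks a number line and the goal is position 3."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # The Action (+1 or -1) changes the State; the Environment then hands
        # back the new State and a Reward: positive at the goal, a small
        # negative "cost of living" everywhere else.
        self.state += action
        reached_goal = self.state == 3
        reward = 1.0 if reached_goal else -0.1
        return self.state, reward, reached_goal

env = Environment()
total_reward, done = 0.0, False
while not done:
    action = +1  # a fixed policy; a real agent would learn this from rewards
    state, reward, done = env.step(action)
    total_reward += reward
print(round(total_reward, 1))  # two small penalties, then the goal reward: 0.8
```

A learning agent would replace the fixed `action = +1` line with a policy that it updates from the Rewards it receives, which is exactly the feedback loop the paragraph above describes.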
https://medium.com/vinsloev-academy/machine-learning-basics-for-beginners-fee2eae84973
[]
2020-06-25 10:23:19.434000+00:00
['Machine Learning', 'Artificial Intelligence', 'Python', 'Supervised Learning', 'Programming']
Do You Want to Earn $1.00 a Word?
Do You Want to Earn $1.00 a Word? Eight Steps to a Powerful Writing Income In 2017, I received $7,300 for a white paper I’d written. The article — a 7.3K-word leviathan on establishing a workplace safety culture — paid a dollar per word. Although one of the larger single projects I’ve developed, it reflects my standard rate. Depending on the depth of research, I usually charge $0.50 to $1.00 per word. Can you get this type of pay? Absolutely. My skills aren’t magical. Dear Writers, stop selling yourselves short. There are markets that pay quite well for your knowledge. Earning a viable income simply requires refocusing your craft as a business commodity. Underpaid & Unappreciated Are Unacceptable Threads comparing earnings and reader views are regular sources of discussion in the social media writers’ groups I’ve joined. I don’t comment much, but it saddens me to see the labors of these gifted people unappreciated by trivial earnings. Writing is a talent, honed by countless revisions, successes, and failures, and fueled by well-caffeinated, sleepless nights. If you’re like me, your creative process is also peppered with colorful language verbalized as you bang on the keyboard. I write as if I have sudden-onset Tourette’s Syndrome. We all put our hearts and souls into our pieces. Historically, despite the labor of creating our beautiful art, the industry has ridiculously underpaid writers. Worse, many skilled writers continually produce well-researched, informative pieces for free. Why do we torture ourselves so? It doesn’t have to be this way. You can make real money crafting words. The type of writing I’m covering here isn’t the personal narrative style that one finds on Medium. It’s often dry and technical. And sometimes, it’s soul-sucking. But through it, you’ll learn. This education is invaluable. It transfers to the totality of your writing skills. And when you agree to a contracted project, the income you’ll receive is known.
For me, that’s relieved the stress from my creative writing. When I am paid well from other sources, I enjoy my personal free writing without worrying about earning potential. I also take pride in the projects I’ve created. Sometimes I pull up a website in a random search and realize, I built that lovely content. Necessity is the Mother of Invention I wish I could say that I’d planned this concurrent career meticulously using my brilliant business mind. Alas, I stumbled on this lucrative market in 2008 when I was at a low point. In that one year, I’d remarried and divorced a second time, obtaining an honorable mention in the Guinness Book of World Records for the fastest crash and burn of wedded bliss. It was also the year one of my daughters began her descent into chronic illness. She has Crohn’s Disease, but it was a lengthy process to diagnose and stabilize her. In those twelve months, my daughter had six hospitalizations and was often so ill and weak that 24-hour home care was needed. The consulting company where I’d been Vice President had sold. I was on unemployment for the first time in my life, smack in the middle of a recession. I couldn’t leave my kiddo to do the type of consulting projects I’d previously performed or take a full-time salaried job doing similar work. I was depressed, desperately poor, and feeling the mental whammy of another failed marriage. And adding to this mélange of crisis points, my car had been totaled in a wreck and I had to wait three months for a settlement. Carless, jobless, poor, lacking a partner’s support, and with a critically ill kiddo — my life had become a tragic country music song. I needed money. But how could I earn it while working from home? At the time, the content market was fairly new. Websites like Demand Media were churning out thousands of these pieces every week. Content mills paid a pittance, but it was income.
The easiest articles for me to create were ones that reflected my career background in risk management and workplace safety. I knew the genre intimately. I had a decade of this work, mediating OSHA penalties and ensuring compliance in businesses. After I’d amassed several of these articles, I created a profile on Elance (which is now Upwork). With a portfolio of article links and a career background on those topics, I began bidding on jobs, finding that I could double or triple the pay of content mills with these projects. It still wasn’t great income, but where I’d been writing a 500-word article for $15-$25, now I was consistently earning $50 for similar length pieces. I took any projects I could — eBooks, content articles, white papers, continuing education modules, and blogs. If it paid, I wrote it. The financial pressure began lifting. I was able to get off unemployment. My kiddo improved. I started a new risk management consulting business in 2009. Life had resumed and was on an upward economic trend. I continued to take occasional projects through Elance, but I didn’t depend on them. In 2011, I was approached by a legal editor through LinkedIn. She had seen the projects I’d posted on my profile and inquired about my availability for writing opportunities. Within a few hours, I was staring at a screen with the contract, budgeted at $5,000 for the first ten articles, roughly 500–700 words each. The rate was $0.75 a word. I thought I’d been scammed at first. The pay was wonderfully thrilling. That one project opened the door for others with similar or greater earning potential. I quickly learned there was a thriving market for the information I knew, and it paid well. I began pricing my work accordingly. The weighty 7.3K-word white paper I mentioned at the beginning of this article was followed by two years of additional content writing for that client’s long-term marketing campaign, all at the same rate of $1.00 a word.
I’ve written for Fortune 500 clients and enterprise businesses, as well as small companies. But my rate of $0.50-$1.00 per word stays consistent. And clients don’t typically question it. As a second income, I earn an annual average of an extra $20K-$30K from very part-time writing projects. I ghostwrite for some sites, but the earnings offset not having a treasured byline.
https://medium.com/swlh/do-you-want-to-earn-1-00-a-word-a629da2557ab
['Elle C.']
2020-12-11 03:50:02.461000+00:00
['Marketing', 'Writing', 'Careers', 'Médium', 'Career Advice']
Divi Theme Review
Main features of Divi Divi comes with a number of standout features that set it apart from many other WordPress themes. A comprehensive WordPress content editor/builder One of Divi’s main selling points is its easy-to-use drag-and-drop content editor. This handy but powerful builder can be used to make changes to chosen templates, as well as to create completely new designs. You can easily make WYSIWYG changes via a visual editor and create reusable content sections. The Divi content editor also comes with a wireframe mode that you can use to review your page structure, along with preview modes that help you see how the content will look on mobile, desktop, and tablet devices. You also have the option to add a range of page items and customizations into your design, including sliders, animations, players, CTA buttons, and more. A huge library of editable templates Divi offers hundreds of pre-designed templates that are suited to almost any website need you might have. In fact, the theme is among those with the largest collection of customizable templates, which can be used for business websites, blogs, online stores, learning management websites, and much more. There are also individual layouts that come with the theme, which can be used to set up your individual pages (home, about, contact, etc.) exactly how you want them. A special split-testing tool you can use to optimize your content If you are unsure which looks and features of your website will work best with your audience, Divi’s convenient split-testing tool allows you to run A/B tests and compare the results. You can use this for multiple pieces of content and page features, as well as run several tests simultaneously. The ability to build your own theme With the Divi theme builder, you can customize more than just the design elements of posts and pages. You can also use it to make changes to your footers and headers, as well as category pages, 404 pages, blog templates, theme templates, etc.
In fact, you can use it to change up your entire WordPress theme to match your individual design goals. Portability Divi allows you to save a lot of time when working between different WordPress websites due to its portability. You can easily transfer various website design elements, assets, and layouts between your sites, which is another reason why people find it so efficient to work with the Divi Theme.
https://medium.com/divi-theme/divi-theme-review-85caf20baa97
['Casey Botticello']
2020-10-30 02:45:52.464000+00:00
['Journalism', 'Technology', 'Social Media', 'Freelancing', 'Writing']
65 Websites to Boost Your Productivity
It can be said that time management is one of the keys to success. In that sense, it is important to work efficiently in addition to working hard. That's why I have listed websites that can increase your productivity in many areas. I use most of the websites on this list regularly, and I hope you will find some to benefit from. Here is the list:

1. A great, easy-to-use screenshot tool for Windows, Mac, and Linux. Plugins for browsers are also available.
2. Slidesgo — I discovered this recently and have been using it regularly for my slides ever since. Their templates are awesome, easy, and FREE! You can find templates for Google Slides and PowerPoint on almost every subject. If you use slides often, look no further!
3. PDFescape — This one is really useful; I use it all the time. If you find Acrobat Reader too complicated for editing your PDFs, check out this easy-to-use, free tool. Works on Windows and in browsers.
4. iFixit — A car, laptop, tablet, or camera… this site shows you how to repair your stuff, with illustrated guides. And no, it is not owned by Apple. Not yet.
5. Grammarly — Perhaps it needs no mention, but I still want to add it to this list. It is perfect for writers to check grammar mistakes, spelling errors, etc. I highly recommend it if you are not using Grammarly already.
6. Marker — Do you enjoy using the highlight feature on Medium? Then this one is for you. With this Chrome extension, you can highlight words and sentences anywhere on the web. Great for long, complicated articles!
7. PicMonkey — A simple Photoshop alternative for editing your photos.
8. WolframAlpha — Sometimes the regular search engines aren't enough. If that's the situation, you can try this search engine instead. It uses a unique algorithm and artificial intelligence to give the best search results. There is also a mobile app.
9. Letsenhance — Thanks to this free website, you can make your low-resolution photos clearer. The results are really impressive!
10. Which Date Works — Group work can be annoying for obvious reasons. This website can help: you can create group plans so everyone can figure out what to do and when to do it.
11. Similarsites — You've found a website that works for you. Good. Now you can find similar websites with this one. For apps, you can check out this one; for movies, TV shows, and music, here.
12. Hundred Zeros — With this website, you can read free Kindle books.
13. Ge.tt — One of the easiest ways to share files on the web.
14. Free Images — Really useful on Medium for finding copyright-free, high-quality stock photos.
15. Teuxdeux — A fascinating to-do list tool! I find popular to-do list tools more complicated than they should be. A simple, minimalistic interface makes a difference.
16. Ifttt — Really useful if you use social media frequently. IFTTT helps you manage all of your social media from one place for free. Especially useful for business.
17. Lastpass — LastPass already has millions of users, but I thought there must be quite a few people who have never heard of it. With this password manager, you can say goodbye to creating a solid password every time you sign up for a website and then trying to remember it.
18. Cloud Convert — A great tool for converting any kind of file: audio, video, document, ebook, archive, image… you name it. Can be a lifesaver.
19. Bonanza — A background remover. No, unfortunately not for your past; you cannot get away from your past! You can remove the background from your images, though.
20. Open Culture — One of my favorites. You can find free movies, audiobooks, podcasts, language lessons, and courses. They describe their website as "the best free cultural and educational media on the web". Well put, I say!

Photo by Marvin Meyer on Unsplash

21. Wallhaven — A magnificent wallpaper source for your articles, desktop, and phone. I use it regularly. Highly recommended.
22. Pocket — You can save articles, videos, or any other content from a web page or app. If you constantly consume a lot of content online, this is a must-have.
23. Ninite — Switching to a new computer can be a hard process; you always have to install a lot of programs. Ninite is here to save you the trouble: from this website, you can install the most popular programs all at once. Works only on Windows.
24. Mailinator — Opening an email account is easy these days. Getting rid of one? Not so much. This website helps with that: you can use disposable email addresses with Mailinator.
25. Easybib — Perfect for academics. With this tool, you can easily create citations and bibliographies. There is also a plagiarism and grammar check feature for your paper. Cool, huh?
26. File Pursuit — A search engine for documents, videos, audio, eBooks, mobile apps, and archives. I use it regularly. Can be very beneficial.
27. Eggtimer — Do you forget things regularly? Then you can use this simple timer.
28. Audiotrimmer — An excellent tool for cutting and editing your MP3 files. Great for making ringtones!
29. Howstuffworks — If you are like me, always curious about how things work, then this website is definitely for you. You can learn a lot just by browsing it. It can also be useful for research.
30. Visitacity — This one is for travelers. When you travel somewhere, you often want to use your time and money wisely, right? When you specify the city you're going to and the number of days you'll stay, this site provides an itinerary for you. Definitely worth checking out.
31. Myfonts — This website lets you find out which font is used in an image you upload.
32. Calligraphr — Do you have fancy handwriting? You can create fonts from your own handwriting through this site.
33. Quora — One of my favorite social networks. You can ask or answer questions about pretty much everything.
34. Mathway — Math problems can be very tricky. This site helps you solve math equations in seconds.
35. Futureme — You can write a letter to your future self. Almost like time travel. Almost.
36. Cleverbot — Bored of talking to Siri? You can talk with this AI-oriented bot instead to practice English.
37. Yummly — With this website, you can access millions of recipes from all over the world in one place. Also, when you type in whatever ingredients you have, you can see which dishes you can make. And the user-friendly interface is awesome.
38. Coursera — Online education is getting more important, especially with the current COVID-19 epidemic. The website offers free courses from respected universities around the world. If you are willing to pay a fee, it also gives you certificates.
39. Codecademy — You can learn coding for free.
40. 10minutemail — Don't want to give out your personal email every time you have to sign up? Use this one instead.

Photo by Nick Morrison on Unsplash

41. Tasteofcinema — Having trouble finding a good movie to watch tonight? On this website, you can find numerous film lists across different genres, eras, and categories.
42. Worldometers — Real-time world statistics. Coronavirus updates, world population, health, and social media are some of the categories.
43. PhET — A website where you can learn complex scientific topics through short and simple simulations.
44. Ctrlq — A search engine for finding RSS feeds.
45. QR Code Generator — A QR code generator. You can use a URL, phone number, or business card.
46. Keepmeout — Are you addicted to a website and trying to reduce your time on it? Then check this one out.
47. Songsterr — A great tool for guitar players. This website shows you, step by step, where to put your fingers on the guitar for the music you want to play.
48. Deletionpedia — On this website, you can access deleted Wikipedia articles.
49. Nap — The National Academies Press offers its huge academic database. You can search for academic books, articles, or journals across various academic disciplines.
50. Foxyutils — Foxyutils is not for everyone. However, if you deal with PDFs every day, you'll be familiar with protected PDF files. With Foxyutils, you can open and edit protected PDF files.
51. Crunchbase — A platform where you can find detailed development histories, initiatives, and future plans of almost all enterprises.
52. Musclewiki — Exercise is great, especially these days when we spend a lot of time at home. So how about working out efficiently? Through this website, you can choose the muscle you want to work in your body and learn what you should do for that specific muscle. Easy, practical, and efficient.
53. Xe — An easy-to-use currency converter.
54. Bigjpg — You can increase the size and resolution of your pictures for free.
55. Inhersight — A platform for women to learn about the working conditions at pretty much any company in the world.
56. Writewords — This website allows you to quickly group the words in any text by count.
57. Bulkurlshortener — A free and effective URL shortener. Great for Twitter.
58. Chronas — This is really cool. By choosing any region on this world map, you can find out which wars took place in that region and in which year. You can also access the relevant Wikipedia information.
59. Numbeo — Do you want to move to another city or country? Then you should definitely check this out. Here, you can find the monthly cost of living in the city or country you want to move to. And yes, the website has a comparison feature.
60. Howlongtoread — This website shows the average time it takes to read a chosen book.

Photo by krisna iv on Unsplash

61. Manualslib — You can quickly access product user guides.
62. Typingstudy — Need to type faster on the keyboard? Through the lessons on this site, you can learn to type faster and more efficiently.
63. Voicedocs — You can easily convert audio recordings into text.
64. Justgetflux — If you spend a lot of time in front of a computer during the day, you should definitely check this out. This tool protects your eyes by adjusting the backlight for different times of day. Works only on Windows.
65. Archive — As written in the website's description, "Internet Archive is a non-profit library of millions of free books, movies, software, music, websites, and more." Definitely one of my favorite corners of the internet.
https://medium.com/illumination/65-websites-to-boost-your-productivity-196b854f3922
['Mustafa Yarımbaş']
2020-07-23 22:27:38.584000+00:00
['Technology', 'Personal Development', 'Writing', 'Self', 'Productivity']
Starting Diversity and Inclusion at Your Startup
Design a Bias-Free Hiring Process As you’re educating yourself and your leaders on what D&I means to your business, you’re likely going to be adding to your team. Before hiring, it’s critical to design and implement best practices to mitigate bias throughout the interview and selection process. Otherwise, your D&I efforts will be stripped away by unconscious bias from the most well-intentioned of staffing personnel and interviewers. Most people try to do the right thing when deciding on the best hire for their team. You want to hire people who will be successful in their role, easy to work with, and a fit with your company’s culture. However, bias can start seeping in from the start of your search — from something as simple as not having gender-neutral language in your job descriptions to rejecting a resume due to preconceived notions about a candidate based on where they went to school. Biases can continue to appear when you pick up the phone to talk to a candidate or walk into the interview room and make quick judgments based on a candidate’s appearance, accent, or manner of speaking. Whether the search for a fit is conscious or unconscious, keep in mind that you want to consider someone who can be a culture add instead of just a culture fit. Overcoming unconscious bias in the hiring process requires investing in the education and training of your employees and bringing structure to the interviewing process. Training can help people think about good questions that will provide the most accurate signals about someone’s ability to do a job well rather than relying on their gut feelings. Train your interviewers to ask the same questions of each candidate for calibration purposes. Test questions with internal hires first to set interviewing expectations. Consider the person’s hunger for the role, problem-solving abilities, and ability to learn and adapt. 
Ensure that underrepresented employees have a voice in hiring decisions, but also give them the opportunity to opt into processes rather than assuming that they want, or have the time, to participate in every D&I interview. When it comes time for offers, overcoming bias involves knowing that women initiate negotiations less often than men due to the negative social costs attached. In 2019, 13 states implemented salary history ban laws that prevent employers from asking applicants about their previous or current compensation. A recent study shows that these laws have been effective in improving pay by 8% for women and 13% for Black workers. This is a good reminder that policies can make a difference, and companies can take additional steps to create policies or structures within their organization to ensure the continuation of gender and racial equity.
https://medium.com/better-programming/starting-d-i-in-your-startup-f043b684e1e6
['Linh M. Phan']
2020-07-20 14:53:13.937000+00:00
['Diversity And Inclusion', 'Equality', 'Startup', 'Diversity In Tech', 'Software Engineering']
3 Stories for All Immigrant (Indian American) Kids
3 Stories for All Immigrant (Indian American) Kids
They are a must-read for parents as well
Photo by Mat Reding on Unsplash

When my family and I arrived in the land of opportunity, it was the free public library system, not the Statue of Liberty, that reaffirmed our decision to immigrate. As a "good boy", I read voraciously. The public library, the greatest invention of Benjamin Franklin, opened the world to me. Like many others, I was astounded by the vast volumes of fiction. Occasionally, I would yearn for stories featuring Indians. Apu from The Simpsons did not make his debut until 1989. Fast forward a few decades, and I present novels featuring immigrant Indian American characters. I cannot express the joy of having my children read these amazing works. I probably enjoyed these stories even more than they did.
https://medium.com/books-are-our-superpower/3-stories-for-all-immigrant-indian-american-kids-a44fb8da0bc6
[]
2020-12-23 12:22:21.139000+00:00
['Storytelling', 'Book Review', 'India', 'Books', 'Parenting']
The Literally Literary Weekly Update #9
Literature Doesn’t Have to Make Sense by Matthew Ward (Art) “There’s something weird about the internet age that has caused us to lose touch with art — especially story-driven art forms like movies and literature. As soon as anything new comes out, there are hundreds of youtube videos, magazine articles, and twitter threads tearing it to pieces.” I’m Here With You by Mary Keating (Poetry) “Built a fortress strong enough to bend where only Love holds the key” To Pamela by Sydney Duke Richey (Poetry) “even before opening it I knew I would have paid a dollar or more for a book with the title” I Met My 8-Year-Old Self by Omar Gahbiche (Fiction) “I, a twenty-something guy, was looking at my eight-year-old self. And I was not hallucinating. It felt real. I was still aware that it couldn’t be, but strangely, it was as real as a regular sunny summer afternoon.” The Evolution of Dating by Jerry Windley-Daoust (Fiction) “The love you share with him is the easily domesticated kind, and it becomes as comfortable as a faithful old dog, happy to see you every time you come home.”
https://medium.com/literally-literary/the-literally-literary-weekly-update-9-9fa9c660ac8c
['Jonathan Greene']
2020-02-19 17:26:00.985000+00:00
['Poetry', 'Nonfiction', 'Ll Letters', 'Fiction', 'Writing']
Create Stunning Circular Progress Bars with Flutter Radial Gauge: Part 1
The Syncfusion Flutter Radial Gauge widget is a multi-purpose data visualization widget. It can be used to visualize different types of data and display the progress of various processes in a circular format. We have already published blogs on different use cases, such as creating a speedometer and a temperature monitor, using the Flutter Radial Gauge. To continue this use-case series, we are now going to create different styles of animated circular progress indicators using the Syncfusion Flutter Radial Gauge.

The circular progress bar is used to visualize the progress of work or an operation such as a download, file transfer, or installation. It can be used for showing different progress states such as:

Determinate
Indeterminate
Segmented progress

I've split designing the various circular progress bar styles into a two-part blog. In this first part, you will learn how to build different styles of a determinate-type circular progress bar.

Circular Progress Bar Styles

Let's get started!

Configuring the Radial Gauge widget

Creating a Flutter project

First, we need to configure the Radial Gauge widget in our application. Follow the instructions provided in the Getting Started documentation to create a basic project in Flutter.

Add the Radial Gauge dependency

Include the Syncfusion Flutter Radial Gauge package dependency in the pubspec.yaml file of your project.

syncfusion_flutter_gauges: ^18.2.44

Get packages

To get the package onto your local disk, run the following command in the terminal window of your project.

$ flutter pub get

Import the package

Import the Radial Gauge package in main.dart using the following code example.

import 'package:syncfusion_flutter_gauges/gauges.dart';

Add the Radial Gauge widget

After importing the Radial Gauge package, initialize the Radial Gauge widget and add it to the widget tree as shown in the following code example.
@override
Widget build(BuildContext context) {
  return Scaffold(
    body: Center(
      child: SfRadialGauge(),
    ),
  );
}

Now we have configured the Radial Gauge widget in our application. Let's see the magical ways to create charming styles with it.

Various styles of determinate-type circular progress bars

A determinate-type progress bar is used when it is possible to estimate the completion percentage of a process. Let's see how to design the following styles of determinate progress bar using the Flutter Radial Gauge:

Normal progress bar
Filled-track and filled-style progress bar
Gradient progress bar with marker
Semi-circular progress bar style
Buffer progress bar
Segmented circular progress bar

You need to use the radial axis, range pointer, and annotation features of the Radial Gauge to achieve all these determinate-type circular progress bar designs.

Normal progress bar style

Normal Progress Bar

To get the normal progress bar style, disable the labels and ticks properties in the RadialAxis class and set the axis range values in the minimum and maximum properties based on your design needs. To show 100 percent progress, define the axis's minimum value as 0 and its maximum as 100. You can also show the progress line (track) of the progress bar by customizing the axisLineStyle, as shown in the following code example.

SfRadialGauge(axes: <RadialAxis>[
  RadialAxis(
    minimum: 0,
    maximum: 100,
    showLabels: false,
    showTicks: false,
    axisLineStyle: AxisLineStyle(
      thickness: 0.2,
      cornerStyle: CornerStyle.bothCurve,
      color: Color.fromARGB(30, 0, 169, 181),
      thicknessUnit: GaugeSizeUnit.factor,
    ),
  )
]),

Progress Bar with Progress Line (Track)

You can add a pointer to the progress bar by customizing the position and size of the RangePointer class. To show progress, the pointer value should be updated with a delayed action, such as a timer. Then, the pointer value will be dynamically updated by a timer for a specific duration of time. Refer to the following code example.
pointers: <GaugePointer>[
  RangePointer(
    value: progressValue,
    cornerStyle: CornerStyle.bothCurve,
    width: 0.2,
    sizeUnit: GaugeSizeUnit.factor,
  )
],

Progress Bar with Custom Range Pointer

To add custom content to the center of the circular progress bar, in order to indicate the completion of a progression or convey its current status, you can use the annotations feature. To display the currently updated progress value, set the pointer value in the annotation text.

annotations: <GaugeAnnotation>[
  GaugeAnnotation(
      positionFactor: 0.1,
      angle: 90,
      widget: Text(
        progressValue.toStringAsFixed(0) + ' / 100',
        style: TextStyle(fontSize: 11),
      ))
])

Progress Bar with Annotation

Filled-track and filled-style progress bar

To fill the track color, set the axis line thickness to 1, thicknessUnit to GaugeSizeUnit.factor, and the color of the axis line so that it fills the entire gauge radius. Add the range pointer with an offset position to show the progression.

RadialAxis(
  minimum: 0,
  maximum: 100,
  showLabels: false,
  showTicks: false,
  startAngle: 270,
  endAngle: 270,
  axisLineStyle: AxisLineStyle(
    thickness: 1,
    color: const Color.fromARGB(255, 0, 169, 181),
    thicknessUnit: GaugeSizeUnit.factor,
  ),
  pointers: <GaugePointer>[
    RangePointer(
      value: progressValue,
      width: 0.15,
      color: Colors.white,
      pointerOffset: 0.1,
      cornerStyle: CornerStyle.bothCurve,
      sizeUnit: GaugeSizeUnit.factor,
    )
  ],

Add an annotation with the current progress value, as explained in the previous code example.

Filled-Track-Style Progress Bar

To fill the progress color, set the range pointer width to 0.95, sizeUnit to GaugeSizeUnit.factor, and the color of the pointer so that it fills the entire gauge radius. Add an axis line with appropriate thickness to show the track color.
RadialAxis(
  minimum: 0,
  maximum: 100,
  showLabels: false,
  showTicks: false,
  startAngle: 270,
  endAngle: 270,
  axisLineStyle: AxisLineStyle(
    thickness: 0.05,
    color: const Color.fromARGB(100, 0, 169, 181),
    thicknessUnit: GaugeSizeUnit.factor,
  ),
  pointers: <GaugePointer>[
    RangePointer(
      value: progressValue,
      width: 0.95,
      pointerOffset: 0.05,
      sizeUnit: GaugeSizeUnit.factor,
    )
  ],
)

Filled Progress Style

Gradient progress bar with marker style

To apply a gradient to the progress bar, set a SweepGradient with appropriate colors and offset values in the gradient property of the range pointer. Also, add a MarkerPointer along with the range pointer and use the same progress value to update both pointers, as shown in the following code example.

pointers: <GaugePointer>[
  RangePointer(
      value: progressValue,
      width: 0.1,
      sizeUnit: GaugeSizeUnit.factor,
      cornerStyle: CornerStyle.startCurve,
      gradient: const SweepGradient(
          colors: <Color>[Color(0xFF00a9b5), Color(0xFFa4edeb)],
          stops: <double>[0.25, 0.75])),
  MarkerPointer(
    value: progressValue,
    markerType: MarkerType.circle,
    color: const Color(0xFF87e8e8),
  )
],

Gradient Progress with Marker Style

Semi-circular progress bar style

You can customize the startAngle and endAngle properties of the radial axis to design full and semi-circular progress bars. To make a semi-circular progress bar, set the startAngle value to 180 and the endAngle value to 0, as shown in the following code example.
RadialAxis(
  showLabels: false,
  showTicks: false,
  startAngle: 180,
  endAngle: 0,
  radiusFactor: 0.7,
  canScaleToFit: true,
  axisLineStyle: AxisLineStyle(
    thickness: 0.1,
    color: const Color.fromARGB(30, 0, 169, 181),
    thicknessUnit: GaugeSizeUnit.factor,
    cornerStyle: CornerStyle.startCurve,
  ),
  pointers: <GaugePointer>[
    RangePointer(
        value: progressValue,
        width: 0.1,
        sizeUnit: GaugeSizeUnit.factor,
        cornerStyle: CornerStyle.bothCurve)
  ],

Semi-Circular Progress Bar Style

Buffer-style progress bar

In the buffer-style progress bar, you can use a secondary progress indicator alongside the primary progression, where the primary progression depends on the secondary one. This style allows you to visualize both primary and secondary progressions simultaneously. To add primary and secondary progress pointers, use two range pointers with different progress values.

pointers: <GaugePointer>[
  RangePointer(
      value: secondaryProgressValue,
      width: 0.1,
      sizeUnit: GaugeSizeUnit.factor,
      color: const Color.fromARGB(120, 0, 169, 181),
      cornerStyle: CornerStyle.bothCurve),
  RangePointer(
      value: progressValue,
      width: 0.1,
      sizeUnit: GaugeSizeUnit.factor,
      cornerStyle: CornerStyle.bothCurve)
],

Buffer Progress Bar

Segmented circular progress bar style

The segmented circular progress bar style allows you to divide a progress bar into multiple segments to visualize the progress of multi-sequence tasks.

Segmented Circular Progress Bar Styles

Design the segmented progress bar by customizing the RadialAxis and the RangePointer. In addition, you need to add one more RadialAxis over the first axis to create the segmented lines in the progress bar. The segmented lines are generated by enabling the major ticks for the secondary radial axis with a certain interval and disabling the other axis elements.
axes: <RadialAxis>[
  // Create primary radial axis
  RadialAxis(
    minimum: 0,
    maximum: 100,
    showLabels: false,
    showTicks: false,
    startAngle: 270,
    endAngle: 270,
    radiusFactor: 0.7,
    axisLineStyle: AxisLineStyle(
      thickness: 0.2,
      color: const Color.fromARGB(30, 0, 169, 181),
      thicknessUnit: GaugeSizeUnit.factor,
    ),
    pointers: <GaugePointer>[
      RangePointer(
        value: progressValue,
        width: 0.05,
        pointerOffset: 0.07,
        sizeUnit: GaugeSizeUnit.factor,
      )
    ],
  ),
  // Create secondary radial axis for segmented line
  RadialAxis(
    minimum: 0,
    interval: 1,
    maximum: 4,
    showLabels: false,
    showTicks: true,
    showAxisLine: false,
    tickOffset: -0.05,
    offsetUnit: GaugeSizeUnit.factor,
    minorTicksPerInterval: 0,
    startAngle: 270,
    endAngle: 270,
    radiusFactor: 0.7,
    majorTickStyle: MajorTickStyle(
        length: 0.3,
        thickness: 3,
        lengthUnit: GaugeSizeUnit.factor,
        color: Colors.white),
  )
]

Segmented Progress Bar

Animate progression with real-time data

We have looked at applying different styles to the circular progress bar. Now, let's see how to update it with real-time data. In a real application, the progress value will be fetched from a service and updated in the pointer. This demo uses a timer to simulate progress updates at 100-millisecond intervals. The app's state is changed to rebuild the widgets. The progressValue variable is used to set the pointer value and the annotation's text value. On each timer tick, the progressValue variable is incremented by 1 in the setState callback, as shown in the following code, to update the pointer and annotation text values.

_timer = Timer.periodic(const Duration(milliseconds: 100), (_timer) {
  setState(() {
    _progressValue++;
  });
});

In the RangePointer, set the animationType property to linear and set the timer duration in the pointer's animationDuration property to animate the progression.
pointers: <GaugePointer>[
  RangePointer(
      value: _value1,
      width: 0.05,
      sizeUnit: GaugeSizeUnit.factor,
      enableAnimation: true,
      animationDuration: 100,
      animationType: AnimationType.linear)
],

GitHub reference: You can download the entire sample code for all the circular progress bar types explained here from this GitHub location.

Conclusion

I hope you have enjoyed reading this blog and now have a clear idea of how to create various styles for a determinate-type circular progress bar. In the upcoming part 2 blog, you can expect some more beautiful styles.

The Syncfusion Flutter Radial Gauge widget has been designed with flexible UI customization options to adapt to your app easily. It includes developer-friendly APIs to increase your productivity. You can find the complete user guide here, and you can also check out our other samples in this GitHub location. Additionally, you can download and check out our demo app on Google Play, the App Store, and our website.

The Radial Gauge is also available for our Xamarin, UWP, WinForms, WPF, Blazor, ASP.NET (Core, MVC, Web Forms), JavaScript, Angular, React, and Vue platforms. Check them out and create stunning circular progress bars!

If you have any questions about this control, please let us know in the comments section below. You can also contact us through our support forum, Direct-Trac, or feedback portal. We are always happy to assist you!
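As a recap, the separate snippets above can be combined into one minimal page. The sketch below is illustrative rather than production code: it assumes a recent, null-safe version of Flutter and the syncfusion_flutter_gauges package (the property names match the examples above), and the 100 ms timer simply simulates progress data arriving from a service.

```dart
import 'dart:async';

import 'package:flutter/material.dart';
import 'package:syncfusion_flutter_gauges/gauges.dart';

void main() => runApp(const MaterialApp(home: ProgressPage()));

class ProgressPage extends StatefulWidget {
  const ProgressPage({super.key});

  @override
  State<ProgressPage> createState() => _ProgressPageState();
}

class _ProgressPageState extends State<ProgressPage> {
  double _progressValue = 0;
  Timer? _timer;

  @override
  void initState() {
    super.initState();
    // Simulate real-time progress updates every 100 ms.
    _timer = Timer.periodic(const Duration(milliseconds: 100), (timer) {
      setState(() {
        _progressValue++;
        if (_progressValue >= 100) timer.cancel();
      });
    });
  }

  @override
  void dispose() {
    _timer?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: SfRadialGauge(axes: <RadialAxis>[
          RadialAxis(
            minimum: 0,
            maximum: 100,
            showLabels: false,
            showTicks: false,
            // Track (axis line) behind the progress pointer.
            axisLineStyle: AxisLineStyle(
              thickness: 0.2,
              cornerStyle: CornerStyle.bothCurve,
              color: const Color.fromARGB(30, 0, 169, 181),
              thicknessUnit: GaugeSizeUnit.factor,
            ),
            // The progress pointer, driven by _progressValue.
            pointers: <GaugePointer>[
              RangePointer(
                value: _progressValue,
                cornerStyle: CornerStyle.bothCurve,
                width: 0.2,
                sizeUnit: GaugeSizeUnit.factor,
              )
            ],
            // Center label showing the current progress.
            annotations: <GaugeAnnotation>[
              GaugeAnnotation(
                positionFactor: 0.1,
                angle: 90,
                widget: Text('${_progressValue.toStringAsFixed(0)} / 100',
                    style: const TextStyle(fontSize: 11)),
              )
            ],
          )
        ]),
      ),
    );
  }
}
```

Cancelling the timer in dispose (and once the value reaches 100) avoids calling setState on a disposed widget; swapping in any of the other RadialAxis/RangePointer configurations from this post changes only the style, not this scaffold.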
https://medium.com/syncfusion/create-stunning-circular-progress-bars-with-flutter-radial-gauge-part-1-94a76db342dc
['Rajeshwari Pandinagarajan']
2020-08-13 11:47:24.655000+00:00
['Mobile App Development', 'Dart', 'Flutter', 'Productivity', 'Web Development']
Travel virtually during lockdown
Travel virtually during lockdown
How to escape right from your living room with this simple tip
Photo by Simon Migaj on Unsplash

Being at home for so many weeks on end can become a real drag while the world is experiencing the COVID-19 crisis. You're stuck inside seeing the same four walls each day, with little escape on weekends. But fear not: I'm here to show you how you can travel over vast distances without ever leaving your living room.

What you'll need

If you have a decent internet connection and a Google Chromecast, Apple TV, or other streaming device, then you have everything you'll need for this simple escape.

YouTube is a weird and wonderful thing

Yes, there is plenty of entertainment available on YouTube, but there are also some beautiful things to stream absentmindedly in the background while you're stuck at home, and some of these are ambient experiences from video games. Those who love good ambient music will probably already be familiar with YouTube's unending supply. But did you also know that there are people who upload ambient experiences from popular game titles like Skyrim, Fallout, and Red Dead Redemption? These aren't just static scenes; they are recordings from right inside the game, with trees waving in the wind, water flowing down rivers, and even people moving around. Sometimes they are also accompanied by music from the game that fits the scene. Here are a couple of examples of ambient experiences from the hit title Red Dead Redemption 2. Of course, you aren't at all limited to the outdoors: if it's the inside of a quiet tavern you're looking for, with the roar of an open fireplace warming you through, I've got you covered. Or perhaps it's the quiet, radiation-fuelled post-apocalyptic climes of the Fallout 4 wasteland that interest you. If video game locations aren't your thing, then there are plenty of other ambient escapes for you to experience. Like this beach in Thailand.
Or if you really have a hankering for the in-flight experience. Wherever you'd prefer, it's almost guaranteed you'll find somewhere interesting to visit. Not only will this give you a different experience from the inside of your home, but it might even help clear out the cobwebs and support your mental health at the same time.

Completing the experience

Now, you've chosen a video to put on while you're at home, and perhaps you've set up a playlist on YouTube to change throughout the day. Great! Time to dust off your Chromecast or other streaming device and send the video to your home entertainment system. Once the video is streaming, turn the speakers up to an acceptable level and enjoy the bliss of somewhere else, be it the plains of New Mexico, a cosy tavern in a medieval setting, or the post-apocalyptic seaside vistas of Far Harbour. Remember that a new escape is just a quick YouTube search away…

Originally published at hellotimking.com on April 8, 2020
https://hellotimking.medium.com/travel-virtually-during-lockdown-baf142aa8318
['Tim King']
2020-07-19 03:10:19.737000+00:00
['Travel', 'Escape', 'Mental Health', 'Covid-19', 'Coronavirus']
How to be Your Own “Writers’ Contest” Judge
If you’ve been struggling, working on your book for more than a couple of years, then the odds are pretty darned high that from time to time you’ve lost perspective about how good your writing is, how interesting your characters are, how clear their motivations are, and whether readers will care about or connect to your story. Entering your story into a contest can give you fabulous feedback on all kinds of important areas of your story that can boost your confidence and make your story even better. But if you’re still in the editing stage, you may not be ready to enter real contests. So, creating a contest situation just for your book is a perfect solution! Although you could absolutely use the questions in this mock contest evaluation to review your own manuscript, it really will work better if you find two or three trusted readers to “judge” your first 25 pages. By ‘trusted’ I mean a few specific things: Your ‘judges’ read and love the genre you’re writing in. They have an appreciation for the fact that books do not fall from fingertips in their final form and know how to offer constructive feedback. They are people whose opinions you trust. If you send your pages to three judges, you may get back three very different scores and comments. So, when your judges send back your scoresheets, it’s often helpful to review their comments and then set them aside for a few days to see what resonates and what you can ignore. And make a conscious point to celebrate all the high scores and positive comments!
https://medium.com/love-and-stuff/how-to-be-your-own-writers-contest-judge-45030a7d2cf6
['Danika Bloom']
2019-08-13 01:29:49.464000+00:00
['Books', 'Writing', 'Writing Tips', 'Writer', 'Fiction']
Maker Mindset for Data Entrepreneurs
Academia 🤝 Industry In line with my previous post, I asked Arjan about his view on the differences between the two worlds: “Most companies are very far away from science. That is a bit of a shame, but experiments can close this gap. Once companies start adopting a data-driven mindset, they come closer to the basic principles of science. On the other hand, I expect that scientists will shortly reap the benefits of industry collaborations due to the exponential growth of data collection. This will lead to new research areas, which you already see happening at tech giants such as Twitter and Facebook.” Despite his general concerns about the lack of an experiment-driven culture in business, it turns out Arjan is very excited about the tech giant whose head office is located in the beautiful capital of our very own country: Booking.com. In his former role as — what his mentor Mats Einarsen likes to refer to as — Travel Scientist, Arjan was trying to find answers to questions such as: “Are we able to accurately forecast at which holiday destination potential travelers will have the best experience of their life?” What he relished most in this position was the data-driven culture: “In the morning you drank a cup of coffee, thought of an experiment and delivered it to the ‘Experiment Tool’ (i.e. Booking’s in-house-built A/B testing platform). You thus executed the experiment and evaluated its results, after which the process started all over again the day after. This approach was so radically different from what was common for most companies at the time: first a meeting to brainstorm ideas, then someone needed to write a project plan, another meeting to discuss the project plan, yet another meeting to create a budget, only to find out two years down the line that the experiment did not work at all.” He continues: “In business, assumptions are often still the norm: ‘I think that…’ [followed by a list of personal beliefs].
The first thing they asked at Booking.com once you had shared your new idea was: ‘Do you have any data to support that claim?’. I have not yet come across a single company that has integrated this data-driven approach across all departments and processes to such an extent.”
https://medium.com/the-outlier/maker-mindset-for-data-entrepreneurs-5727a8e944b5
['Roy Klaasse Bos']
2018-05-10 21:42:36.252000+00:00
['Startup', 'Entrepreneurship', 'Data Science', 'University', 'Academia']
How to Start and Keep a Daily Journaling Habit
How to Start and Keep a Daily Journaling Habit Writing down your thoughts every day can change your life. Photo by Hannah Olinger on Unsplash I’ve been journaling almost daily since I was a kid, long before everyone started touting its mental health benefits. I’ve also known that I wanted to be a writer since I first learned to read at age three. Because I had a difficult time making friends, books were my only companions for several years in my childhood. As a result, I wanted to grow up to create more books so that awkward kids like me would feel a little less alone. I started writing stories whenever I could. My parents bought me new notebooks all the time and I filled them up just as quickly. At age six, I started keeping my first diary. Every night before bed, I’d write down what had happened that day and how I felt about it. That diary is actually still at my parents’ house. I reread it last year when they moved, and it brought back so many memories I’d thought would never resurface. It reminded me of one of the main reasons I journal in the first place: to record memories. I tend to forget things unless I write them down. Maybe that’s a side effect of my ADHD. Whatever the reason, I like to go back and read old journals to remember how my life used to be. I also enjoy reflecting on how far I’ve come since that point in my life. For those reasons, I’ve kept a journal almost constantly since I was a child. Most of the time, I’d use it as a sort of record of my life, but I’d occasionally use it to organize story ideas, brainstorm characters, or scribble lines of poetry. More recently, keeping a daily pandemic notebook helped me figure out that I’m actually a lesbian. These days, my journal is a place for all of my random thoughts. I pick a time every morning and every evening after work to write down whatever is going through my head. I find journaling to be therapeutic because it helps to physically transfer bad thoughts from your brain to the page. 
This gets them out of your head so that you don’t think about them quite as much anymore. For someone with severe depression and anxiety like me, journaling is a lifesaver. But even if you don’t have any mental illnesses, journaling can be beneficial for you too. If you’re part of the LGBTQ+ community, journaling can help you better come to terms with your true identity. If you’re a writer, journaling can help you organize your ideas and figure out what you really want to write about. No matter who you are, I believe that journaling can help you. But it can be a bit hard to get started. After all, I know better than many people how intimidating it can be to stare down the barrel of a blank page. That’s why I’ve written this easy guide that should help you get started.
https://medium.com/the-brave-writer/how-to-start-and-keep-a-journaling-habit-2e5139277140
['Danny Jackson H.']
2020-09-23 16:02:56.964000+00:00
['Writing', 'Productivity', 'Habits', 'Writing Tips', 'Journaling']
Data Labelling
The triple-barrier method labels an observation according to the first barrier touched out of three barriers, introduced in Chapter 3 of Advances in Financial Machine Learning by Marcos Prado¹. The conventional way to label the data is to use the next-day (lagged) return with the fixed-time horizon method: given a fixed threshold 𝜏, an observation is labelled −1 if the next-period return is below −𝜏, 1 if it is above 𝜏, and 0 otherwise. There are several drawbacks to this popular conventional labelling method. First, time bars do not exhibit good statistical properties. Second, the same threshold 𝜏 is applied regardless of the observed volatility. Basically, the labelling doesn’t reflect the current state of the investment. Moreover, in a real case, the chances are that you may not want to sell the next day. Therefore, the triple-barrier method makes more sense in practice, as it is path-dependent: you can make sound decisions depending on how many days you are planning to hold the stock and what’s happening to the stock during that period. The original code from Chapter 3 of Advances in Financial Machine Learning was created for high-frequency trading, using high-frequency data, mostly intraday. If you are using daily data, the code needs a little tweaking. I also refactored most of the code from the book to make it beginner-friendly by heavily utilizing the pandas DataFrame structure to store all the information in one place. This way, life gets much easier later on when you start to analyze or plot the data. In the meantime, I employed more complicated approaches such as the Average True Range as the daily volatility. You can see all the code at the end of this article. Intuition The intuition is like finding outliers, as described in my previous articles. Outliers are just like breakthroughs in stock trading: they define the barriers, forming a window in which you make a buy or sell decision. If you haven’t read them, you can always go back here, here and here.
According to Advances in Financial Machine Learning by Marcos Prado¹, basically what we are doing with the triple-barrier method is this: we buy a stock (let’s say Apple) and hold it for 10 days. If the price goes down and triggers the stop-loss alarm, we exit at the stop-loss limit; if the price goes up, we take the profit at a certain point. In the extreme case where the stock price moves sideways, we exit on a certain day after holding it for a while. Assume we have a simple equity-management rule: Never risk more than 2% of your total capital in a trade. Always look to trade only those opportunities where you will have a 3:1 earnings ratio. Based on those simple rules, we make a trading plan before we put real money into any stock. To infuse that trading plan into the stock price movement, we need three barriers. What are those three barriers? Four lines form a frame and define a window, as shown below. The x-axis is the datetime and the y-axis is the stock price. Lines a and d belong to the x-axis (the datetime index), and lines b and c belong to the y-axis (the stock price). a: starting date. b: stop-loss exit price. c: profit-taking exit price. d: starting date + the number of days you are planning to hold the stock. b and c don’t have to be the same distance from the entry price. Remember, we want to set profit-taking and stop-loss limits that are a function of the risks involved in a bet, and we are always looking to trade only those opportunities where we will have a 3:1 earnings ratio. Setting c = 3 * b will do the trick. There are a few videos on this topic; I just found one on YouTube. OK, without further ado, let’s dive into the code. 1. Data preparation For consistency, in all the 📈Python for finance series, I will try to reuse the same data as much as I can. More details about data preparation can be found here, here and here, or you can refer back to my previous article.
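Before diving into the full implementation, the barrier geometry described above (lines b, c, d and the c = 3 · b rule) can be sketched in a few lines. The helper name and the numbers here are illustrative, not part of the article’s code:

```python
def barrier_levels(entry_price, daily_vol, upper_mult=3, lower_mult=1,
                   hold_days=10):
    """Compute the three barriers for one entry.

    entry_price: price on the starting date (line a)
    daily_vol:   estimated daily volatility (e.g. a 20-day EWM std of returns)
    upper_mult / lower_mult: profit-taking / stop-loss multipliers
                 (3 and 1 here, matching the 3:1 earnings-ratio rule)
    hold_days:   vertical barrier, line d = a + hold_days
    """
    top = entry_price * (1 + upper_mult * daily_vol)     # line c: take profit
    bottom = entry_price * (1 - lower_mult * daily_vol)  # line b: stop loss
    return top, bottom, hold_days

top, bottom, horizon = barrier_levels(100.0, 0.02)
print(top, bottom, horizon)  # ~106.0, ~98.0, 10
```

With a hypothetical entry at 100 and 2% daily volatility, the profit-taking barrier lands near 106, the stop loss at 98, and the vertical barrier 10 days out.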
Or if you like, you can ignore all the code below and use whatever clean data you have at hand; it won’t affect the things we are going to do together.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [16, 9]
plt.rcParams['figure.dpi'] = 300
plt.rcParams['font.size'] = 20
plt.rcParams['axes.labelsize'] = 20
plt.rcParams['axes.titlesize'] = 24
plt.rcParams['xtick.labelsize'] = 16
plt.rcParams['ytick.labelsize'] = 16
plt.rcParams['font.family'] = 'serif'

import yfinance as yf

def get_data(symbols, begin_date=None, end_date=None):
    df = yf.download(symbols, start=begin_date,
                     auto_adjust=True,  # only download adjusted data
                     end=end_date)
    # my convention: always lowercase
    df.columns = ['open', 'high', 'low', 'close', 'volume']
    return df

Apple_stock = get_data('AAPL', '2000-01-01', '2010-12-31')
price = Apple_stock['close']

2. Daily Volatility

The original code (below) for getting the daily volatility is meant for intraday data, which is consecutive data with no weekends, non-business days, etc.

def getDailyVol(close, span0=100):
    # daily vol, reindexed to close
    df0 = close.index.searchsorted(close.index - pd.Timedelta(days=1))
    df0 = df0[df0 > 0]
    df0 = pd.Series(close.index[df0 – 1],
                    index=close.index[close.shape[0] - df0.shape[0]:])
    df0 = close.loc[df0.index] / close.loc[df0.values].values - 1  # daily returns
    df0 = df0.ewm(span=span0).std()
    return df0

If you run this function, you will get an error message: SyntaxError: invalid character in identifier. That is because close.index[df0 – 1] contains an en dash instead of a minus sign. It can be fixed like this:

def getDailyVol(close, span0=100):
    # daily vol, reindexed to close
    df0 = close.index.searchsorted(close.index - pd.Timedelta(days=1))
    df0 = df0[df0 > 0]
    a = df0 - 1  # using a variable to avoid the error message
    df0 = pd.Series(close.index[a],
                    index=close.index[close.shape[0] - df0.shape[0]:])
    df0 = close.loc[df0.index] / close.loc[df0.values].values - 1  # daily returns
    df0 = df0.ewm(span=span0).std()
    return df0

If you use daily data instead of intraday data, you will end up with lots of duplicates, because moving the date back one day often lands on the same previous trading day, and this causes many NaNs later on since many dates are non-business days.

df0 = close.index.searchsorted(close.index - pd.Timedelta(days=1))
pd.Series(df0).value_counts()

2766 − 2189 = 577 duplicates. With daily data, we can instead use the exponentially weighted moving (EWM) standard deviation of simple percentage returns as the volatility.

def get_Daily_Volatility(close, span0=20):
    # simple percentage returns
    df0 = close.pct_change()
    # 20 days, a month: EWM std as the boundary
    df0 = df0.ewm(span=span0).std()
    df0.dropna(inplace=True)
    return df0

df0 = get_Daily_Volatility(price)
df0

Depending upon the type of problem, we can choose more complicated approaches such as the Average True Range (ATR, a technical analysis indicator that measures market volatility). The first step in calculating ATR is to find a series of true range values for a stock price. The price range of an asset for a given trading day is simply its high minus its low, while the true range is the greatest of: the current high less the current low; the absolute value of the current high less the previous close; and the absolute value of the current low less the previous close. The average true range is then a moving average, generally over 14 days, of the true ranges.
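As a self-contained cross-check of the true-range recipe just described, here is a minimal sketch on tiny hypothetical OHLC data (the atr helper and the numbers are mine, not the article’s implementation):

```python
import pandas as pd

def atr(high, low, close, win=14):
    """Average True Range as described above: a rolling mean of the
    true range. A minimal sketch, assuming plain pandas Series inputs."""
    prev_close = close.shift(1)
    # true range: the greatest of the three candidate ranges
    tr = pd.concat([high - low,
                    (high - prev_close).abs(),
                    (low - prev_close).abs()], axis=1).max(axis=1)
    return tr.rolling(win, min_periods=win).mean()

# tiny hypothetical OHLC data
high = pd.Series([10.0, 10.5, 10.2, 10.8])
low = pd.Series([9.5, 9.8, 9.7, 10.0])
close = pd.Series([9.8, 10.1, 10.0, 10.6])
print(atr(high, low, close, win=2))
```

With a 2-day window the first value is NaN (not enough true ranges yet), and each later value is the mean of the last two true ranges.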
def get_atr(stock, win=14):
    # true range: the greatest of (high - low), |high - previous close|
    # and |low - previous close|
    prev_close = stock.close.shift(1)
    tr = np.max([stock.high - stock.low,
                 (stock.high - prev_close).abs(),
                 (stock.low - prev_close).abs()], axis=0)
    # ATR: a rolling mean of the true range
    atr_df = pd.Series(tr, index=stock.index).rolling(
        win, min_periods=win).mean()
    return atr_df

atr_df = get_atr(Apple_stock, 14)
atr_df

3. Triple-Barrier

Before we start to work on the barriers, a few parameters need to be decided.

# set the boundary of the barriers, based on the 20-day EWM
daily_volatility = get_Daily_Volatility(price)
# how many days we hold the stock, which sets the vertical barrier
t_final = 10
# the upper and lower boundary multipliers
upper_lower_multipliers = [2, 2]
# align the index
prices = price[daily_volatility.index]

Here, I will use a pd.DataFrame as the container to hold all the information in one place.

def get_3_barriers():
    # create a container
    barriers = pd.DataFrame(columns=['days_passed', 'price',
                                     'vert_barrier',
                                     'top_barrier', 'bottom_barrier'],
                            index=daily_volatility.index)
    for day, vol in daily_volatility.iteritems():
        days_passed = len(daily_volatility.loc[
            daily_volatility.index[0]:day])
        # set the vertical barrier
        if (days_passed + t_final < len(daily_volatility.index)
                and t_final != 0):
            vert_barrier = daily_volatility.index[days_passed + t_final]
        else:
            vert_barrier = np.nan
        # set the top barrier
        if upper_lower_multipliers[0] > 0:
            top_barrier = prices.loc[day] + prices.loc[day] * \
                upper_lower_multipliers[0] * vol
        else:
            # set it to NaNs
            top_barrier = pd.Series(index=prices.index)
        # set the bottom barrier
        if upper_lower_multipliers[1] > 0:
            bottom_barrier = prices.loc[day] - prices.loc[day] * \
                upper_lower_multipliers[1] * vol
        else:
            # set it to NaNs
            bottom_barrier = pd.Series(index=prices.index)
        barriers.loc[day, ['days_passed', 'price', 'vert_barrier',
                           'top_barrier', 'bottom_barrier']] = \
            days_passed, \
            prices.loc[day], vert_barrier, top_barrier, bottom_barrier
    return barriers

Let’s have a look at all the barriers.

barriers = get_3_barriers()
barriers

and have a closer look at the data information.

barriers.info()

Only the vert_barrier column has 11 NaN values at the end, since t_final was set to 10 days. The next step is to label each entry according to which barrier was touched first. I add a new column, ‘out’, to the end of barriers.

barriers['out'] = None
barriers.head()

Now, we can work on the labels.

def get_labels():
    '''
    start: first day of the window
    end: last day of the window
    price_initial: first day stock price
    price_final: last day stock price
    top_barrier: profit-taking limit
    bottom_barrier: stop-loss limit
    condition_pt: top_barrier touching condition
    condition_sl: bottom_barrier touching condition
    '''
    for i in range(len(barriers.index)):
        start = barriers.index[i]
        end = barriers.vert_barrier[i]
        if pd.notna(end):
            # assign the initial and final price
            price_initial = barriers.price[start]
            price_final = barriers.price[end]
            # assign the top and bottom barriers
            top_barrier = barriers.top_barrier[i]
            bottom_barrier = barriers.bottom_barrier[i]
            # set the profit-taking and stop-loss conditions
            condition_pt = (barriers.price[start:end] >=
                            top_barrier).any()
            condition_sl = (barriers.price[start:end] <=
                            bottom_barrier).any()
            # assign the labels
            if condition_pt:
                barriers['out'][i] = 1
            elif condition_sl:
                barriers['out'][i] = -1
            else:
                barriers['out'][i] = max(
                    [(price_final - price_initial) /
                     (top_barrier - price_initial),
                     (price_final - price_initial) /
                     (price_initial - bottom_barrier)],
                    key=abs)
    return

get_labels()
barriers

We can plot the ‘out’ column to see its distribution.

plt.plot(barriers.out, 'bo')

and count how many profit-taking and stop-loss limits were triggered.

barriers.out.value_counts()

There are 1385 profit-taking and 837 stop-loss exits out of 2764 data points; 542 cases exit because time is up. We can also pick a random date and show it on a graph.
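Before plotting, the first-touch logic above can be recapped in a self-contained sketch. Note this simplified version returns 0 when the vertical barrier is hit, instead of the scaled return used in get_labels, and the helper name and numbers are mine:

```python
import pandas as pd

def first_touch_label(path, top, bottom):
    """Label one price path by the first barrier touched
    (a simplified version of the 'out' logic above).
    Returns 1 (profit taken), -1 (stopped out),
    or 0 (vertical barrier hit: time ran out)."""
    for p in path:
        if p >= top:
            return 1
        if p <= bottom:
            return -1
    return 0

window = pd.Series([100, 101, 99, 103, 102])
print(first_touch_label(window, top=102.5, bottom=97.0))  # 103 crosses the top first -> 1
```

The path matters here, not just the endpoint: the same window would be labelled 0 under the fixed-time horizon method if the final price sat between the barriers.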
fig, ax = plt.subplots()
ax.set(title='Apple stock price', xlabel='date', ylabel='price')
ax.plot(barriers.price[100:200])
start = barriers.index[120]
end = barriers.vert_barrier[120]
upper_barrier = barriers.top_barrier[120]
lower_barrier = barriers.bottom_barrier[120]
ax.plot([start, end], [upper_barrier, upper_barrier], 'r--')
ax.plot([start, end], [lower_barrier, lower_barrier], 'r--')
ax.plot([start, end], [(lower_barrier + upper_barrier) * 0.5,
                       (lower_barrier + upper_barrier) * 0.5], 'r--')
ax.plot([start, start], [lower_barrier, upper_barrier], 'r-')
ax.plot([end, end], [lower_barrier, upper_barrier], 'r-')

We can also draw a dynamic graph with ease.

fig, ax = plt.subplots()
ax.set(title='Apple stock price', xlabel='date', ylabel='price')
ax.plot(barriers.price[100:200])
start = barriers.index[120]
end = barriers.index[120 + t_final]
upper_barrier = barriers.top_barrier[120]
lower_barrier = barriers.bottom_barrier[120]
ax.plot(barriers.index[120:120 + t_final + 1],
        barriers.top_barrier[start:end], 'r--')
ax.plot(barriers.index[120:120 + t_final + 1],
        barriers.bottom_barrier[start:end], 'r--')
ax.plot([start, end], [(lower_barrier + upper_barrier) * 0.5,
                       (lower_barrier + upper_barrier) * 0.5], 'r--')
ax.plot([start, start], [lower_barrier, upper_barrier], 'r-')
ax.plot([end, end], [barriers.bottom_barrier[end],
                     barriers.top_barrier[end]], 'r-')

To recap the parameters we have:

Data: Apple, 10 years of stock prices
Hold for: no more than 10 days
Profit-taking boundary: 2 times the 20-day EWM std of returns
Stop-loss boundary: 2 times the 20-day EWM std of returns

The rules we expect in a real case:

Always look to trade only those opportunities where you will have a 3:1 earnings ratio.
Never risk more than 2% of your total capital in a trade.

The first rule can be easily realized by setting upper_lower_multipliers = [3, 1]. The second one is about the trading size: the side times the size will enable us to calculate the risk (margin/edge).
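The 2% capital-risk rule can itself be sketched as a tiny position-sizing helper (the function name is mine and the numbers are hypothetical; this is not part of the article’s code):

```python
def position_size(capital, entry_price, stop_price, risk_fraction=0.02):
    """Number of shares so that hitting the stop loses at most
    risk_fraction of total capital ('never risk more than 2% in a trade')."""
    risk_per_share = entry_price - stop_price
    if risk_per_share <= 0:
        raise ValueError("stop must be below entry for a long trade")
    return int((capital * risk_fraction) // risk_per_share)

# $100,000 capital, entry at $100 with a stop at $98:
# a $2,000 risk budget at $2 risk per share -> 1,000 shares
print(position_size(100_000, 100.0, 98.0))
```

The stop distance here is exactly the gap between the entry price and the bottom barrier, so tighter stops allow larger positions for the same capital at risk.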
That will be meta-labelling in the next article. So, stay tuned! Here is all the code:
https://towardsdatascience.com/the-triple-barrier-method-251268419dcd
['Ke Gui']
2020-10-13 11:19:44.346000+00:00
['Machine Learning', 'Artificial Intelligence', 'Python', 'Finance', 'Data Science']
How Reading 50 Books in a Year Greatly Improved My Writing
Reading Purposefully Speed-reading may help us to cover more ground, but sometimes it comes with a compromise. We might inadvertently sacrifice some depth in thought for a few extra pages. To me, the beauty of reading is really getting to understand what the author is trying to convey. Reading speed can be slowly improved simply by reading more, so there is no need to race yourself to read faster all the time. It is one of the very few moments in life in which you can take it slow. I want to treasure it. The What Reading with purpose is to understand the text at hand rather than simply finish it. A good demonstration of understanding is knowing how to ask yourself the right questions that prompt you to think. The key is to draw connections to what you know and be radically open-minded to explore new possibilities. The How A good start would be to use the K-W-L method for learning. It is a graphical organizer developed by Donna Ogle to allow readers to take charge of their own learning. Typically, readers would have a chart that is divided into three sections: What I Know What I Want to Know What I Learned Firstly, examining what you know helps to establish a foundation that can be useful for comparing with what you learned in the end. A well-written book provokes you to rethink what you know and challenges your preconceived notions. Following this, I personally will read the synopsis and contents of the book to get the gist of what I aim to achieve out of reading it. I will then write down my learning objectives in the W section and subconsciously pay specific attention to whatever can enhance my knowledge. For example, in the book “Principles: Life and Work” by Ray Dalio, I wanted to learn how I could change my day-to-day decision-making framework to become more systematic so that I can be decisive even in the face of ambiguity.
For this section, there is also a need to strike a healthy balance between focusing on finding the information you seek and being receptive in the face of new information. There is a clear danger of tunnel vision, which we want to avoid as much as possible to gain the most out of our reading experience. Last but not least, I write down the key takeaways for each chapter in the L column. I set a limit of five bullet points per chapter, keeping it concise and readable. Every time I finish a book, I take on the role of critic, summarizing the text and giving it my honest review. Doing so enables you to practice being an independent thinker, as it empowers you to formulate your own opinions and encourages you to think more deeply about what you read. I encourage you to find whatever method works best for you. If you’re a visual learner, why not try a mind map? Some concepts can be put into practice, so why not try them out in real life and see the results for yourself? As long as there is a way for you to internalize the lessons you’ve learned and review them in the future, you are set to go.
https://medium.com/better-marketing/how-reading-50-books-in-a-year-greatly-improved-my-writing-937da494c3e4
['Yang Chun Wei']
2020-03-04 02:33:07.416000+00:00
['Writing Tips', 'Books', 'Self Improvement', 'Reading', 'Writing']
Why You Don’t Feel Like You Deserve Compliments
I have a real issue. When people say nice things about me, I don’t believe them. Whether it’s how I look, an accomplishment I’ve recently achieved, an idea I had, or even just on cookies I baked, when people compliment me, I think they’re lying or trying to manipulate me somehow, or that they mean well but don’t recognize that what I’ve done is actually not that good. Like when your partner tells you you’re beautiful — they’re kind of obligated, so the meaning’s gone for you. So when someone says to me, “Wow, your dress looks amazing today!” I deflect. I’ll say, “Oh thanks — I don’t know, I guess I like it. I can ride my bike in it, which is the important thing!” Cue the classic: “Thanks, it has pockets!” Often, if someone compliments me, I’ll immediately flash back to the one time someone said something negative about me instead. This might sound familiar to some of you — the ability to overlook fifty positive comments in favor of the one negative one that someone said to you, once, ten years ago. It’s so easy to believe, dwell on, eternally reflect on the negative even when it’s overwhelmingly outweighed by the positive. Why does that happen? Why do we struggle to believe good things? Science tells us it’s a loop. There are three factors happening here, feeding into one another endlessly to make it hard to accept compliments: low self-esteem, cognitive dissonance, and high expectations. It goes like this: you don’t think much of yourself, for whatever reason. Maybe it’s imposter syndrome, maybe you’ve only been valued for one aspect for most of your life, so it’s impossible to see your worth in others. Maybe you’re continually comparing yourself to others and coming up short in your own estimation. Either way, you have low self-esteem. So when someone compliments you, this jars with the truth you hold about yourself. It’s uncomfortable for your mind, because you’re faced with two prospects: one, you’re wrong about yourself, or two, they’re lying. 
You can’t simultaneously believe you suck and believe someone else when they say that you don’t. So while your brain is working furiously to justify the two things concurrently, your mouth will pop open and start to justify things to the other person. “Oh, I just got lucky.” “I guess the stars aligned.” “Good thing they asked the right question that I knew the answer to.” Just like that, the pressure’s off. This plays into the last factor: high expectations. Because you have low self-esteem, because you struggle to believe other people when they’re kind to you, you want to shirk any expectations as soon as possible. So you respond to the situation in a way that lets you off the hook if you don’t succeed next time. This relieves a bit of the pressure and anxiety you feel when someone compliments you. You’ve successfully shucked off their expectation that you’re gonna do well. But it’s unpleasant to constantly be second-guessing every nice thing people say. Sometimes, people are just nice. It’s good for our brains to be told we’re good. We’ve established why this thing happens — because we don’t believe in ourselves, and it’s more comfortable for our brains when nobody else believes in us, either. But it’s healthy when, instead of forever dwelling on negative feedback, we linger on the positive. Being able to experience happiness when other people notice you can be an important source of fulfillment. — Dr Whitbourne, Professor Emerita of Psychological and Brain Sciences. It turns out that when you push those positive memories away, you’re actually losing the ability to experience happiness over positive remarks. You can tell they’re positive, but there’s no warm glow of pride accompanying them. Research showed that people who routinely dismissed positive comments actually had a harder time remembering the level of positivity of the feedback, more so than people who accepted them to begin with. Accepting compliments can be hard. 
It’s worth it to accept compliments, both for your memory and your mental health in the long run. But it can feel boastful, to accept that something you’ve done is good. Who wants to be the narcissist who says “Yeah, I’m aware,” when someone compliments you? But even when you’re just trying to say “Thanks,” it feels like pulling teeth. The urge to justify or qualify your success is overwhelming. You want to buckle and say that it wasn’t you, that you just got lucky, that you won’t do so well next time. I’m like that, so I know it sucks. Obviously, I’d rather just believe the other person, say thanks, and move on with my life instead of obsessing over the appropriate way to accept compliments. But I want to get better. What happens when you stop self-criticizing? For one week only, just while I was at work, I accepted compliments. I did not deflect, I did not qualify, I did not put myself down immediately after accepting it. I simply said “Thank you,” to anything nice that people said about me. If my success was partially due to someone else, I said thanks and acknowledged their hard work, too, but I didn’t say it was all them. If I didn’t agree with a compliment, I still took it, choosing to believe that someone else’s opinion could still be valid even if I didn’t think it was true. In short, I tried to live for a week as though I could take credit when I did stuff well. My brain tried to convince me that any would-be complimenters were just being sarcastic, or that I’d spectacularly fail the next time I tried at anything and embarrass myself. But you know what? I didn’t spontaneously combust. I did not get fired. I didn’t become amazingly self-assured either, and my self-esteem wasn’t fixed overnight, but it felt good to accept that I might be good at some things. I didn’t get better at believing it — yet — but it did start to feel more natural to simply accept and move on. 
I hope in time I’ll find it easier to believe compliments, and not just give lip service to the idea. Compliments can feel like just a minor form of social interaction. We all say them, we all receive them. But I believe that just as important as learning to give one, is learning to take one.
https://zulie.medium.com/why-you-dont-feel-like-you-deserve-compliments-5acd3bb28324
['Zulie Rane']
2019-06-12 08:38:58.725000+00:00
['Relationships', 'Lifestyle', 'Mental Health', 'Self', 'Psychology']
Should you use Shopify for your online marketplace selling?
Should you use Shopify for your online marketplace selling? Finding a one-stop shop for selling on Amazon, eBay, in stores and beyond Photo credit: Create Her Stock Online shopping has remained the MVP for Christmas shopping, but it came in handy for me long before then. One of my easiest hustles in college was selling textbooks I no longer wanted nor needed for cheaper prices. I also saved a lot of money by buying books on Half.com (owned by eBay) and Amazon in 2001–2003 (before it was running the world of online shopping). While I’ll never knock Craigslist bargaining and Freecycle giveaways for sharing freebies, by the time I wrote my first book, I was running into a lot of black-owned businesses and creatives who were trying to figure out how to get their products out to the larger public. Should you join Etsy? Maybe eBay or Half? What about Cafe Press? Or, ditch it all and put it on a website with a PayPal button? You name it, and I’ve done it. But there’s one online marketplace arena that I truly wish I’d have known about, a one-stop shop to sell on a lot of different platforms online and in person, without having to log in and out of each site to do it: Shopify. Photo credit: Create Her Stock What is the big deal with Shopify? Are you new to Shopify or a connoisseur in selling and buying online? The setup process for creating a Shopify store is pretty easy to do. But what is the full cost of selling through Shopify? Read below for details. The setup process for a Shopify store Before you can start setting your prices on Shopify, you need to plan a few basics about the cost to sell on Shopify: Are you already selling online or in a brick and mortar store? When did you plan on launching your store? Do you want to sell online exclusively or in person, too? If you’ve sold your products in person, do you prefer markets, fairs, pop-up stores or brick-and-mortar stores? 
What are your current earnings for your products, or are you just starting out and haven’t made a profit yet? And most importantly, what exactly are you trying to sell: beauty products, clothing, electronics, furniture, handcrafts, jewelry, paintings, photography, restaurant meals, groceries, other food and drinks, sports products, or toys? Have you already planned out these answers? Or, maybe you’re planning this Shopify store for a client. Can they answer these questions to help you start their store? You’ll need to know this stuff before you can launch your store. Keep in mind that you’ll also have to register a real mailing address, even if you plan to sell online only. If you’re worried about giving out your personal address, creating a P.O. box or using a co-working address can help protect your privacy.

Choosing your startup rates to sell on Shopify

Whether you’re a serious seller or don’t know where to start (or what to sell), Shopify offers a 14-day trial to figure it out. (If the site is not for you, and you cancel before the 14 days are up, you’re not charged.)
Photo credit: She Bold Stock

Basic Shopify

For $29 per month, you will receive the following:

Online store
Sales channels
Gift cards
Shopify point-of-sale (POS) app for in-person selling
Credit card purchasing rates: Domestic 2.9 percent + $0.30; International 3.9 percent + $0.30; Amex 2.9 percent + $0.30; In-person 2.7 percent + $0.00
Staff accounts: Two (plus the account owner)
Shipping discount (up to 64 percent), buy and print shipping labels
Locations: Four (track inventory and fulfill orders at locations)

Shopify

For $79 per month, you will receive the following:

All Basic Shopify features
Professional analytics
Credit card purchasing rates: Domestic 2.6 percent + $0.30; International 3.6 percent + $0.30; Amex 2.6 percent + $0.30; In-person 2.5 percent + $0.00
Staff accounts: Five (plus the account owner)
Shipping discount (up to 72 percent), buy and print shipping labels
Locations: Five (track inventory and fulfill orders at locations)
Sells in up to two languages

Advanced Shopify

For $299 per month, you will receive the following:

All Basic Shopify features
Professional analytics
Data modeling
Calculated shipping rates
Credit card purchasing rates: Domestic 2.4 percent + $0.30; International 3.4 percent + $0.30; Amex 2.4 percent + $0.30; In-person 2.4 percent + $0.00
Staff accounts: Fifteen (plus the account owner)
Shipping discount (up to 74 percent), buy and print shipping labels
Locations: Eight (track inventory and fulfill orders at locations)
Sells in up to five languages

You can also choose to sell online with a “Buy” button and/or in person with a Point of Sale for $9 per month. Note that for all three accounts, a POS Pro upgrade is available for $89 per month, per location. And all three accounts will accept 133 currencies.

Photo credit: Create Her Stock

Shopify Plus

For $2,000 per month, more than 7,000 major brands use the Shopify Plus platform.
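The per-transaction rates above are easy to compare with a quick calculation. Here is a minimal, unofficial Python sketch (the plan names and domestic online rates are copied from the tiers above; the processing_fee helper is my own illustration, not a Shopify API):

```python
# Illustrative only: compute the card-processing fee on a domestic online sale
# under each plan's published rate (percentage of the sale plus a flat $0.30).

def processing_fee(sale_price, percent, flat=0.30):
    """Return the card-processing fee: percent of the sale plus a flat charge."""
    return round(sale_price * percent / 100 + flat, 2)

# Domestic online rates quoted above: Basic 2.9%, Shopify 2.6%, Advanced 2.4%
plans = {"Basic": 2.9, "Shopify": 2.6, "Advanced": 2.4}

for name, rate in plans.items():
    fee = processing_fee(100.00, rate)
    print(f"{name}: ${fee:.2f} fee on a $100 sale")
```

On a $100 sale, the Advanced plan's rate saves only $0.50 per transaction over Basic, which is why the higher tiers mostly pay off at volume.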
It includes everything above, plus 60 percent faster checkout; Shop Pay; built-in augmented reality (AR), video and 3D media; discounted shipping rates; and Avalara tax automation. If you don’t have thousands of transactions per minute, skip this.

Ways to save on Shopify

An annual payment will save you quite a bit of money.

For Basic Shopify, paying for a year will cost $312 versus $29 per month ($348). For two years, pay $558 at once versus $696. For three years, pay $783 at once versus $1,044.

For Shopify, paying for a year will cost $852 versus paying $79 per month ($948). For two years, pay $1,518 at once versus $1,896. For three years, pay $2,133 at once versus $2,844.

For Advanced Shopify, paying for a year will cost $3,192 versus paying $299 per month ($3,588). For two years, pay $5,640 at once versus $7,176. For three years, pay $7,884 at once versus $10,764.

Shopify sellers can also pause their store for up to three months without an extra charge; to qualify, your store must have been active for at least 60 days after the trial period. You can use a pause to build out the online store with upgrades and new products.

Photo credit: She Bold Stock

Sellers may be at risk of chargebacks

As with any online seller platform, there will be users who change their minds. Unfortunately, that can result in the seller losing more funds than they expect. Although Shopify has fraud software to police dishonest buyers, sometimes it doesn’t work. Anytime there is a high-value sale, a general rule of thumb is to verify the customer’s identity. Click here for tips on how to verify a buyer’s IP address, and click here for potential fraud warnings. If or when the credit card owner catches on, they can dispute the credit card charge. If the bank rules in their favor, the bank will issue a chargeback. The bank takes back the disputed amount, plus a chargeback fee, so you could lose more money than the sale price.
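To see how the prepay discounts add up, here is a small illustrative Python sketch (the savings helper and the dictionaries are my own; the dollar figures are the ones quoted above):

```python
# Illustrative check of the prepay-vs-monthly arithmetic quoted above.
# Prepaid totals are the article's figures; monthly totals are rate * months.

monthly_rate = {"Basic": 29, "Shopify": 79, "Advanced": 299}
prepaid = {  # quoted upfront prices for 1, 2 and 3 years
    "Basic": {12: 312, 24: 558, 36: 783},
    "Shopify": {12: 852, 24: 1518, 36: 2133},
    "Advanced": {12: 3192, 24: 5640, 36: 7884},
}

def savings(plan, months):
    """Dollars saved by prepaying for `months` instead of paying monthly."""
    return monthly_rate[plan] * months - prepaid[plan][months]

for plan in monthly_rate:
    print(plan, [savings(plan, m) for m in (12, 24, 36)])
```

For example, prepaying three years of Advanced Shopify saves $2,880 over paying month to month.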
If you’re an attentive seller, you can try to resolve the dispute or refund the purchase before the bank gets involved. For sellers who don’t check their messages, these chargeback charges may be a surprise; they’ll usually find out once the chargeback is deducted from the next available payout.

What about pricing rates for other online stores?

If you want to sell your products on other popular online retailers like Amazon or eBay, you can. You’ll need to set up an Amazon sales channel and hold a Professional Seller account first; visit Amazon Seller Central for the steps to do so. If you’re wondering whether it’s worth it to sell on both, keep in mind that 300 million shoppers worldwide are on Amazon. The site also charges a monthly subscription rate and per-item fees for selling. If you post items on Shopify and Amazon, you can add the Amazon sales channel from the Shopify App Store. To make sure all of your Sales Channels are connected, click your Sales Channels to confirm.

Discussing your pricing on online sales channels

As an online (and in-person) seller, you can discuss sales rates and prices, and give refunds, on Shopify. Download the Messenger option from the Sales Channel page. If you sell items on your own personal website in addition to Shopify, make sure you filter customer messages so you don’t miss any. All it takes is one missed message to ruin a sale.

Other Shopify charges you can control

As an online seller, you have the power to create discounts, too. First, you’ll want to compare the usual rates for your products versus what you want to charge. Second, decide how much you’re willing to discount your rates. Click here to create your own discount code as a percentage, fixed rate, a double deal or free shipping. Decide whether you want your deal to be based on a minimum number of items ordered or a minimum amount spent. For discounts, you can limit the number of sale items, choose the number of buyers and schedule sale dates.
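The discount options described above (percentage or fixed-amount codes, gated on a minimum spend) can be sketched roughly like this. This is a hypothetical illustration of the logic, not Shopify's actual implementation; the apply_discount helper and its parameters are my own:

```python
# Hypothetical sketch of a discount code: a percentage or fixed-amount
# discount that only applies once a minimum spend threshold is met.

def apply_discount(subtotal, kind, value, minimum_spend=0.0):
    """Return the order total after a discount code, if the order qualifies."""
    if subtotal < minimum_spend:
        return subtotal  # code doesn't apply below the threshold
    if kind == "percentage":
        return round(subtotal * (1 - value / 100), 2)
    if kind == "fixed":
        return round(max(subtotal - value, 0), 2)
    raise ValueError(f"unknown discount kind: {kind}")

print(apply_discount(80.00, "percentage", 25, minimum_spend=50))  # 60.0
print(apply_discount(40.00, "percentage", 25, minimum_spend=50))  # 40.0 (below minimum)
```

The same shape extends naturally to the free-shipping and scheduling options the article mentions: each is just another condition checked before the price is adjusted.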
Ready, set, start your Shopify account

So now that you know the basics of the cost to sell on Shopify, here’s your chance to start posting. Start with a few items to familiarize yourself with pricing, messaging and shipping. The easier it gets, the more you can add. Before you know it, you’ll be a pro. Good luck!
https://medium.com/we-need-to-talk/should-you-use-shopify-for-your-online-marketplace-selling-88b79e48601c
['Shamontiel L. Vaughn']
2020-12-27 23:32:18.933000+00:00
['Online Shopping', 'Amazon', 'Diversity', 'Marketing', 'Multiculturalism']
An Intro to Apple’s Combine Framework
by Daniel Carmo, Agile Software Engineer

Photo by AltumCode

Combine was announced by Apple at WWDC 2019. This new framework is used to implement Reactive Programming natively in Swift. The Combine framework provides a declarative Swift API for processing values over time. These values can represent many kinds of asynchronous events. Combine uses three core concepts to create code that is easier to read and maintain:

Publishers
Subscribers
Operators

Here’s a quick breakdown of the core concepts:

Publishers

Publishers are items that describe what values and errors can be produced. Publishers are value types, defined as structs, and allow for registration by Subscribers, which receive the values they produce. The Publisher protocol is defined as follows: (WWDC 2019 — Introducing Combine) The Publisher includes an associatedtype for the Output and Failure. The Output describes the type of value that the Publisher produces. The Failure describes the type of errors that it produces. In the event that no error can be emitted, the special type Never can be used. As you can see from the protocol, the Publisher has one function, which is to have a Subscriber subscribe to its produced values. Based on the constraints of the function, the Subscriber must accept an Input of the Output type and a Failure type matching the Publisher.

Subscribers

Subscribers are items that describe the consumption of values from Publishers. Subscribers attach to publishers, consuming values until completion is reached. Subscribers are used to act on the published values, and hence are reference types, declared as classes in Swift. The Subscriber protocol is defined as follows: (WWDC 2019 — Introducing Combine) The Subscriber includes an associatedtype for Input and Failure. The Input describes the type of value that it expects to receive. The Failure describes the type of errors that it expects to receive. If no error is expected, it can use the Never type.
The Subscriber has more methods on it than the Publisher. Each method is invoked at a different time in the lifecycle of the subscriber. When a subscriber initiates the subscribe process on a Publisher, the Publisher responds by creating a Subscription and sending it via the receive(subscription: Subscription) method. From here the Subscriber demands N objects from the Publisher and begins receiving them on the receive(_ input: Input) -> Subscribers.Demand function. The Publisher will either run out of values to send or reach the demanded number of values, and then return on the receive(completion: Subscribers.Completion<Failure>) method. As demonstrated by the figure below: (WWDC 2019 — Introducing Combine)

Operators

Operators are the last piece of the Combine framework. Operators are used to transform values from a Publisher into a more suitable value to be consumed by a Subscriber. They subscribe to the upstream (Publisher) and produce new values for the downstream (Subscriber). An operator is a small, self-contained piece of functionality, and operators can be chained together to produce a more complex final result. Let’s take a look at the Map Operator, which is described as follows: (WWDC 2019 — Introducing Combine) The Map Operator has the generic type Upstream, which must conform to Publisher, and an Output type that the values are transformed into. You will also notice that the Map struct itself conforms to Publisher. The constructor of Map expects an upstream and a transformation function that takes the Upstream.Output and converts it into its expected Output type. As you can see, this lives in the Publishers namespace, so in order to use it directly we’d have to call the constructor with the Upstream and the transform. Luckily Apple provides us with a helper method defined directly in an extension on Publisher. (WWDC 2019 — Introducing Combine) This allows us to take an existing Publisher and apply this Operator directly to it.
Example 1

Let’s take a look at an example of the above in action.

["12", "15", "20", "10"]
    // Convert the array of Strings into a Publisher
    .publisher
    // Convert the array of Strings to integers
    .map { Int($0) ?? 0 }
    // Double the values
    .map { $0 * 2 }
    // Subscribe to the Publisher with a sink and print the values.
    // Up until this point, the Publisher has emitted no values and done no work.
    // Once we subscribe, it begins publishing its values.
    .sink(receiveCompletion: { _ in
        print("COMPLETE")
    }, receiveValue: {
        print("Doubled Value \($0)")
    })

In this simple example, we are taking an Array of String and converting it to a Publisher. We pass the values through the map Operator and convert them into Int values, after which we multiply the values by 2 through another map Operator. Then we use the sink Subscriber to begin receiving and printing the values. In the end we have the following printed:

Doubled Value 24
Doubled Value 30
Doubled Value 40
Doubled Value 20
COMPLETE

You can see that the Publisher has produced four values from the array to the Subscriber sink and, once exhausted, called complete on the Subscriber. Let’s take a look at converting the above into a function:

func convertStringArrayToIntArrayAndDoubleValues(_ values: [String]) -> AnyPublisher<Int, Never> {
    return values
        // Convert the array into a publisher
        .publisher
        // Convert the array of Strings to integers
        .map { Int($0) ?? 0 }
        // Double the values
        .map { $0 * 2 }
        // Erase the resulting type to AnyPublisher
        .eraseToAnyPublisher()
}

There are a few key things happening here. First, let’s take a look at the function definition. We are taking in an Array of String and returning a type of AnyPublisher<Int, Never>. AnyPublisher is a special type that helps us clean up the intermediate Publisher types that are created by chaining Operators. Notice that the AnyPublisher return uses Int as its output type.
The Publisher will emit a single integer at a time as it’s converted, instead of converting the entire array of Strings and emitting it all at once. This means the Subscriber can act on the individual Ints as they are converted. Lastly, take a look at the last line of the function. This line erases the intermediate types of the Publishers that are created when we chain the Operators together. Without eraseToAnyPublisher(), our return type would be a long chain of nested Publishers. This method performs type erasure down to the final type that is expected of the method.

Example 2

Apple has already converted some of the functionality of Foundation to use Publishers. Currently there is support for Timer, NotificationCenter, and URLSession. Let’s take a look at URLSession for a more practical example. For this example we’ll use the Cat Facts API to get some facts about animals (https://alexwohlbruck.github.io/cat-facts/docs). First we’ll create the model for parsing:

struct CatFact: Decodable {
    let _id: String
    let text: String
    let type: String
}

For this example we’ll just concern ourselves with the _id, text, and type of the fact. Next let’s create our URLRequest and send the request with URLSession.

var request = URLRequest(url: URL(string: "https://cat-fact.herokuapp.com/facts/random")!)
request.httpMethod = "GET"

let cancellable = URLSession.shared.dataTaskPublisher(for: request)
    .tryMap { element -> Data in
        guard let httpResponse = element.response as? HTTPURLResponse,
              httpResponse.statusCode == 200 else {
            throw URLError(.badServerResponse)
        }
        return element.data
    }
    .decode(type: CatFact.self, decoder: JSONDecoder())
    .sink(receiveCompletion: { completion in
        print("COMPLETED PROCESSING \(completion)")
    }, receiveValue: { catFact in
        print("We received a \(catFact.type) Fact!")
        print("\(catFact.text)")
    })

In this example we are using URLSession’s new dataTaskPublisher method. This method returns a Publisher that we can then apply Operators to and Subscribe to.
We are using tryMap here to map the response to its data, and then follow up with decode into our CatFact type. Lastly we are subscribing with sink, printing out the value of the cat fact, and then receiving a completion. You can see that there’s no longer a callback; instead there are clear, concise steps that we take in the form of Operators to get to our final result, the fun cat fact that we requested!

We received a cat Fact!
The Havana Brown breed hails from England, where it was created by crossbreeding Siamese cats with domestic black cats.
COMPLETED PROCESSING finished

Conclusion

Swift has become even more powerful with the addition of the Combine framework. It gives us native Reactive Programming functionality that allows us to follow the MVVM pattern more closely. This is a basic introduction to the layout and flow of Publishers, Operators, and Subscribers. There is a lot more to cover than the above, and I’m very excited to unlock the full potential of Combine as it becomes more widely used. Combine and SwiftUI work together seamlessly and unlock greater potential for the MVVM pattern. The one major factor holding back using this fully in production is that it requires a minimum version of iOS 13. As applications continue to increase their minimum target versions, Combine will become the driving force behind our applications!

Resources

Introduction to Combine https://developer.apple.com/videos/play/wwdc2019/722/
Combine in Practice https://developer.apple.com/videos/play/wwdc2019/721
Processing URL Session Data Task Results with Combine https://developer.apple.com/documentation/foundation/urlsession/processing_url_session_data_task_results_with_combine
Cat Facts API https://alexwohlbruck.github.io/cat-facts/docs
https://medium.com/tribalscale/an-intro-to-apples-combine-framework-693014315def
['Tribalscale Inc.']
2020-11-18 15:20:03.141000+00:00
['Swift', 'Development', 'How To', 'Apple', 'Framework']
Are Colours Easier to Read than See?
While he devoted most of his life to Christianity, John Ridley Stroop will be most remembered for his contribution to psychology. Born the second youngest of six children in 1897, he spent his early years on his family farm in Tennessee. Health concerns meant he wasn’t able to help much with farm duties, and so Stroop developed his mind. He excelled at school, graduating top of his class, and went on to study experimental psychology and education. His work in psychology continued a line of research that began when Wilhelm Wundt asked his student James Cattell to look into colour naming and colour-word reading. Stroop tested people on their ability to name the colour of words which themselves spelled different colours — for instance, when the word “red” is printed in red, and the word “green” is printed in blue, people are quicker to recognise red than blue. To try for yourself, go through this list and try to name only the colour of each word, not the words themselves: Chances are, you took a little extra time to identify the colours when the word spelled a different colour. It doesn’t need to be much, but when they don’t match, a little more effort is necessary to ensure we identify the correct perceptual characteristic. If we instead have to say the word while ignoring the colour, we don’t run into the same trouble. Our mind seems to more easily process the word than the colour of the word. Further research has found that the effect is not limited to words and colours. For instance, direction words (up, down, left, etc) appear in different locations of a screen, or travel in certain directions across the screen, and when the word and direction are congruent, people identify the direction quicker. Another study found that when people were presented with two numbers that differed both in numerical size and in physical size, they would take longer identifying the incongruent digits. 
Emotional words also seem to alter the classic Stroop effect, slowing down the naming of the colours they’re printed in. There was more to this study, as they found that the emotional words were also recalled more easily in a surprise test afterwards. Another study found a similar benefit in retrieval for incongruent stimuli. They presented faces of men and women, along with houses, paired with the words “man,” “woman,” or “house.” Again, the congruent trials were correctly identified faster than the incongruent trials, and again, people showed better memories for the incongruent than the congruent. “Transient shifts of attention in response to perceptual processing difficulty … appear to strengthen the encoding of incongruent items as they are processed,” they write. The Stroop Test not only highlights how our mind sometimes has trouble ignoring one element of perception to focus on another, but also that the difficulty in separating those elements improves our memory for them. Despite the growing influence of Stroop’s study, his finding didn’t become popular until later in his career, after he had put psychology behind him. He spent most of his career teaching at David Lipscomb College, and would preach each Sunday. By the end of his life, he had published seven biblical books, compared to four psychology papers. Yet one of those papers is in some way responsible for thousands that followed. That lasting influence, according to Colin MacLeod, is due to “its large and always statistically reliable effect and the lack of an adequate explanation for the effect.”
https://smbrinson.medium.com/are-colours-easier-to-read-than-see-c17ecef11272
['Sam Brinson']
2019-03-13 20:19:35.238000+00:00
['Memory Improvement', 'Design', 'Perception', 'Colors', 'Psychology']
How to excel at a take-home coding challenge
I have reviewed a lot of take-home coding challenges. Our challenge was designed for junior/beginning Java developers. In this article I want to compile a list of recommendations on how to ace this kind of challenge. These recommendations can be applied to developers at any level. In my experience, the person doing the review will not spend a huge amount of time reviewing your code. In most cases they will not even run it. I am not saying that you should send them a nonfunctional piece of code. I am saying that the most important thing you can do is make everything clear and obvious. Do not spend a huge amount of time on abstractions, complex algorithms or other bragging opportunities. These challenges are usually designed to be simple, and the reviewer wants to see a simple solution.

Project setup

Our assignment was always a bit vague, but one of the instructions was to use a specific library for parsing CSV files. As a reviewer, you looked at how the library was integrated into the project. In Java, you looked at whether Maven or Gradle was used. If not, was the library present as a jar file? There were cases where we received a full Eclipse workspace. In JavaScript, you would look for NPM, or was the library just linked in a script tag? This tells the reviewer how familiar you are with the tooling around the language you are interviewing for. As I was hiring mostly for junior positions, it was not necessarily bad if the person did not use Maven; that is something you can teach them. But it was always a huge plus if they did. There are a lot of project creation tools out there. For Java you can use Spring Initializr, for JavaScript Yeoman. Your code will look much more professional, you increase the chances of the reviewer actually running your code, and you will learn something you will need anyway.

Clean code

There are a lot of resources online on how to write clean code. You do not have to follow them one to one, but some general principles should be applied. The best principle you can follow is the newspaper article analogy.
Imagine your code structure as a newspaper article. The headline is your class name. Public methods are the first few paragraphs. Private methods are the rest of the article. Each class should be its own story. This will increase the readability of your code. The reviewer will have an easier time reading it, and that earns you a lot of plus points, as readable code is a must for working in a team. You should also choose some kind of formatting and stick to it. Most modern IDEs can reformat your code; please run that before submitting. It looks ugly if each method/class/line has different formatting. Do not worry about tabs vs. spaces or any similar debate; just choose what fits you. Be careful naming things: classes, variables, functions, etc. Do not use unnecessary shortcuts. Why use Stor instead of Store? Names should be descriptive, but avoid long names that feel like they came straight out of a corporate name generator.

Logging

I always looked at how logging was done. Was it present? How? Was it just System.out.print? The next level would be java.util.logging, and best would be SLF4J. Obviously some logs are better than no logs; I usually did not recommend candidates whose solutions had no logging. I would consider error handling part of this. Every time I see e.printStackTrace() in a catch block, I subtract some points. The least you can do is log the exception properly.

Writing tests

Another huge part is test writing. Tests are the best thing you can do to stand out in this challenge. I understand that testing may be hard for you; it takes some time, and you want to finish this challenge as quickly as possible. You do not need to TDD this challenge. You can just test the happy path. If you are out of time, write a few tests, and for the rest, prepare the test methods and leave comments explaining what you are about to test. Tests tell me that the quality of the code you write is important to you. They tell me that you can write testable code. And they tell me that you are willing to write tests.
Documentation

You do not really need to document everything and append 50 pages of documentation to your challenge. But a nice README.md explaining how to run your app is a must. Look at your favorite open-source project on GitHub and try to do something similar in your README.

Bonus points

A few ways to earn bonus points:

Some CI integration
The app is actually running/deployed somewhere
You provide a doc with the next steps you would take to improve the app

I can do a quick code review for you if you are not sure about your results. DM me on Twitter.
https://medium.com/dev-genius/how-to-excel-at-take-home-coding-challenge-5b25c03d6c3c
['Pavel Polívka']
2020-12-03 18:11:27.334000+00:00
['Interview', 'Development', 'Java']
5 Important Truths I Gained as a Romance Author
In my desire to learn more about writing income streams, I’ve started writing short romance books and publishing them for Amazon Kindle. I use more than one pen name, and I’ve experimented with writing different types of stories. I’ve only been doing this for a month or so, and I’ve made a little bit of money — about $86 in royalties for January. But what’s been most valuable to me is what studying romance writing can tell us about real people — especially women.

The Kindle romance market is huge

A quick Google search will tell you that romance is the largest market for Kindle books. And with the rise of Kindle Unlimited, short, rapidly released romance novellas are becoming consumables. You can read your fill for “free” once you’ve paid the monthly fee of $9.99 — and romance fans read these books faster than authors can write them. Kindle romance books are like Starbucks specialty concoctions: a little self-indulgent treat that those with a bit of disposable income can relish. In fact, in 2015, Nielsen found that almost half of romance book fans read the genre at least once a week. I’ve been teaching myself the art of writing romance, and I try to “write to market.” In my quest to figure out what sells, I’ve learned a thing or two about what people really want. Think about it — surfing the Kindle store is just like searching Google. You can search for what you want easily and privately. So, just as Google search trends can teach us what’s on society’s mind, Kindle search terms give us a rare glimpse into people’s fantasies. And the best part is that it isn’t just insight into sexual fantasies (though you definitely learn about those too). You get a look at people’s emotional fantasies: the type they would likely be embarrassed to admit to others. Here are five things I’ve learned about people from writing Kindle romance.
Above all, people want to be chosen

There’s a reason so many books and movies are about a person who’s “the chosen one.” Being uniquely desired seems to be a common longing. For example, in a straight romance story, the hero becomes enthralled with the heroine. All other women cease to exist once he finds her. That’s why being “claimed” is a popular genre term — meaning sexually claimed and claimed as a prized possession. The concept of a focused, unrelenting desire for a particular woman is the force behind the popular “shifter” romance niche. Shifters are supernatural beings who can transform from human to animal. When the male shifter, who has the instincts of a creature like a wolf or a bear, meets his one true mate, he has an unyielding urge to “claim” her. It’s biological, it’s primal, and it’s selective.

Even conservative, vanilla people want hot sex

Let’s admit one thing: most romance books are not feminist. And from interacting with Facebook fan groups, I get the idea that romance fans are often pretty conservative in lifestyle. They are not the type of people who would fall for the sexy motorcycle-riding drifter they bump into in a “meet-cute.” But what I find is that these unassuming women still love a hot, steamy sex scene. It may not be BDSM or anything wild, but readers want to be turned on. And the characters often go on to have frequent, fulfilling sex lives in their happily ever afters (which we often conveniently learn about through an epilogue). There is one caveat: clean, inspirational, or Christian romance also does very well. These are stories about chaste women who find love. They are often historical or set in a subculture like Amish country (yes, Amish romance is a thing). Any sex is left to the reader’s imagination after the characters walk into the sunset of marriage.

Nothing’s more comforting than the familiar

Comfort food is familiar food. It’s usually a family, cultural, or holiday favorite.
No one seeking comfort wants to try experimental cuisine they’re not familiar with. When it comes to romance readers, they don’t want to read your creative, experimental hogwash. It’s all about tropes, tropes, and more tropes. Kindle romance writers go as far as putting the trope right into the title. An example might be: “Home Again: a Second Chance Romance.” Or “My Night with the Billionaire: a Secret Baby Story.” I just made these up, but I wouldn’t be surprised if you could find very similar titles on Amazon. A few of the most popular romance tropes include:

Secret baby (heroine hides her pregnancy/baby from the father for reasons)
Friends-to-lovers
Enemies-to-lovers
Second chance at love
Fake relationship — Think “Holiday in Handcuffs.”

Tropes are all about meeting reader expectations. If you’re reading a romance looking for an escape, you don’t want the secret plot twist to be that the man is a fraud or a serial killer. You don’t want the couple to fall on hard times that don’t get neatly resolved before the happily ever after. In fact, a happily ever after (HEA in the biz) is required, according to the Romance Writers of America.

Readers will forgive a lot if your work makes them feel something

One of the pitfalls of rapid-release self-publishing is that there aren’t the same quality controls in place that you’d get when reading a traditionally published book. Most Kindle romance writers are either hobbyists or light side-hustlers. But even those with a big following occasionally publish a book with a typo or mistake. You find a lot of incorrect character names and forgotten end quotation marks. But if your story is good and your characters are likable, readers won’t crucify you. That said, I’m not sure readers on this platform are equally forgiving. Here, readers desire intellectual stimulation — or even a sense of intellectual superiority. But romance readers are all about the love.
They appreciate that you did the work to write a book for them to enjoy this weekend, curled up on the couch.

The Romance genre will never die

During the last American recession, when the sales of most books were declining, the romance genre still grew. Romance readers are loyal, lifelong fans. Why? Some researchers say it’s evolutionary. Women love romance novels because they want strong alpha mates for protection and reproduction. Women want men who are “wealthy, fit, fertile, and committed.” But is that the only reason? I know that some would find that reasoning highly sexist. Romance may be driven by our ancient desire to find a caveman to drag us back to his cave, but in today’s stressful world, I think romance books serve as a simple, affordable escape. Like fantasy sports and social media, romances are a way to make yourself feel happy while giving your brain a break from our sometimes scary and bleak realities. The ubiquitous Hallmark Channel romance movie has flourished since Trump’s election in 2016. “The environment is undeniably contentious. We are a place you can go and feel good,” says Bill Abbott, chief executive of Crown Media. I know this is true because I was one of those women. After mourning Trump’s surprising victory that November, I turned on cheesy, ultra-predictable Hallmark movies, where love always wins.

All you need is love

Love is the thing we desire most.

The purpose of life has something to do with the mystery of love.
Dan Pedersen

Love is so precious that we will read about big, over-the-top love affairs that most will never have. Just as our culture values wealth, as displayed by the success of reality shows about rich people, we also value love so much that we will read about it in romance novels. Even short, cheesy, self-published Kindle romance novels — typos and all. Want my list of 10 nonfiction books that will improve your life in 2020?
https://medium.com/narrative/5-important-truths-i-gained-as-a-romance-author-d7507bde9771
['Courtney Stars']
2020-01-31 22:00:44.909000+00:00
['Books', 'Love', 'Relationships', 'Writing', 'Life Lessons']
The Move to React Native
About six months after joining ClassPass, I found myself managing all of the mobile engineers in the department. This wasn’t by design; I just happened to take over all of the engineering squads that contained mobile engineers. It was an opportunity to examine and improve mobile engineering at ClassPass. One of the biggest opportunities we identified was exploring just how widely we could use React Native in our mobile development. React Native is a tool; it’s right for some jobs and not for others. And like any tool, it has real power when applied to the right job. It can speed up development in a number of ways. You can surge web engineers into mobile projects in a way you can’t when writing native code, because it’s JavaScript and based on React. Its hot reloading increases engineering velocity since you don’t have to wait for compilation to see your changes. But the biggest benefit is that you can build features once and launch them in both the iOS and Android apps. Like many startups, ClassPass had more work on our product roadmap than we would ever have time to do. It was clear that building features for our mobile apps with half the effort could be a huge win for the company. But when you already have two large mobile codebases and mobile engineers who have had little to no experience with React Native, a transition like this can be complex and challenging. Success meant identifying those complexities, figuring out how best to meet those challenges, and remaining agile enough to keep iterating based on regular feedback. Our adoption of React Native was, for all of us at ClassPass, a real lesson in the huge benefits that can accrue from both careful planning and successful adaptation.

Buy-in From Key Engineers

Changes like this are usually a hard sell to your engineering team.
Engineers who are proficient in Swift or Kotlin don’t want to learn JavaScript or the React framework, particularly when being told to do so by a manager who’s never submitted code to a mobile codebase. It was clear I’d need allies in the engineering team. Two quickly came to mind. The first was a senior iOS engineer who had established himself at ClassPass through technical skill and an eagerness to share his knowledge (he was later promoted to be the mobile lead of ClassPass). The second was a recently promoted Engineering Manager who was one of the first engineers to see the potential benefits of React Native. Both were much closer to the work, could lead by example, and were well respected by the other mobile engineers.

Find Executive Support

A large technological shift like this means you also need champions from above. Changes like this can have a huge beneficial impact on the business, but realizing them can take time. There’s the basic learning curve of the language and framework. Time spent hashing out best practices and optimizing operations is vital to the project’s success, but carries opportunity costs. Here’s where ClassPass’ culture of transparency and debate really paid off. We had to clearly articulate the long-term business impact this kind of transition would have. Writing code once and launching on both mobile platforms simultaneously would be a huge benefit, especially as we were starting to discuss a major international expansion, where many markets have a much more even distribution of iOS and Android devices than ClassPass’ domestic customer base. Once everyone understood that, the executive and technical stakeholders were aimed toward the same goal, vastly improving the project’s chance at success.

The Learning Curve

At the outset of this initiative, I thought it would help if my team could speak to some engineers who had been successful with React Native. Back in early 2017 that wasn’t easy to find.
It had only been two years since Facebook open-sourced React Native. Artsy was one of the few companies that had experience actually using React Native and had spoken openly about it through multiple blog posts. The CTO was a friend, so I reached out and set up some time for our teams to speak. Hearing firsthand about their experience gave us more confidence that this technology was viable for ClassPass. The team started by using React Native for some smaller features, but we all agreed that some formal training would be valuable. One of our engineers knew of someone who’d led React Native training sessions, and so we scheduled a two-day class. The instructor introduced JavaScript, React, and eventually React Native. Our mobile engineers were very strong, senior, and already had some exposure to React Native; within a few hours they had pushed the instructor into material from the second day. This meant the second day could be spent delving into advanced topics. It was great to see the instructor adapt so quickly, shaping a bespoke curriculum based on my team’s desire to better understand advanced concepts like testing and best practices for a “brownfield implementation” like ours.

The Mandate

The team’s success with the training and these smaller projects convinced me we were on the right track but that I needed to keep pushing to make this happen. If each engineer had discretion to use React Native or write native code, many would choose native because it’s what they already knew. If that happened, we would have never realized the full benefits of React Native. So the mandate I laid out was simple: use React Native for all new projects on mobile, and come speak to me when you think there should be an exception. The team’s reaction was mixed. Some engineers were excited about it and eager to learn something new, while others were skeptical.
But the skepticism was kept to a manageable amount due to the mobile lead and the manager being on board, having gone through the training, and having completed some smaller features in React Native. There was one clear exception to the React Native mandate from the beginning. Our Special Projects group was building a live video streaming product that used biometric feedback from body sensors via Bluetooth. The work they were doing had deep integrations into the mobile operating systems, and was therefore not a good candidate for React Native. This exception was clear and non-controversial, and I was curious when we would find another. It took a full year.

Operations

At this point we had three repositories (iOS, Android, React Native). As we launched more of our features in React Native we faced more operational difficulties. For example, React Native code was pulled into the iOS and Android repos based on a pinned commit hash. This usually resulted in a broken build when an engineer updated the commit hash on either repo. And trying to proactively run UI tests for React Native code against both apps became a manual and time-consuming process. After much discussion in the mobile guild, it was decided the time had come for the “monorepo.” This would be a large project, but a necessary one if we were to get the most out of React Native. The guild lead broke it down into some releasable milestones, and over a few months we had the monorepo up and running. In the new world of one mobile repository, changes to React Native code trigger builds for both iOS and Android. Since UI tests are run against both apps, we ensure that master is always functioning for both iOS and Android. Code reviews are more productive since you can see what’s changing across platforms in one PR instead of three. Once we made this change, builds weren’t broken or full of terrible surprises. This was even more valuable for builds tied to our app releases.
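That monorepo build rule can be sketched in a few lines of TypeScript. This is a hypothetical illustration (the paths and the function are my own, not ClassPass’ actual CI code): a change to shared React Native code fans out to both native builds, while a purely native change builds only its own platform.

```typescript
// Decide which native builds a commit should trigger, based on which
// directories of the monorepo it touched.
type Build = "ios" | "android";

function buildsFor(changedPaths: string[]): Build[] {
  const triggered = new Set<Build>();
  for (const path of changedPaths) {
    if (path.startsWith("react-native/")) {
      // Shared code: master must stay green on both platforms.
      triggered.add("ios");
      triggered.add("android");
    } else if (path.startsWith("ios/")) {
      triggered.add("ios");
    } else if (path.startsWith("android/")) {
      triggered.add("android");
    }
  }
  return [...triggered].sort();
}

console.log(buildsFor(["react-native/search/Results.tsx"])); // [ 'android', 'ios' ]
console.log(buildsFor(["ios/AppDelegate.swift"]));           // [ 'ios' ]
```

The payoff described above falls out of the first branch: any shared change runs UI tests against both apps before it can land, so a broken Android build can no longer hide behind a green iOS one.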
Reducing these issues helped our engineering velocity immensely. Engineers expect to catch their own bugs during a build. But when they find other people’s bugs, they have to reach out, and both engineers are interrupted. Productivity takes a big hit and everyone gets frustrated. The new monorepo kept our mobile engineers much more productive and happy.

The Feedback Loop

Discussing and capturing feedback was a key part of our successful transition to React Native

In addition to the large operational changes like the monorepo and the subsequent work on UI tests, many smaller decisions were made regularly. The mobile guild was the primary forum for these conversations. The mobile guild lead’s role in promoting and facilitating these discussions was critical. We also instituted longer “state of the union” meetings twice a year that were entirely focused on React Native. All of these mechanisms for capturing feedback would then feed into action items. Sometimes those actions were establishing and documenting best practices; at other times they were turned into projects on the mobile guild roadmap. It’s always important to separate naysaying from valuable critical feedback. Interestingly, throughout our transition, the amount of criticism an engineer had about React Native was inversely proportional to how often they used it. The biggest naysayers used it the least, and tended to like it more as they used it more. Distinguishing naysaying from good feedback was also a product of knowing the engineers very well. I made it a priority to have regular one-on-one meetings, observe them directly in guild meetings, and talk about them with their managers. With all of this input, it wasn’t too hard to distinguish legitimate critical feedback from simple personal preference.

Mobile Acquisition Flows and Concurrent Development

For years our app was only usable by subscribers. We’d acquire users on the web, who’d then use one of our apps to search for and reserve classes.
We had recently “opened up” the iOS app and added a number of flows where you could sign up for a trial subscription upon opening the app or at several other points in the process of searching for classes. This was a huge shift for ClassPass and our apps soon became a significant source of user acquisition. As we were about to start launching in markets with much more Android usage, it was critical that this work be ported over as soon as possible. The mandate meant that all of this code was written in React Native, but the team that built it only had iOS engineers, who were expected to launch it in the iOS app on an aggressive deadline. Because it was React Native we assumed it would be easy to port to Android. This was a pivotal moment for mobile development at ClassPass because it showed great progress but also exposed some major issues. Android engineers were able to get the new flows into the app with roughly 25% of the effort (two engineers for one month vs. four engineers for two months). This was such a clear win for the business and for the team that most of the skepticism around React Native evaporated. But it had also been a very frustrating month for those Android engineers. The frustration stemmed from our use of the “bridge” and the development of React Native code that was written only for iOS. The bridge is the mechanism for passing data between the native world and the React Native world. Since both our mobile apps started as native apps, the bridge is used on all transitions from one view to another. But using the bridge causes latency. It also means that bridge modules have to be duplicated in each native codebase. In this case, Android engineers had to go into the native iOS code, figure out what each bridge module was doing, and then attempt to replicate that in Java. One of the most important action items that emerged from this project was that we needed to adopt patterns that minimize the use of the bridge.
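One way to picture that bridge tax is with a toy model. This is my own simplified sketch, not React Native’s actual implementation (the real bridge batches and schedules calls), but the core idea holds: data crossing between the native and JavaScript worlds is serialized rather than shared by reference, so a chatty, highly interactive screen pays the cost over and over while a mostly static screen pays it once.

```typescript
// A toy bridge: every call is serialized on one side and deserialized
// on the other, and we count how often that happens.
type BridgeMessage = { module: string; method: string; args: unknown[] };

class ToyBridge {
  crossings = 0;

  call(msg: BridgeMessage): BridgeMessage {
    this.crossings += 1;
    const wire = JSON.stringify(msg); // structured data cannot cross by reference
    return JSON.parse(wire) as BridgeMessage;
  }
}

const bridge = new ToyBridge();

// A mostly static details page: one crossing to render.
bridge.call({ module: "VenueDetails", method: "render", args: [{ venueId: 42 }] });

// A highly interactive screen: one crossing per batch of results as the
// user scrolls, swipes between days, and adjusts filters.
for (let batch = 0; batch < 20; batch++) {
  bridge.call({ module: "Search", method: "appendResults", args: [batch] });
}

console.log(bridge.crossings); // 21 -- the interactive screen dominates
```

The module names and payloads are invented for illustration, but the pattern is why “minimize the bridge” became the action item: each crossing adds serialization latency, and each bridge module also has to exist twice, once per native codebase.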
It was also clear that it was much more efficient to work on iOS and Android in parallel, rather than developing for one platform and then porting that code over at a future date. Many React Native components have either iOS- or Android-specific configurations, and in this project engineers would run into components that were configured for iOS but not Android, which slowed down development and was generally a frustrating experience. Opening up our mobile apps for user acquisition was the last big project before we were ready to start developing features concurrently on both platforms. The experience and frustration our Android engineers had in porting over these user acquisition flows only reinforced that concurrent development was a move in the right direction.

The Next Exception

In early 2019 we completely rebuilt our search experience. Client-side development started in React Native as per the mandate. But early in this rewrite the mobile engineers saw latency in the search UX and started to doubt that React Native was going to work for this project. Tickets had been written and engineering work had already started, but now we were questioning what language to use. The product manager for search immediately flagged this as a risk. A meeting was scheduled to discuss how to move forward. Because this topic touched both the search project and the mobile guild, most members of both groups came. It was a large meeting and I could tell a lot of people felt tense, like they were about to witness a React Native showdown. The meeting started with an engineer on the search project describing the latency issue. Almost immediately, the Engineering Manager who had been the advocate for React Native (but was not involved in the search project) stated that we clearly shouldn’t use it for this project. There was some silence and it seemed that people were surprised. They were expecting drama but they got quick agreement.
Another senior engineer then said, “React Native is just a tradeoff between latency and development speed.” I thought that was perfectly articulated. I said as much, and that I agreed with the decision not to use React Native for our search experience. I was thrilled that we had finally found our second exception, because it drew some sensible rules for when to use React Native. It all came back to the bridge. Our search results page is very interactive. Most searches in large markets result in a seemingly infinite scroll of results. You can also swipe left and right to move to search results for different days, and there are other filter controls at the top of the page. All that interactivity requires moving data over the bridge and slows down the app. Most of the work we had done successfully in React Native involved simple flows from one screen to another, or larger, more static pages like details pages for venues or classes. I had asked the team to use React Native unless they thought they had a good reason not to. We finally found another reason: screens with a lot of interactivity should be done natively. ClassPass still defaults to React Native, but now there are more obvious guidelines for when we stick with native.

The results

Roughly 75% of all mobile development at ClassPass now happens in React Native. Our monorepo and suite of UI tests ensure we are testing React Native code against both apps for all builds. In our Growth squad, almost all mobile development is done in React Native; this has been transformative, as almost all mobile work we do improves user conversion. All of this work is done twice as fast as it would be if we still had two fully native codebases. Code can be written once and deployed on both iOS and Android at the same time, and we plow through the roadmap twice as fast as we did two years ago. I am not a React Native evangelist.
As an engineering leader you need to make technology decisions with the best interest of the business in mind. As engineers we are literally building the business and, in the case of mobile app development at ClassPass, React Native was the right tool for the job.
https://medium.com/swlh/the-move-to-react-native-969391f0af97
['Matthew Eckstein']
2020-05-03 01:02:47.513000+00:00
['Software Engineering', 'React Native', 'Engineering Mangement', 'Mobile App Development']
What the Little Brown Birds Taught Me About Marketing
This morning, the love of my life convinced me to hop on our bikes and ride four miles to the beach/park in our town. She does it regularly, but I haven’t been on my bike for a while, so I won’t lie: it was rough going. I’m in fairly good shape these days, but it’s a hilly ride, and I couldn’t quite get the hang of the gears. By the time we got there, I was out of breath, my butt hurt, and I felt more frustrated than invigorated by the experience. So I was happy to chain up our bikes and simply walk on the beach for a while. At one point, I even took off my shoes and dipped my feet in the water of the sound. Then we made our way back up to the grassy area. Part of her ritual when she does this (usually several hours earlier than we did) is to sit in silence and spend five minutes meditating. I generally stink at meditating — too many thoughts in my head — but agreed to give it a try. So we found a spot in the shade and sat on the grass facing the water, putting a tree between us and the rising sun. Then she set a timer on her phone and the meditation began. Almost from the moment we sat down, I noticed that some little brown birds, probably sparrows, started gathering near us, no more than a yard or two away. So I decided to focus on them, as a way to keep me from thinking about work or finances or my sore butt or the prospect of the mostly uphill bike ride back to the house. For the first minute or so, I simply observed. I watched the birds flit and hop in the grass in random directions, sometimes closer to us, sometimes farther. I was sure they had an agenda of some kind, but if they did, it was lost on me. Food probably, right? They were looking for bugs to eat. Or bread crumbs left behind by picnickers. During the second minute or so, I tried to put myself in their tiny shoes. See the world from their point of view. Low to the ground, the world massive all around me, most other life forms gigantic compared to me. How terrifying that must be. 
Luckily, I have eyes on the side of my head. So I can see in all directions at once and be aware both of potential food sources and potential threats. As if on cue, at around the third minute mark, a bike sped down the path not far from us, and the birds reacted instantly. Most of them not only hopped but flew several yards away, quickly recovering and returning once the disruption had passed and they realized they weren’t in any real danger. This triggered a couple of thoughts that carried me into the fourth minute. One, that the birds clearly considered the bike a threat, but not us sitting there on their (literal) turf. I guess because we weren’t moving around or making noise? Or maybe because they saw in us the opportunity, based on past experience, for food. Two, along those lines, I felt fairly confident that if I had some seeds or crumbs on me, and held them out in my hand, at least one or two of the birds would have ventured close enough to eat them. Because even in those few minutes, they had learned to trust us. Otherwise, why return so quickly after the bike passed by? The final minute of our “meditation” is when I experienced the revelation … … which was that as a marketer, my ideal potential clients — mostly small businesses, startups, and entrepreneurs — were like these birds. Each one has an agenda of their own that I can’t always understand, yet which I know is there. It’s often tied into not simply profits but also (often more so) their values and their mission and the way in which what they do defines who they are and how they feel about themselves and whether they’ll be able to take care of their family. Many feel overwhelmed by a world in which everything seems bigger than them. Like trying to compete against entrenched companies with deeper pockets doing the same thing they do. Or so many options for software and contractors and coaches and resources that it can be hard to decide how to allocate funds. 
Or trying to be seen and heard as a thought leader in a marketplace filled with ever more “influencers” with millions of followers. And of course, like those birds, they are cautious and alert. They have to be. Particularly for smaller businesses, the owners and primary stakeholders have to wear a lot of hats. They need eyes on the sides of their heads to see everything coming at them, both opportunities and threats. They need to be able to react and respond quickly. So if I want to be part of a potential client’s world, then I need to follow all of the same steps that I followed this morning, while sitting on that grass:

1) Take the Time to Observe

Truly effective marketing is about building genuine connections. And you can’t possibly connect with somebody in a genuine way unless you pay attention to and watch and listen to and understand them. People are not (to me) demographics or segments to be sold to. Those things are helpful tools, yes, certainly. But in the end, people are less numbers than they are unique and compelling stories to be understood. So be sure to stop thinking about your product or service or brand long enough to hear those stories.

2) Put Yourself in Their Shoes

Each individual or business has what Seth Godin calls a “worldview.” In fact, we all have multiple worldviews, some of which contradict each other. (We can worship a band, for instance, but hate one of their songs because it conflicts with our values.) Taking the time to look at the world from the perspective of a potential client is critical. What is their business landscape? What are their challenges? What are their values? Is their worldview aligned closely enough to what you offer that it even makes sense to market to them? If not, then don’t. And if yes, then what is the story you need to tell in order to help them see it?

3) Are You an Opportunity? Or a Threat?
Those who need to market themselves or pay for marketing, which is anybody with a business these days, have a decision to make. Are you going to be the person sitting quietly on the grass or the noisy bike speeding by on the path? If I accept your LinkedIn connection request right now, are you going to take the time to get to know me, to read my articles and comment on my posts? Or are you going to hit me with a cut-and-paste direct-message sales pitch in five minutes? The first makes you an opportunity for me. The second makes you a threat. And if I see you as a threat (if only to my bottom line), then why shouldn’t I simply fly away from you?

4) Offer Something of Value

While I didn’t have any food to give those birds this morning, I would have if I did. The point being, my focus would have been on the giving, not the taking. Yes, as a business, I need to make money in order to survive. Which usually means charging for my services. But if money were all I focused on, then I would go back to a corporate job. In order for a potential client to want to do business with me, both they and I need to believe that I can offer them something of value that will support their mission. They need to know that by connecting with me, they will gain more than they lose. In this way, the cost of my services becomes not an expense but a transactional byproduct. How I convey this sense of value is by first getting to know the people I’d like to do business with, hearing their stories, not making any sudden or threatening moves, and telling my own story in as many ways as I can. If somebody likes it, and if our worldviews align, then that’s amazing. Let’s connect and do business. If not, or if I don’t feel as if I have anything of value to offer them, then we at least got to know each other. Something huge that I have come to recognize over the past year is that I am not the right solution for every business. And that’s okay, because I don’t need to be.
Hell, I don’t even want to be. Imagine that kind of pressure! Some people want to work with a big firm with a receptionist and a foosball table in the lunch room. (Or with somebody who creates the illusion that they are one.) That’s not me. Some businesses need a marketing scientist to closely manage their “funnel” and turn ten-thousand leads into five customers. That’s also not me. More power to the businesses out there who are these things … but they’re not me. For the right business, though, I am the perfect solution. I sit quietly on my piece of grass at the beach and watch the birds flit and hop about all around me, and I spin stories in my head. Stories I’ve heard and stories I’d like to hear and stories I can’t wait to tell. Stories about the wonderful things my clients are doing to make their corner of the world a better place. Stories that I am privileged to tell. So if your business has a story to tell, and if you’re okay working with somebody who answers every single one of his own emails and still believes marketing is an art and not a science, then maybe I’m the perfect solution for you. If so, then pull up a spot on the grass next to me and let’s talk. I can be reached on LinkedIn or at RandyHeller.com.
https://medium.com/swlh/what-the-little-brown-birds-taught-me-about-marketing-140c1c647724
['Randy Heller']
2019-08-28 14:05:45.312000+00:00
['Worldview', 'Marketing', 'Storytelling', 'Clients', 'Seth Godin']
Whiteboard View v.0.0.1
Note that you can filter entities in the Tree Component to see only relevant entities. You can also double-click on an entity to see its details. And that is it so far :) How is this useful right now? Well, so far the only case is hierarchy visualization. For example, you may want to see Goals, Initiatives and related Work in a hierarchy. With filters you may see only active goals and active initiatives, thus narrowing down the view.
https://medium.com/fibery/whiteboard-view-v-0-0-1-601b1665b5f
['Michael Dubakov']
2019-11-25 18:12:34.604000+00:00
['Canvas', 'Productivity', 'Startup', 'Whiteboard', 'Fibery']
How to Be Great — Learn to Say No
Today’s world is tumultuous, complex, and distracting. Each day from the moment we wake up there are often hundreds of things that demand our attention, like reading the most recent breaking news report, watching that TV show everyone’s raving about, or talking to friends on social media.

Photo by Colton Duke on Unsplash

Every single one of these distractions is tempting; who doesn’t want to be on top of the latest tech news? But they’re still distractions. It’s almost universally recognized that most people have problems with saying no (check out this Psychology Today article from 2014), and it’s all rooted in our desire to avoid conflict. Although saying “no” won’t cause a major war, it still creates some disharmony that our brains interpret as conflict. Another common reason for not saying no is that people don’t want to seem rude or waste a possible opportunity. These are all understandable reasons for wanting to say “yes”. In fact, saying “yes” a lot may even make you more popular. But it does hurt your productivity, a lot. If you’re always helping others, you’ll never be able to spend a lot of focused time on what matters to you, and as selfish as that seems, you have to take care of your stuff too. Instead of letting these tempting offers distract you, you have to learn to say no to them so you can spend time on your own work. All of us have big aspirations or dreams in our lives, dreams that can’t be achieved without laser focus. Saying no to distractions is ultimately saying yes to greatness; it’s saying yes to putting in the extra mile and sacrificing all the other “yes” opportunities that most people won’t. Be very clear and consistent about your boundaries, and make sure others are clear about your free time. You should be fighting for your dreams as if it were a matter of life or death, because it is a matter of your life. This isn’t easy to do.
As a student and someone who has lofty career aspirations, I have a lot of opportunities every day, very tempting opportunities both professionally and personally. Unfortunately, I’m only one person. It sucks, but there are a lot of really great projects, relationships, and people that I’ve had to divest my time from to maintain laser focus on what matters most to me. And of course I still make mistakes (I’m overcommitted to things at the time of writing), but the discipline I’ve been building around when to say “yes” has grown quite a bit. Your dreams deserve your 100%, and you can’t achieve that unless you can say no to the distractions that take away from that 100%. Here’s to a brighter and more focused you.

Keep in Touch

There’s a lot of content out there and I appreciate you reading mine. I’m an undergraduate student at UC Berkeley in the MET program and a young entrepreneur. I write about software development, startups, and failure (something I’m quite adept at). You can sign up for my newsletter here or check out what I’m working on at my website. Feel free to reach out and connect with me on LinkedIn or Twitter; I love hearing from people who read my articles :)
https://caelinsutch.medium.com/how-to-be-great-learn-to-say-no-462f374c3d84
['Caelin Sutch']
2020-12-26 05:19:08.088000+00:00
['Productivity', 'Focus', 'Self Improvement', 'Greatness', 'Entrepreneurship']