Christianity’s Polygamy Problem

Christians do sometimes disagree.
“Well it’s right there in the Bible, so it must not be a sin,” sings Rich Mullins in 1991 in “Jacob and 2 Women.”
Or Martin Luther writes in 1524, in a private letter: “I confess that I cannot forbid a person to marry several wives, for it does not contradict the Scripture.”
But publicly, the clerics kept a unified front — for monogamy.
That didn’t mean they liked it?
A history might be written of the Christians who might’ve been open to a polygamous life. William Blake, the poet, was said to have “wished to add a concubine to his establishment in the Old Testament manner, but gave up the project because it made Mrs. Blake cry.”
But mostly they’d be restrained by convention — and punished if they’re not.
So are the heroes of the Bible un-Christian?
The great Abraham has a wife, Sarah, and two slave-wives, Hagar and Keturah. The hero Jacob, in the famous story, wants to marry Rachel, but her father tricks him and he finds himself married to her sister Leah.
Had this been a Christian story, Jacob might have to make do with Leah. But it’s the Bible, so he marries them both. Plus Jacob has two slave-wives, Bilhah and Zilpah. A busy guy?
In Judges 8:30, the hero Gideon has “many wives.” Saul has a concubine (2 Sam 21:8). The great David has “wives and concubines” in 2 Samuel 19:5, and his son Solomon is a legend of polygamy, with 700 wives and 300 concubines (cf. 1 Ki 11:3).
Standing back in sheer amazement, you might even wonder, when reading the Bible, if the heroes are guys with lots of sexual energy.
Does God like very sexual people? Because heroines are the same way.
Like Tamar of Genesis 38, who has sex with three guys in a row. What a role model!
Though personally I’m a huge fan of Rahab the Harlot, the madam of Joshua 6 who has an unusual ability to know things. Perhaps her sexual experience has endowed her with this grace?
I think of Delilah, Jael, Bathsheba, the Queen of Sheba, Esther, and the girl in the Song of Songs, as I see God loving a lady who’s ready for action. And it’s rather hard to make an argument for a monogamous mindset.
You’d have Adam and Eve to work with, I suppose. When there’s one man and woman—they seem to be monogamous.
God is also polygamous, of course.
In biblical spirituality, deities ‘marry’ nations. We see this in the regular word for a deity, ‘Baal’—it’s just the word for ‘husband’. YHWH gets married to Israel and then to her ‘sister’ Judah.
It’s a plotline developed throughout Ezekiel 23, as the wife Israel, committing ‘adultery’ (with other deities), wears on God. In Jeremiah 3, He agonizes about it, and finally divorces her.
Then Jesus, as a ‘bridegroom’, comes along in the New Testament, and seems to want to marry—everyone?
And the world of the Bible is polygamous. There was no rabbinic ban, as Adiel Schremer details in his 2001 paper, “How Much Jewish Polygyny in Roman Palestine?”, concluding that the Palestine of the New Testament period “may be designated as a polygynous society.”
So where did the anti-polygamy talk come from?
It’s a cultural ideology whose origins might make one queasy. I sit reading about the long effort to use the Bible to advance monogamy, as it keeps lurching into the ugly. “The higher primates are in many, if not all cases, monogamous,” goes an effort in 1900.
Let me translate? They know that polygamy is practiced in regions of the world where there aren’t many white people, like Africa and the Middle East. In saying that monogamy marks the ‘higher primate’, they’re offering white Christianity as the religion of evolved or more developed humans.
How do Bible readers deal with the polygamous evidence?
As an Evangelical the line I’d sometimes hear is: God allows it, but doesn’t approve of it.
I’m not sure how that argument works. The heroes care about their wives very much. There might be a primary or favored wife, but that is just part of polygamy, probably. The ‘law’ is working to manage situations involving extended families (cf. Gen 30:26; Exo 21:10–11; Deut 17:17, 21:11–17, 24:5; Lev 18:18; 2 Sam 5:13; 1 Ki 11:3).
And there are more multiple wives than the tradition wants to admit. Scholars have had to work on the fact of Moses having two wives. The traditions only like to remember Zipporah the Midianite, but there was an unnamed second wife, noted in Numbers 12:1 to be Cushite.
For all the effort by tradition to combine them and achieve a monogamous hero, a Midianite wife is not the same as a Cushite wife.
It all begins to look like the faithful in a mass hallucination?
Rich Mullins would say: “So many people are distressed by ‘Jacob and 2 Women,’ and I respond, ‘then the Bible must be very distressing to you.’”
Helen R. Jacobus, a scholar studying the Bible’s narratives of sex with slaves, writes of attending a lecture in which a cleric says: “Abraham’s marriage to Sarah was monogamous, and then, as an aside, he added, jokingly, ‘apart from the handmaiden.’”
Perhaps, in the minds of such speakers, the ‘handmaiden’ is easy to dismiss—for being a female without much social status, or perhaps because she is a darker-skinned Egyptian.
But God always cares for Hagar and Keturah, and takes care of them even when Abraham does not.
The Christian tradition reaches for any reference to support its monogamous agenda.
In 1 Timothy 3:2, the apostle Paul is talking about the ideal leader of the church: “Now the overseer is to be above reproach, faithful to his wife . . .”
Aha, the traditional Christian reader says. ‘Wife’ is singular, not plural!
But this is a system which sees the Christian people as the ‘bride of Christ’, a figure of a woman. To be faithful to the ‘wife’ — is to be faithful to her.
“Anyone who seeks a firm rejection of polygamy in the Bible is probably doomed to frustration,” writes William Tucker.
In his 2014 book, Marriage and Civilization, he makes a case for monogamy having been a beneficial idea, indeed, in his view, making us ‘human’.
Perhaps there’s a good case for that, and to be monogamous is a good idea.
But to be a monogamous liar doesn’t sound good. And the Christian idea that God hates multiple spouses was simply that: an idea. There were also palpable racist origins, as anti-polygamy theology kept many out of the faith.
Christian missionary efforts, over time, have often been focused on stamping out polygamy—instead of, say, that ‘gospel’ thing?
So let’s be clear? The Bible assumes polygamy.
Although, that insight might lead to a chain of others, sadly. Why did a religion set itself against its own text?
And why does the system the clerics seem to favor look less like the Bible and more like the prevailing policies of ancient Rome? In the process of writing his History of Sexuality, Michel Foucault realized the sexual codes of ancient Rome and Christian tradition were about the same.
As he concludes in 1980, “the so-called Christian morality is nothing more than a piece of pagan ethics inserted into Christianity.” (The case is elaborated in David Wheeler-Reed’s 2019 book Regulating Sex in the Roman Empire.)
And really, when we step back and look at ‘traditional morality’, we might see a moralizing agenda being brought to the Bible.
Try reading the Bible with a polygamous mentality, and it changes rather startlingly. Like Jesus’ speech in Mark 10:10–12.
Anyone who divorces his wife and marries another woman commits adultery against her.
The typical Christian reader hears the deity saying to human males: If you divorce one woman to marry another, you’re bad.
But if rules about marriage are the subject, and divorcing one woman to marry another the problem being discussed, then Jesus would be prompting the man to marry both women.
For Jesus, the more love, the better?
Maybe God likes people to like each other, and take care of each other.
That seems like a possible meaning of Jesus saying “love one another” in John 13:34. He doesn’t seem to narrow it to numbers of spouses.
But, between you and me, Jesus isn’t as ‘Christian’ as you’d think. 🔶

Source: Jonathan Poletti, “Christianity’s Polygamy Problem,” 2020-11-29, https://medium.com/belover/christianitys-polygamy-problem-9bca6b4ada8e
Django vs Ruby on Rails Comparison. Which Framework is Better?

There are more than 90 web development frameworks out there. No wonder it’s hard to choose the one that’ll suit your project best. Still, there are at least two major frameworks that are widely used by the tech giants of nowadays, and for good reason. Ever heard of Django or Ruby on Rails? If both web frameworks are quite good, how do you compare Django and Ruby on Rails to choose which one to use for web development?
Instagram, YouTube, Spotify, Dropbox and other online and app-based services that we use daily are powered by Django, a Python programming language framework. On the other hand, Airbnb, Bloomberg, Shopify, and other leading companies use Ruby on Rails, a Ruby programming language framework. Both languages were created to serve the web and make web applications (including mobile web apps) possible.
In this article, we’ll compare these two popular frameworks. While both are fast and easy to use, Django and Ruby on Rails each have reasons for and against them as the development framework for your future project. As software development professionals, we’ve found materials comparing Django vs Ruby on Rails performance and speed too oversimplified, since speed and performance often depend on the complexity of each individual project as well as the proficiency of the development team with the respective technology. Even though Python, Ruby, and similar interpreted languages are slower in certain workloads, for the tasks relevant to a web framework this may not matter. So we decided to take a closer look at their pros and cons, as well as use cases, to help you decide which one is the best fit for your needs.
Django Pros, Cons and Use Cases
Django is a widely used Python web development framework. It was developed in the fall of 2003 by Python developers Adrian Holovaty and Simon Willison as they started to use Python to build applications. It gained its speed, security, and scalability from Python. Pinterest Engineering, Mozilla, Udemy, NASA, Washington Post and other powerful websites all rely on Django. It comes with the most tools and libraries for common use cases – for instance, its authentication, URL routing, template engine, object-relational mapper (ORM), and database schema migrations (Django v.1.7+). Here are some reasons for and against Django.
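To give a flavor of what “URL routing” means in practice, here is a dependency-free Python sketch that mimics the spirit of Django’s path()-style routing. The route and resolve helpers below are illustrative only, not Django’s actual API:

```python
import re

# Each route maps a URL pattern with <name> placeholders to a view function,
# loosely mimicking Django's path("articles/<int:year>/", view) style.
routes = []

def route(pattern, view):
    # Turn "articles/<year>/" into a regex with a named capture group.
    regex = re.sub(r"<(\w+)>", r"(?P<\1>[^/]+)", pattern)
    routes.append((re.compile(f"^{regex}$"), view))

def resolve(url):
    # Find the first matching route and call its view with the captured kwargs.
    for regex, view in routes:
        match = regex.match(url)
        if match:
            return view(**match.groupdict())
    return "404 Not Found"

def article_view(year):
    return f"Articles from {year}"

route("articles/<year>/", article_view)

print(resolve("articles/2016/"))  # Articles from 2016
print(resolve("nowhere/"))        # 404 Not Found
```

In real Django, this dispatch table lives in urls.py, and the framework also handles converters, namespacing, and reverse URL lookup for you.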
Read more: 10 Popular Websites Built With Django
Django Pros
Some say that Django’s pros outweigh its cons. Let’s take a look at its main advantages:
Rapid and low-cost prototype/MVP development
Django has a flexible, well-structured admin panel that lets you reuse code from one project to another. Besides, there are plenty of out-of-the-box libraries that can help you build a product prototype or an MVP fast.
Explicit, logical syntax
The Django framework is easy to build on, easy to work with, and easy to support. It’s based on Python, which is considered one of the easiest development languages to learn. Besides, it is easy to debug and read, which means it’s not going to be a problem for new team members to catch up with a project in mid-stream.
An extensive open-source ecosystem
Being an open-source ecosystem means that numerous tools and libraries, both free and paid, are available for everyone at any time. Django’s official documentation is more than enough to find solutions if you’re stuck. Besides, there are plenty of helpful forums such as Stack Overflow or the Django community of Reddit, where developers find answers to their Django-related questions.
Django Admin portal
This built-in admin panel is a great tool for easier management of the backend user interface. It is well structured, and has permissions and authentication modules out of the box. Besides, it’s easy to customize by adding custom CSS or replacing the default templates.
Django’s REST framework
REST stands for Representational State Transfer, and it lets you easily build APIs. It’s powerful enough to build an API in just 3 lines of code, and flexible enough to return multiple data formats and handle different types of calls. Basically, the Django REST Framework gives you a lot of convenience, such as authentication modules, JSON serializers/deserializers, API routing, documentation, etc. You could argue that when comparing Django vs Rails performance in an API-heavy project, the REST-based architecture is among the most evident Django/Python pros.
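To illustrate the declarative serializer idea the REST framework is built around, here is a dependency-free sketch in plain Python. It mimics the pattern only; the class names and fields attribute here are ours, not rest_framework’s real API:

```python
class Serializer:
    # Subclasses declare which fields to expose, mimicking DRF's
    # declarative style of defining serializers.
    fields = ()

    def __init__(self, obj):
        self.obj = obj

    def data(self):
        # Build a JSON-ready dict from the declared fields only.
        return {name: getattr(self.obj, name) for name in self.fields}

class User:
    def __init__(self, username, email, password):
        self.username = username
        self.email = email
        self.password = password  # never exposed by the serializer below

class UserSerializer(Serializer):
    fields = ("username", "email")

user = User("ada", "ada@example.com", "secret")
print(UserSerializer(user).data())
# {'username': 'ada', 'email': 'ada@example.com'}
```

The point of the pattern is that sensitive or internal attributes (like password above) stay out of API responses unless explicitly declared.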
Django Cons
Although Django has many advantages, there are also a couple of downsides:
Requires more code upfront
Django developers have to write more of the code themselves. As a result, they are more conscious, purposeful, and demanding of the business goal. This freedom from hard presets can, on one hand, be considered one of the main Django/Python cons.
Django is monolithic
Django is a full-stack framework with a monolithic approach. Basically, it is the other side of a ready-to-use, out-of-the-box solution. Django pushes developers into certain patterns within a framework. This is also the reason why Django is the choice for large, tightly-coupled products. The Django framework is a single package where all components are deployed together, so you can’t pick and choose bits and pieces.
Django Use Cases
Django is widely used for e-commerce websites, healthcare, financial applications, social media sites, transport & logistics and more. Here are the main areas where Django is applied:
Travel and booking systems with complex, fine-tuned customizations;
Platforms with complex API architecture and that rely on data from third parties;
Content, data, and e-commerce management engines with custom rules;
Other web apps with dynamically changing complex algorithms.
Django Summary Table
Summary: Django is a clean and easy-to-use Python-based development framework that shortens the time to get a project to market.

Pros: Fast and low-cost development; Open-source system; Simple syntax; Lots of libraries; Many features included.

Cons: Requires more code up front; Is a monolithic framework.

Use Cases: Travel and booking; Healthcare; E-commerce; Platforms with complex APIs; Content, data, and e-commerce management engines; Other web apps with dynamically changing complex algorithms.

Brands: NASA; The Washington Post; Reddit.

Apps: Instagram; Dropbox; Spotify; Mozilla; Pinterest; YouTube.
Ruby on Rails Pros, Cons and Use Cases
Read more: Why We Use Django Framework & What Is Django Used For
Similarly to Django, Ruby on Rails (RoR) is also an open-source framework. It lets developers use ready-made solutions and therefore, helps them save time on programming processes. David Heinemeier Hansson, the founder of Ruby on Rails, powered his own web application Basecamp with the Ruby framework. Other famous websites built with Ruby on Rails include Twitter, GitHub, Yellow Pages, and more. So, how does one compare Ruby on Rails vs Django?
Based on the Ruby language, Ruby on Rails inherited its parent’s logic and simplicity. Basically, Rails is a layer on top of Ruby that helps developers build web applications. It’s a very popular choice for backend solutions, and there’s a comprehensive guide – “The Rails Way” – to building production-quality software with Rails.
As a fully fledged framework, it offers an ORM (Object Relational Mapping) system for business data and logic, application management, and routing out of the box. It is a popular choice within Silicon Valley (big Valley startups based on RoR are Airbnb, Etsy, Shopify, etc.) – and now we’re going to take a closer look at it to see why.
Ruby on Rails Pros
Ruby on Rails is indeed one of the most popular web development frameworks. Here are the main benefits of Ruby on Rails:
Component Structure Based on Plugins and Gems
Ruby on Rails’ component structure, based on plugins (application level) and gems (system level), lets experienced RoR developers quickly put together efficient applications with less coding. The plugins are well documented and easy to use. New gems are constantly added to public repositories, such as the popular RubyGems resource, which currently contains more than 150,000 total gems for download.
Easy to Migrate and Modify
In fact, any fundamental changes to the codebase don’t require many changes in the application code. RoR code is of high quality and can be easily read. Ruby on Rails developers don’t have to micromanage it.
Diversity of Presets and Tools
There are lots of must-have features that are already preconfigured. Ruby on Rails provides developers with multiple standard web features and patterns, which significantly speeds up the development process.
Testing Environment
When complex testing logic is the core of the product, RoR’s superior testing environment is a great help. Developers can make sure their apps work as desired using testing and debugging tools, as RoR makes it easy to build automated tests and get all aspects of the product checked.
Ruby on Rails Cons
Along with advantages, come the downsides. Here are some of them:
Faster Complexity and Tech Debt Buildup
Ruby on Rails’ flexibility has a downside. Basically, with so many ways to code the same outcome, code can get difficult to read and may require a steeper learning curve as well as more rework later on.
More Difficult-to-Create API
Building an API with Ruby on Rails can be incredibly complex, as RoR has no equivalent to Django’s REST framework.
Documentation Quality and Standards Vary
With Ruby on Rails, it may be hard to find good documentation, especially for “less popular” gems. Most of the time, there are “test suites” that serve as the main source of information for developers. They have to study the code instead of simply reading the official documentation (which is not there).
Ruby on Rails Use Cases
RoR is widely used for creating prototypes and MVPs. On top of that, it’s very popular in the startup community. According to SimilarTech, more than 402,000 websites are powered by RoR. Here’s an overview of the main areas where Ruby on Rails is used:
Relatively self-sufficient systems without much third-party data exchange;
Platforms with dynamically changing rules that need to be re-tested frequently;
Content, data, and e-commerce management engines with relatively standard feature-set requirements;
Other CPU-intensive web apps.
Ruby on Rails Summary Table
Summary: Ruby on Rails is a web application framework with a wide range of presets and tools that enable faster development. At the same time, its documentation is not always clear and standardized.

Pros: The structure is based on plugins; Easy to migrate and modify; A diversity of ready-made tools; Testing environment.

Cons: Difficult to create API; Hard to find quality documentation on gems; Faster complexity and tech debt buildup.

Use Cases: Self-sufficient systems; Platforms that require frequent testing; Content, data, and e-commerce engines; Other CPU-intensive web apps.

Brands: Airbnb; Basecamp; GitHub; YellowPages; Kickstarter.

Apps: Twitter; ASKfm; Goodreads; Fiverr.
Django Framework vs Ruby on Rails Framework Comparison
Both Django and Ruby on Rails are great web development frameworks. They can deliver modularized, clean code and significantly reduce the time spent on common web development activities. Both of them follow the MVC principle, which means modeling of the domain, presenting the application data, and user interaction, all separately from each other. The question then is: how do you pick which framework to use? The decision may come down to which language you prefer or which software development principle you want to follow: to rely on sensible defaults, such as Ruby’s convention-over-configuration principle, or to follow Python’s “explicit is better than implicit” principle. The answer is simple — you can’t really go wrong by picking either one.
Here’s a comparison table of the two frameworks as a reminder of the main attributes of each:
Principle. Ruby on Rails: Convention over configuration. Django: Explicit is better than implicit.

Architecture. Ruby on Rails: Model-view-controller pattern. Django: Model-view-template pattern.

Pros. Ruby on Rails: The structure is based on plugins; Easy to migrate and modify; Diversity of ready-made tools; Testing environment. Django: Fast and low-cost development; Open-source system; Simple syntax; Lots of libraries; Many features included.

Cons. Ruby on Rails: Difficult to create API; Hard to find quality documentation on gems; Faster complexity and tech debt buildup. Django: Requires more coding up front; Is a monolithic framework.

Areas of application. Ruby on Rails: Standard features, test-heavy. Django: Customizations, API.

Popularity and availability. Ruby on Rails: 57% of web developers enjoy RoR; 4% who are not developing with RoR expressed interest in it. Django: 62% of web developers enjoy Django; 6% who are not developing with Django expressed interest in it.

Community and ecosystem. Ruby on Rails: more than 5,000 people have already contributed their code to Rails, and there are hundreds of gems with reusable code. Django: the community consists of more than 11,000 people, with around 4,000 ready-made packages to use.
When thinking about the long-term prospects of a technology stack, it’s common to evaluate the strengths of each tech community and compare Django vs Ruby on Rails popularity among software developers. In fact, there is no perfect web framework out there. Your choice should always rely on your business goals and objectives. Having said that, Django, for instance, was included in the list of the most preferred frameworks among developers in a StackOverflow survey in 2018. It was and remains popular as well, according to the Python Developers Survey of 2018.
Based on Python, one of the top programming languages currently in demand, Django is used by thousands of programmers every year to build various web applications. It’s compatible with major operating systems, scalable, and easy to understand. It has a lot of features to simplify development and a large, helpful community.
The framework of your choice should serve your business needs and fit into the industry ecosystem — and not because of hype, your friend’s advice, or ease of hiring, but because it suits your project’s objectives. Study the purpose of a framework beforehand, as some frameworks are a good fit for gaming applications; others, for example, lean towards e-commerce websites.
You should approach this fundamental decision carefully and strategically. It’s hard to simply switch programming platforms in the middle of development, as it takes time, is risky, and also quite expensive. So, choose the main platform, but consider whether its core library is flexible and adaptable, in case you need to make any modifications in the future.
On top of that, deciding what framework to use is a strategic investment for the company. It should not be solely a CTO’s or an IT department’s decision, but rather one that considers the elements of the desired framework at all levels.
Our team has worked with Django almost since its inception. We have been actively contributing to the community over the years and have broad experience with it – starting from basic features and navigating around weaknesses, to creating MVPs that help startups get venture-capital funding and scaling web services that have become #1 market leaders in their region.
So, if you have a product idea in mind, let’s discuss how to make it into a web product with Django! | https://djangostars.com/blog/django-or-rails | [] | 2016-12-20 12:23:20+02:00 | ['Ruby on Rails', 'Python', 'Django'] |
RightMesh Bi-Weekly Update: January 13, 2018 | TL:DR
Second Global Ambassador, Hiten Shah, announced;
RightMesh welcomes our two newest software engineers;
RightMesh Co-op, Keefer Rourke, awarded Ian Pavlinic Memorial Award for Innovation.
Press
RightMesh Welcomes Second Ambassador, Hiten Shah (India)
We are very excited to officially welcome Hiten Shah as our second RightMesh Ambassador!
Hiten first discovered the RightMesh project in early 2018, when he came across our home page and the line, ‘Connecting the next billion without infrastructure’, caught his attention.
Although he wasn’t familiar with mobile mesh networking at the time, Hiten started reading everything he could and asking a ton of questions to the RightMesh team in our online community.
His thought provoking questions and passion for the project was infectious, so a few months later, our team invited Hiten to be our Online Community Manager. After nearly 11 months of working with our team, Hiten is ready to take on a larger role within the company by representing RightMesh in his local community.
Hiten’s understanding of RightMesh, having come from within our community, and his desire to bring connectivity to the 65% of India’s population without internet, make him an excellent addition to our growing global team.
2018 Co-op of the Year Awarded to Keefer Rourke
We’re excited to announce that Keefer Rourke, RightMesh Software Developer and student in the School of Computer Science, has been named the University of Guelph’s Co-op Student of the Year 2018 — Ian Pavlinic Memorial Award for Innovation.
The Ian Pavlinic Memorial award recognizes the outstanding contributions of a student in the areas of academic achievement, workplace performance, innovation, contributions, involvement within the community, and impact during the work term. Keefer went above and beyond in all categories.
Developer Team Growth
The RightMesh team is excited to welcome our two newest software engineers, Dean Neumann as our VP of Engineering and Ming Hu as our Senior Software Engineer, to our growing developer team.
Collectively, they bring over 43 years experience in software development and a plethora of knowledge to share throughout the organization.
RightMesh Has Moved to Rocket.Chat
Don’t forget that we have transitioned our main channel of communications from Telegram to Rocket.Chat.
With Rocket.Chat, community members can participate in conversations that specifically interest you and avoid the ones that don’t. We can take deeper dives into specific subject matters and spark more productive conversations by segmenting topics into clearly defined channels including:
#tech
#bizdev
#rmesh
#use-cases
#announcements
#careers
#press
#chinese-channel
#russian-channel
#support
#rocketchat-feedback
Stay in touch:
How to Start a Company on an H1B Visa: Top Tips for Immigrant Startup Founders

Unshackled Ventures, Dec 29
A Q&A with Unshackled’s Immigration Lawyer, Michael Serotte
Written and curated by Unshackled Fellow, Linda Ye
Last month, we invited our very own Michael Serotte, Founding Partner at Serotte Law Firm and immigration lawyer of 25+ years, to chat with our Roundtable cohort of future immigrant founders and help demystify the process for starting a company on a visa.
The biggest takeaway was that your immigration status should not preclude you from starting your company. Generalized advice is often misleading, so we encourage you to work with partners (attorneys, investors) who have expertise in immigration for founders and follow the path that works best for your specific situation.
Below, we’ve put together a summary of Michael’s session for any immigrant entrepreneur on a U.S. temporary worker visa (with a focus on H1B visa holders) looking to learn more about the process.
Disclaimer: Unshackled Ventures is not a law firm and anything written here should not be viewed as legal advice or applicable to any individual’s specific case. We recommend working with an immigration attorney prior to taking any legal action.
What can and can’t you do on an H1B visa when thinking about starting a company?
On an H1B visa, you can only work for your sponsor company — any steps you take to develop your startup idea will need to comply with those terms. This means you cannot act as the CEO of a new company or perform day-to-day activities as a company employee, including in any managerial or executive position. Any role where you can hire, fire, and manage people is considered working without authorization, and is thus a no-go.
You can, however, do the following:
Create and register a business
Become a principal shareholder with voting rights
Establish a board of directors
Be on the Board of Directors, including Board Chair, and attend meetings
Perform market research or other research-related activities (e.g. interact with lawyers and accountants, prototype your product/service)
Perform investment-related activities (e.g. pitch or negotiate with potential VCs, angel investors)
To avoid falling into the realm of “working without authorization”, you must create a level of separation between the day-to-day company activities and shareholder decision-making. You, as an H1B holder, cannot run the day-to-day activities of a non-sponsor company — but you can start your business, for instance, with a cofounder who is a U.S. citizen or permanent resident to help execute the day-to-day work, while you oversee higher level decision-making in a shareholder or Board member capacity.
Being on an H1B does not mean you have to give up your startup dream — it just means you’ll need to find smarter ways to grow and nurture your company idea while taking care to satisfy the law, until you are able to be hired by the company. Working with the right partners and people who have gone through the journey will speed up your journey.
Can I self-sponsor my H1B as an entrepreneur?
Yes, but only if you meet very specific criteria. You must demonstrate the existence of a true “employee-employer relationship”, in which a) someone has the ability to hire or fire you from your company and b) the company should continue should you be fired (i.e. not go bankrupt, can have other people meet your role).
For example, if you have 75% majority ownership in the company, you may need not just one but two board members to meet the hire-fire requirement. You may need to have a corporate lawyer review your company bylaws, as some may give a majority shareholder the right to fire other directors. You also must be able to demonstrate that the company has the ability to pay you at least the prevailing wage.
While there are certainly ways to have your own company sponsor an H1B, it would be most prudent to consult an attorney on properly establishing ownership structures to protect your visa eligibility.
If you’re on one type of visa, how does changing visa status work?
Generally for most changes of visa status, you (or your employer) will submit a petition to the USCIS. For most visas, you can file premium processing and, if your petition is approved, you should be able to start working at your company within a couple of weeks.
If you’re going to an early-stage startup, you will likely get a Request for Evidence (RFE) from the USCIS asking you to provide additional evidence, typically within 90 days. Currently, typical processing time is about 5–9 months. However, you can at any time pay USCIS an additional $2,500 for “Premium Processing”, and your petition will be adjudicated in 15 days.
Before you file for a change of status you should know the fundamentals about your current valid visa, including when your valid visa expires and what countries you can visit in the waiting period (if need be), as well as your eligibility for the new visa status you’re applying for. There may not be much risk to petitioning for a change in status (e.g. if you’re moving from H1B to your own startup) as long as you apply before your visa expires.
What’s the outlook for obtaining an H1B as we head into a Biden administration? Will things change or get easier?
It is likely that change will happen, but not immediately, and mostly in rhetoric rather than in law. The Trump administration was stricter on evidentiary requirements (e.g. requiring multiple types of documentary proof), but temporary work visa laws themselves did not shift much. If the incoming Biden administration does ease requirements or make other reforms, it’s unlikely that the changes will permeate through the USCIS until months or more down the line. What we do predict is more consistent messaging about travel, border restrictions, etc. in the upcoming months.
What’s the deal around obtaining an O1? What are VC’s long-term expectations for immigrant entrepreneurs?
The O1 visa, which is granted for “extraordinary ability” in a specific field, can put immigrant entrepreneurs on the path towards visa independence as a long-term goal. Unless you already have an Olympic medal or Nobel Prize on your resume, the best strategy to build up your case for an O1 is to first pick a niche area to which you can bring expertise. Then you can gain credentials and documentation of achievements in your chosen field. Venture funds can help facilitate these credential-building opportunities — for example, at Unshackled Ventures, we connect our immigrant entrepreneurs to tech journalists and other public-facing opportunities to demonstrate their thought-leadership.
In the U.S., the majority of VCs will expect entrepreneurs to be working full-time on their venture. VCs want their founders to have time to focus on building and innovating, which can be understandably more difficult for immigrant entrepreneurs who face visa issues. Unshackled’s unique fund structure works on short-term solutions to allow founders to focus 100% on building (e.g. transferring change of status to Unshackled as employer until the startup is robust enough to self-sponsor) and long-term solutions to whatever path is right for the founder (whether it’s O1, green card, etc.). | https://medium.com/unshackled-ventures/how-to-start-a-company-on-an-h1b-visa-top-tips-for-immigrant-startup-founders-27bd9244de07 | ['Unshackled Ventures'] | 2020-12-29 21:48:16.786000+00:00 | ['Immigrants', 'H1b', 'Startup', 'Venture Capital', 'Founders'] |
How I Made My First £1,000 As A Freelance Writer | Pitch Yourself Relentlessly
I looked for publications that posted written content in my niche and interests, made a list of them, and pitched. Relentlessly.
At the start, when you don’t have a big name as a writer, it’s highly unlikely that people will contact you directly to start writing for them; hence, you will need to reach out to them.
You will need to introduce yourself to them — whether that’s by finding the contact email on their website or by finding the editor on Linkedin, Twitter, or elsewhere on social media. However you can, you need to open the lines of communication with them.
Let them know who you are, what your main areas of writing are, and what you have to offer them as a freelance writer. Send links to your portfolio so they can see it. It’s important that you know exactly what you can offer the publication/brand and why your portfolio is relevant to them, because they need to know why they should give you an opportunity over someone else.
They may have a brief of the types of pitches that they take. Or they have a specific type of content they produce. Your job is to tailor your pitch to what you think they are looking for in new writers for their platform.
The harsh reality is that you may not get any replies to your pitches. You may simply get a reply saying they aren’t taking pitches or new work right now. That is the name of the game. Keep going. Keep sending your pitches out.
I was lucky in that, in my first month of seriously pitching myself as a writer, I pitched myself to a connection of a known publication with whom I had a rapport due to my support of her brand for some years. I made it clear what I wished to offer the blog publication, and I showcased my passion for delivering quality work to them.
I was given the opportunity to deliver a trial piece of work for the publication and see where it went from there. | https://medium.com/the-brave-writer/how-i-made-my-first-1-000-as-a-freelance-writer-5e558267a35b | ['Tonte Bo Douglas'] | 2020-11-11 13:02:47.491000+00:00 | ['Freelance', 'Freelancing', 'Freelance Writer', 'Writing', 'Writer'] |
10 Key Takeaways from Google’s Material Design Guidelines | The Material palette generator can be used to generate a palette for any color you input. Hue, chroma, and lightness are adjusted by an algorithm that creates palettes that are usable and aesthetically pleasing. — Material Color System Guidelines
Trying to create a color palette from scratch can be tedious and often less effective (since you’d have to calculate the values yourself) than using a tool like Material’s Palette Generator (located near the bottom of the page). The best part is that the colors generated already meet accessibility standards, so you’re spared the hassle of checking your palette against WCAG guidelines. You should still probably double-check your designs with a plugin like Stark, though.
4. Color
As you’re considering colors and how to use them, Material also has strong tips and tools for using color in an interface.
Show brand colors at memorable moments that reinforce your brand’s style. — Material Color System Guidelines
Think of your brand colors like salt and pepper on a plate of avocado toast. Too much and it overpowers the natural flavors, too little and it’s bland. When adding colors to reinforce your brand to the interface, be thoughtful about when and where they’re added.
By limiting the use of color in your app, the areas that do receive color — things like text, images, and individual elements like buttons — will get more attention. You’ll notice that apps like Instagram and Twitter that feature many colorful posts and unpredictable content tend to have a pretty plain interface. This design element is subtle, but it takes the focus away from the interface in favor of the content.
Color indicates which elements are interactive, how they relate to other elements, and their level of prominence. Important elements should stand out the most. — Material Color System Guidelines
When an element’s appearance contrasts with its surroundings, the user understands that it has greater importance than its surroundings. We can use color and color weight to establish a hierarchy within an interface. The weight of a color refers to how saturated that color is. More saturated colors will appear more vibrant and bold, thus giving them a greater visual weight.
More prominent, bolder information will draw the user’s eyes first, and then they will move on to the supporting information below it. If one element is more important than another, it should be of a greater visual weight. Thus, the user can quickly skim the page and distinguish between the various levels of importance.
5. Material’s type scale generator | https://uxdesign.cc/10-key-takeaways-from-googles-material-design-guidelines-3b0867f0465a | ['Danny Sapio'] | 2020-09-21 14:50:44.570000+00:00 | ['UI', 'Design', 'Product Design', 'UX', 'Tech'] |
My Father Will Not Be Giving His Permission For Me To Get Married | You know how sometimes you don’t realize something until it just… hits you?
That is exactly how I came to the revelation that I do not want my partner to ask my father for his permission to take my hand in marriage.
Now, you may think that I’m about to go on a rant about why I think it’s a silly tradition/old/irrelevant in this day and age. But the reality is that I don’t necessarily think that at all. I think it’s something that could be very special to some fathers and daughters depending on the nature of their relationship.
But… my father and I don’t have that kind of relationship.
I could go on and on about all of the reasons why we aren’t close but I’m going to just focus on the specifics of why I do not want my boyfriend to ask his permission to marry me. | https://medium.com/fearless-she-wrote/my-father-will-not-be-giving-his-permission-for-me-to-get-married-aa4f1f5cb295 | ['Carrie Wynn'] | 2020-12-02 16:56:32.181000+00:00 | ['Relationships', 'Life Lessons', 'Mental Health', 'Self', 'Family'] |
Can You Know Too Little? | Can you know too little?
Short answer: Yes, of course. There is always more to be learned.
Long answer: Sometimes, less is more.
Much of this thought is connected to our modern society, and the information overload we’re fed daily. On any single given subject, there are probably a minimum of a half a dozen outlets reporting. Multiply that number in certain circumstances, and the number of viewpoints becomes utterly untenable.
Truth at its base comes in three brands — mine, yours, and the absolute. When, however, you factor in the individuality of people, that number grows exponentially.
For example: Every time Trump puts out a new Tweet, countless sources, credible and otherwise, will examine, analyze, exaggerate, minimalize, report on, and otherwise share it. Here there will not just be three brands of truth, but uncountable brands instead.
Everyone has opinions. These are based on experience, education, situation, environment, personal perspectives and viewpoints, and more. Sometimes you as an individual can hold multiple opinions on a single topic. The point is, truth as we believe it, outside of the absolute, is going to be rather variable.
If you go to the trouble of taking in multiple sources reporting on a single thing like a Trump Tweet, you could easily drive yourself mad. Odds are pretty good you’ll make yourself feel negative, either in agreement or disagreement with one of those opinions you encounter. From there, it doesn’t take much to become really distressed.
The point being that knowing less is to your betterment.
To know too little is better than overwhelm
This is very topic specific. Overall, knowledge is power. But there is a fine line between knowledge and overwhelming information.
You may desire to be apprised of current affairs. This can help in making informed decisions in elections, whether to participate in protests, call and e-mail members of Congress, and so on.
But do you need to dig into the story and get all the gory details? Does intimate knowledge of a particularly unpleasant or troubling subject serve you?
If it is going to bring you down, cause you to feel negative, or otherwise distress you, likely the answer is no.
To know too little is not the same as being ignorant. Ignorance is a total lack of awareness. It is knowing nothing at all about a subject. Knowing too little, on the other hand, means you are aware of the subject, but probably not a subject matter expert.
If knowledge is power, how can there be too much? The real question is, do you actually need to know this? | https://mjblehart.medium.com/can-you-know-too-little-77ba121427bc | ['Mj Blehart'] | 2019-05-02 12:05:24.871000+00:00 | ['Personal Development', 'Self Improvement', 'Empowerment', 'Psychology', 'Mindfulness'] |
Sqair Air Purifier Real World Test | Sqair was kind enough to lend me a unit for a few days to test it out. I’ll first give my general thoughts on the device and then present my experiment to show how effective the Sqair is at removing PM2.5 from a room.
I was very pleased with the look of the device, as it didn’t stick out as much as other air purifiers (the IQ Air purifier looks like an industrial battery). The controls are dead simple, with four settings: off, 1, 2, and 3. Fan speed 1 is very quiet (like, very quiet). Fan speed 2 is certainly audible even in a room with a little background noise, but it easily blends in. Fan speed 3 is downright noisy, but as you will see below there isn’t much need to use this speed unless you are looking to really move some air through the machine.
The Experiment
One thing I really wanted to test was how effective the Sqair is at clearing pollution from a room. I’ve heard criticism about air purifiers from several people who are skeptical of how well they circulate air. The common question is, “How do I know it isn’t just cleaning the air in one corner of the room?”. So I set up an experiment to see how well the Sqair works.
At my school we have a quiet study room that is 10m² and has ceilings of 3 meters. That makes 30 cubic meters of air in the room. The room has a door that seals completely and has no air vents. This made the perfect room to conduct an experiment with an air purifier. The only problem is that in early September there is essentially no air pollution outside, and inside PM2.5 levels hover around 5 µg/m³. So I had to get creative and make some air pollution. | https://medium.com/mongolian-data-stories/sqair-air-purifier-real-world-test-bceca2bd7da | ['Robert Ritz'] | 2019-09-19 06:53:02.051000+00:00 | ['Environment', 'Air Pollution', 'Data Science', 'Mongolia'] |
Segmentation of Online Shop Customers | Segmentation of Online Shop Customers
With Web Analytics Data and k-Means Clustering
Identifying clusters of similar Customers
In this article I will describe how we can segment customers based on web analytics data from an online shop. Based on the results, on-site personalization can be realized and targeted campaigns can be started for the users in the segments.
On the way there, we will first explore the data in more detail (“Explorative Data Analysis”), then do suitable preprocessing of the data, calculate the segmentation, and finally visualize the clusters. For the calculations we will use Google Colab.
Data
The data is from the Kaggle data platform and contains web tracking data for one month (Oct. 2019) from a large multi-category online shop. Each line in the file represents an event. There are different types of events, such as page views, shopping cart actions, and purchases.
The record contains information about:
event_time / When was the event triggered? (UTC)
event_type / view, shopping cart, purchase
product_id / product ID
category_id / category ID
category_code / category name
brand / brand name
price / price
user_id / customer ID
user_session / session ID
The data is available as a CSV file as an export from a customer database platform for analysis.
Let’s import the data:
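The embedded code for this step is not shown here, so below is a minimal, hypothetical sketch of the loading step. The column names follow the schema listed above; the two sample rows and the use of an in-memory buffer (in place of the path to the real downloaded CSV) are purely illustrative.

```python
import io

import pandas as pd

# Tiny inline sample mimicking the file's schema; for the real analysis,
# pass the path of the downloaded monthly CSV to read_csv instead.
sample = io.StringIO(
    "event_time,event_type,product_id,category_id,category_code,brand,price,user_id,user_session\n"
    "2019-10-01 00:00:00 UTC,view,44600062,2103807459595387724,,shiseido,35.79,541312140,s1\n"
    "2019-10-01 00:00:01 UTC,purchase,3900821,2053013552326770905,appliances.environment.water_heater,aqua,33.20,554748717,s2\n"
)

df = pd.read_csv(sample, parse_dates=["event_time"])
print(df.shape)  # the real file has over 42 million rows and 9 columns
```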
First look at the data
There are over 42 million records available for the month of October 2019.
First 10 records of the dataset
This data comes from over 3 million visitors. Over 166,000 different products were purchased.
Example Customer Journey
To show an example of a customer visit, we look at all entries that exist for a certain Session_id and try to interpret them:
Example Customer Journey
The user viewed several iPhones
Purchased an iPhone in one click (with no shopping cart event)
Viewed 2 unknown products of the brand arena
Viewed some Apple headphones and bought a pair
Afterward, he viewed a more expensive pair but decided not to buy it
Example Customer History
To view all actions of a specific user in that month, we filter all records with his User ID.
Example Customer History
Explorative data analysis
How many events were recorded in the web analysis on each day?
Number of events recorded over time in Oct. 2019
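A daily event count like the one charted above can be sketched as follows; the three-row frame is an illustrative stand-in for the full event table.

```python
import pandas as pd

# Stand-in data; in the article this is the full 42-million-row frame
df = pd.DataFrame({
    "event_time": pd.to_datetime(
        ["2019-10-01 10:00", "2019-10-01 12:00", "2019-10-02 09:30"]),
    "event_type": ["view", "view", "purchase"],
})

# Count events per calendar day
events_per_day = df.groupby(df["event_time"].dt.date).size()
print(events_per_day)
# events_per_day.plot(kind="line")  # produces a chart like the one above
```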
Number of event types
Which events occur in the data, how often?
Type of events
The majority of the data consists of page views (96%); the rest consists of shopping cart and purchase actions.
Features of the visitors
We calculate the most important features for each visitor and put them together in a table.
Pageviews
Visits
Number of purchased products
Number of products in the shopping cart
Total expenditure
Expenditure per visit
Pageviews per visit
Shopping cart actions per visit
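The per-visitor features above can be computed with a grouped aggregation; a sketch is below. The stand-in data is made up, and using "cart" as the raw value of shopping-cart events is an assumption about the file's encoding.

```python
import pandas as pd

# Made-up stand-in for the full event table
df = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2],
    "user_session": ["a", "a", "b", "c", "c"],
    "event_type":   ["view", "cart", "purchase", "view", "view"],
    "price":        [10.0, 10.0, 10.0, 5.0, 7.0],
})

visitors = df.groupby("user_id").agg(
    pageviews=("event_type", lambda s: (s == "view").sum()),
    visits=("user_session", "nunique"),
    purchases=("event_type", lambda s: (s == "purchase").sum()),
    cart_actions=("event_type", lambda s: (s == "cart").sum()),
)

# Total expenditure only counts purchase events
spend = df[df["event_type"] == "purchase"].groupby("user_id")["price"].sum()
visitors["total_spent"] = spend.reindex(visitors.index, fill_value=0.0)

# Per-visit ratios
visitors["pageviews_per_visit"] = visitors["pageviews"] / visitors["visits"]
visitors["spend_per_visit"] = visitors["total_spent"] / visitors["visits"]
print(visitors)
```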
We filter the purchases from the actions
In the next step, we filter the purchases from the data in order to be able to analyze them more precisely. We save the result in a separate table.
Key figures on purchases
How many products are purchased by one buyer?
What is the average purchase value per buyer?
On average, each buyer makes slightly more than 2 purchases.
The average purchase value per buyer is 773.85.
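Both the filtering step and the two key figures above can be sketched like this, again with a tiny made-up stand-in frame:

```python
import pandas as pd

# Made-up stand-in for the full event table
df = pd.DataFrame({
    "event_type": ["view", "purchase", "cart", "purchase", "purchase"],
    "user_id":    [1, 1, 2, 2, 2],
    "price":      [3.0, 3.0, 8.0, 8.0, 4.0],
})

# Keep only purchase events in a separate table
purchases = df[df["event_type"] == "purchase"].copy()

# Average number of purchased products per buyer
products_per_buyer = purchases.groupby("user_id").size().mean()

# Average total purchase value per buyer
value_per_buyer = purchases.groupby("user_id")["price"].sum().mean()
print(products_per_buyer, value_per_buyer)
```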
Brand popularity
From which brands are products bought?
Let’s look at a bar chart of the top 10 brands.
Popular Brands
For further analysis, we group the purchases into groups of the most common brands (the top 5). And the rest into a group “Others”.
We calculate for each buyer the share of purchases in the 6 brand categories, and store them in the buyer table.
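Grouping into the top-5 brands plus “others” and computing each buyer's share might look like the following; the brand names and prices are made up, and taking the share of purchase *price* (rather than purchase count) is an assumption.

```python
import pandas as pd

# Made-up purchase records
purchases = pd.DataFrame({
    "user_id": [1, 1, 2],
    "brand":   ["samsung", "nokia", "samsung"],
    "price":   [100.0, 50.0, 200.0],
})

# Top 5 most frequent brands; everything else becomes "others"
top_brands = purchases["brand"].value_counts().head(5).index
purchases["brand_group"] = purchases["brand"].where(
    purchases["brand"].isin(top_brands), "others")

# Share of each buyer's spend that falls into each brand group
brand_share = purchases.pivot_table(
    index="user_id", columns="brand_group",
    values="price", aggfunc="sum", fill_value=0)
brand_share = brand_share.div(brand_share.sum(axis=1), axis=0)
print(brand_share)
```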
Product categories
Which product categories are available?
The product category exists in the form of a hierarchical code. We extract the first level and save it as a separate characteristic.
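Extracting the first level of the dot-separated hierarchy can be done with a string split; mapping missing codes to "unknown" is an assumption.

```python
import pandas as pd

purchases = pd.DataFrame({
    "category_code": ["electronics.smartphone",
                      "appliances.kitchen.washer",
                      None],
})

# First token of the dot-separated hierarchy; missing codes -> "unknown"
purchases["top_category"] = (
    purchases["category_code"].fillna("unknown").str.split(".").str[0])
print(purchases["top_category"].tolist())
```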
Purchases per top-level category
There are 13 main categories. We add the share of the purchase price in each of the main categories as additional features to the table of buyers.
Adding purchase characteristics to the characteristics of all visitors
We now add the purchase characteristics to the table of all visitors, and thus obtain a table with all visitors and characteristics.
So we have data on 3,022,290 users, with 27 characteristics stored for each.
Limitation of the number of users
In order to keep the calculation of the clusters and the visualization within limits, we will limit ourselves to the first 50,000 users in the following.
Conversion to matrix format for cluster calculation
Before we can start the calculations for clustering, we have to convert the data into the appropriate format as a 2-dimensional array.
Scaling of the data
To ensure that all characteristics are present on a uniform size scale, the matrix is scaled by shifting by the mean value and dividing by the standard deviation.
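Shifting by the mean and dividing by the standard deviation is exactly what scikit-learn's StandardScaler does; a sketch with a tiny made-up matrix:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Made-up stand-in for the visitor feature matrix
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

# Center each column to mean 0 and scale to standard deviation 1
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled.mean(axis=0))  # ~[0, 0]
print(X_scaled.std(axis=0))   # ~[1, 1]
```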
Calculation of customer segments with different numbers of clusters
The “k-Means method” is used to calculate the segments. It is a cluster-analysis technique that partitions a set of objects into k groups, where the number k has to be given in advance.
Since we are dealing with a very large amount of data, we use the “mini-batch” variant of the procedure, which uses only part of the data in each iteration to calculate the new cluster centers.
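The mini-batch variant is available as scikit-learn's MiniBatchKMeans; a sketch on synthetic blobs standing in for the scaled visitor matrix (all parameter values below are illustrative, not the ones used for the real run):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
# Two well-separated synthetic blobs as stand-in data
X = np.vstack([rng.normal(0, 0.3, size=(100, 4)),
               rng.normal(3, 0.3, size=(100, 4))])

# Each iteration fits on a mini-batch instead of the full data set
kmeans = MiniBatchKMeans(n_clusters=2, batch_size=64,
                         random_state=42, n_init=3)
labels = kmeans.fit_predict(X)
print(np.bincount(labels))  # cluster sizes
```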
How to set the optimal number of clusters (“k-value”)?
We calculate the clustering for different k-values, and then search for the best value. The calculated silhouette score is a measure for the quality of the clustering. The closer the value is to 1, the better the quality of the clustering. We use it to determine the number of clusters.
Silhouette score for different k values
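The k-selection loop behind the chart above might be sketched like this; the three synthetic blobs are stand-in data, so the loop should pick k = 3 here.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Three well-separated synthetic blobs as stand-in data
X = np.vstack([rng.normal(0, 0.3, size=(80, 4)),
               rng.normal(3, 0.3, size=(80, 4)),
               rng.normal(-3, 0.3, size=(80, 4))])

# Fit a clustering for each candidate k and score it
scores = {}
for k in range(2, 7):
    labels = MiniBatchKMeans(n_clusters=k, random_state=42,
                             n_init=3).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

# The k with the highest silhouette score (closest to 1) wins
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```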
Now we calculate the clusters with the determined optimal cluster number.
And consider the number of customers allocated to each segment.
Cluster sizes
Visualization of the clusters
To get an impression of the clustering, we create a visualization with the method “tSNE”. t-Distributed Stochastic Neighbor Embedding (tSNE) is a technique for dimensionality reduction and is particularly well suited for the visualization of high-dimensional data sets.
Cluster visualization with tSNE
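The 2-D embedding behind a plot like the one above can be computed with scikit-learn's TSNE; the blob data and parameter values below are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
# Two synthetic blobs standing in for the high-dimensional feature matrix
X = np.vstack([rng.normal(0, 0.3, size=(40, 6)),
               rng.normal(4, 0.3, size=(40, 6))])

# Reduce to 2 dimensions for a scatter plot, coloring points by cluster
embedding = TSNE(n_components=2, perplexity=15,
                 random_state=42, init="pca").fit_transform(X)
print(embedding.shape)  # (80, 2)
```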
Now let’s calculate the visualization with a much smaller number of clusters. It is much more difficult for the procedure to separate the individual areas into different segments.
tSNE with 5 clusters
Characterization of the segments
In order to enable an interpretation of the segments, we create graphical representations that show, for example, the characteristics of categories for each segment at once as a “radar chart”. This can help to interpret the meaning of the segments.
Categories in clusters
For example, a segment that has high purchase shares in the area of “Children” and “Sports”, and others with purchases in “Electronics”. | https://towardsdatascience.com/segmentation-of-online-shop-customers-8c304a2d84b4 | ['Andreas Stöckl'] | 2020-05-26 22:04:28.135000+00:00 | ['Visualization', 'Data Science', 'Online Marketing', 'Ebusiness', 'Machine Learning'] |
Implementing The Strategy Pattern Using Lazy-Loaded Components in Angular Version 9 | Implementing The Strategy Pattern Using Lazy-Loaded Components in Angular Version 9
Simply stated, the Strategy Pattern is a behavioral pattern that governs the selection of different algorithms or procedures for solving a common problem at runtime. Many times, this pattern is implemented by loading a computational library that conforms to an Interface and is applied uniformly by one or more application components.
In front-end applications, a strategy may involve more than just an algorithm or library API. There is often need to load one of several different implementations of an entire view based on runtime criteria. In an Angular setting, we may think of this as needed to lazy-load a Module, but it is even more helpful if we can load individual components, outside any Module definition. Enter Angular Version 9.
In the article below, I covered dynamic component generation inside lazy-loaded routes, with the ability to control component display via JSON data,
Three components were dynamically generated inside a lazy-loaded route and displayed in an order dictated by a data file. However, since components prior to Angular Version 9 were required to exist in a Module definition, each component was defined in the entryComponents section of the route’s Module. Each component was loaded into the application when the route was lazy-loaded, even if the component was not actually required.
Angular Version 9 provides the ability to lazy-load individual components that are outside the scope of any Module. This allows for a very powerful implementation of the Strategy Pattern.
The Application
The sample application for this article is a mashup of multiple requirements from past applications across Flash, Flex, and Angular. The application illustrates two different solutions to the point-in-circle problem, i.e. identify the circles in a collection for which a single point is in the strict interior.
Now, if it were just a matter of selecting an algorithm to solve the problem and then displaying the results in a single view, the strategy could be implemented with a lazy-loaded JavaScript library (a topic for another article). In this demo, the results are viewed differently, not only in terms of layout, but also in terms of render environment. Each view renders a simulation in which the point moves inside the view area and the simulation updates the point/circle intersections. However, one algorithm is rendered into a Canvas context and the other into SVG. The Canvas and SVG displays also differ based on algorithm characteristics.
Because of the differing views and dramatically different dependencies between views, this problem is best solved by lazy-loading an individual component (and its dependencies).
Before deconstructing the application, I wish to point out that this is not an article on ‘how to’ lazy-load Components in Angular Version 9. A Google search will return about half a dozen good articles on the steps involved in that process. Instead, this article discusses how to build a complete application around the technique.
And, on that note, here is the GitHub for the complete application.
Algorithms and Geometry
Well, let’s get this out of the way early. This is an Angular article, not a math/algorithms article. As a consequence, the two algorithms I chose for the demonstration are rather rudimentary and easy to follow. As far as the necessary math, here is everything we need.
Blah, blah … math … blah, blah … geometry … blah blah … API.
There, we’re finished :). Everything you need is encapsulated away into Typescript libraries in the /src/app/shared/libs folder. I even gave you a bonus — a number of pure functions for operations involving circles from my private Typescript Math Toolkit library.
The two algorithms implemented in this demo were chosen to illustrate techniques you may find helpful in future applications. In one case, there could be an arbitrary (and large) number of circles, but the circles do not move (only the point moves). In the second case, there are a modest number of circles (hundreds) and a small percentage of them are expected to move each simulation step along with the point.
Both the point and circles move according to a random walk. Don’t worry — remember the power of the API — it’s all done for you :)
The first algorithm (simulation rendered into a Canvas) uses a simple quad map. The viewable area is divided into quadrants. The Typescript Math Toolkit basic circle class automatically computes quadrant locations given bounds, so a quad map is very easy to implement. Since the circles never move, the map need only be initialized once before the simulation begins.
The second algorithm (simulation rendered into SVG) is designed for a situation where there are a modest number of circles (hundreds), and a small percentage of them move at each time step. All circles are tested for point intersection, but the test is optimized to return false quickly. It is more expensive to test for full intersection, so instead of testing ‘if something is true, then do the following operation,’ it is more efficient to test ‘if something is false, skip and go to the next one.’ It’s very easy to quickly return false for the intersection test and that is the majority expectation for most tests.
Common (visual) operations for each display include indicating the specific circles that contain the test point at each simulation step. Previous visual indications from a prior simulation step must be reset to default display at the next time step. Beyond that, the Canvas display shows the quadrants and highlights the current quadrant in which the test point lies. The SVG display only shows the circles and test point.
It’s only necessary to understand the algorithms employed in both simulations at a very high level. In fact, the only reason to dive deeper into their implementation is if you want to improve the point-circle intersection algorithm further.
Application Deconstruction
Let’s look at each of the application requirements, and then discuss how they are all integrated together. We need
1 — Two components to implement each algorithm and each set of display requirements for a single simulation. Commonality among these components should be specified in an Interface.
2 — Lazy-load only one of the components based on some runtime criteria (an injected algorithm ID, for example).
3 — Create a means to provide inputs and handle component outputs.
4 — Run the simulation, i.e. advance one step at regular time intervals.
For demonstration purposes, the main app component, /src/app/app.component.ts, serves as the smart component that lazy-loads one of the two point-in-circle simulation components. The main app module allows specification of an algorithm ID (1 for the first algorithm and Canvas render, 2 for the second algorithm and SVG render).
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { CircleService } from './shared/services/circle-service';
// Injectable constants
import { ALGORITHM_ID } from './tokens';
@NgModule({
declarations: [
AppComponent,
],
imports: [
BrowserModule
],
providers: [
CircleService,
{ provide: ALGORITHM_ID, useValue: 1 },
],
bootstrap: [AppComponent]
})
export class AppModule { }
This ID is injected into the main app component and then used to select the run-time strategy for the point-in-circles simulation. Change the value from 1 to 2 in order to lazy-load a different simulation component.
The two point-in-circle simulation components are
Algorithm 1 (Canvas): /src/app/components/point-in-circle-1/pic-1.component.ts
Algorithm 2 (SVG): /src/app/components/point-in-circle-2/pic-2.component.ts
Since everything is driven by the main app component, let’s look at that one in detail.
Main App Component
This component, in /src/app/app.component.ts is the smart component that lazy-loads one of two presentational components based on an algorithm ID. Although the algorithms and the display employed by each of these presentational components is different, as far as the rest of the application is concerned, they have a common set of inputs, outputs, and an exposed API. These are defined in the Interface in /src/app/shared/interfaces/pic-component.ts,
import { OnChanges } from '@angular/core';
import { Subject } from 'rxjs';
export interface IPointInCircle extends OnChanges
{
intersect$: Subject<string>;
next: () => void;
step: number;
}
The simulation step (controlled by the smart component) is an Input to the point-in-circle component. This component is not statically defined in a template (there is no known binding to that Input), so Angular does not know to call the ngOnChanges lifecycle method. As a result, the smart component must do this manually. So, each lazy-loaded component must implement the OnChanges interface. Each of these components must also provide a next() method in order to advance the simulation one step.
While there are several approaches to wiring up component outputs, I personally prefer a reactive approach. Each of the lazy-loaded components must expose a Subject<string> to indicate that the point intersected a circle with a particular id. The smart component that lazy-loads one of the two point-in-circle components simply subscribes to intersect$.
This interface is the only knowledge the smart component has about the two lazy-loaded presentational components.
Typical practice for lazy-loaded components is to place them into an ng-container, and this is the case for the main view in this application, /src/app/app.component.html,
<p>Algorithm ({{algorithm}} Render into: {{render}})</p>
<p>Total intersections: {{intersections}}</p>
<ng-container #picContainer></ng-container>
The template variable is used to assign a ViewChild in /src/app/app.component.ts,
@ViewChild('picContainer', {static: true, read: ViewContainerRef})
private picContainer: ViewContainerRef;
The main app component receives the algorithm ID and all other classes necessary for lazy-loading through injection,
constructor(@Inject(ALGORITHM_ID) public algorithm: number,
private _compFactoryResolver: ComponentFactoryResolver,
private _injector: Injector)
{
this.render = RenderTargetEnum.CANVAS;
this.ComponentInstance = null;
this._duration = 1000;
this._delta = 500;
this._currentStep = 0;
this.intersections = 0;
}
The render variable indicates whether the rendered output is Canvas or SVG. The ComponentInstance variable is a ComponentRef<IPointInCircle>. The instance property of that variable allows direct calls to the presentational component after lazy-loading.
A duration of 1000 time steps at increments of 500 msec is set in the constructor along with the current step number of zero and a total intersection count of zero.
On initialization, the component begins the simulation via a call to the private __initSimulation() method, whose relevant code is shown below.
You could place this code in the then clause of a Promise, but I personally prefer async/await (although beware there is a known problem with this if your target is ES2017). Component lazy-loading is shown below,
Following is the relevant block of code for lazy-loading the component associated with algorithm #1,
const {Pic1Component} = await import('./components/point-in-circle-1/pic-1.component');
factory = this._compFactoryResolver.resolveComponentFactory(Pic1Component);
this.ComponentInstance = this.picContainer.createComponent(factory, null, this._injector);
It is very important to place the precise path of the lazy-loaded component as a string literal inside the import. For example,
const url: string = './components/point-in-circle-1/pic-1.component';
.
.
.
const {Pic1Component} = await import(url);
compiles with a warning, and the application will not run.
The destructuring assignment must reference an exported Object from the referenced file, in this case, Pic1Component.
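The reason for the string-literal requirement is that the bundler must see the path statically in order to emit a separate chunk for it; a runtime-built string defeats that analysis. The following runnable sketch (using a Node built-in module rather than the article's component, purely for illustration) shows the same destructure-from-dynamic-import shape:

```typescript
// Analogous to: const {Pic1Component} = await import('./.../pic-1.component');
// The destructured name must match an export of the target module.
async function demo(): Promise<string> {
  const { basename } = await import('node:path');
  return basename('/src/app/components/point-in-circle-1/pic-1.component.ts');
}

demo().then(name => console.log(name)); // pic-1.component.ts
```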
Once a point-in-circle simulation component is lazy-loaded, the following blocks of code in __initSimulation() handle outputs, inputs, and the actual simulation,
this.ComponentInstance.instance.intersect$.subscribe( (id: string) => this.__updateIntersection(id) );
This subscribes to the output from the simulation component and executes a handler every time the simulation detects a point-circle intersection.
The simulation is run within an RxJs timer.
timer(100, this._delta)
.pipe(
map( (msec: number): void => {
this._currentStep++;
if (this._currentStep > this._duration) {
this.destroy$.next();
this.destroy$.complete();
}
}),
takeUntil(this.destroy$)
)
.subscribe( () => {
this.ComponentInstance.instance.ngOnChanges({
step: new SimpleChange(this._currentStep-1, this._currentStep, this._currentStep === 1)
});
// run the next simulation step
this.ComponentInstance.instance.next();
});
The variable destroy$ is a local Subject that is used to indicate the end of simulation. This happens when the duration limit is reached or if the component is destroyed.
The parent (smart) component controls the step count of the simulation as it may later be desired to run a simulation forward or backward. The presentational (simulation) component accepts this step as an Input and reflects it in its own UI.
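Stripped of RxJS and Angular, the driver's per-tick bookkeeping reduces to a small amount of pure logic. This sketch (my own framework-free restatement, not code from the repo) advances the step counter and reports when the duration limit has been reached, which is the condition under which destroy$ fires:

```typescript
// Framework-free restatement of the timer subscription's bookkeeping.
// In the article this lives inside an RxJS timer with takeUntil(destroy$).
interface SimState { currentStep: number; duration: number; }

function advance(state: SimState): { state: SimState; done: boolean } {
  const currentStep = state.currentStep + 1;
  return {
    state: { ...state, currentStep },
    done: currentStep > state.duration,   // mirrors: _currentStep > _duration
  };
}
```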
The simulation (lazy-loaded) component Input is accepted and processed through that component’s ngOnChanges() method. That handler is called each time step with a new SimpleChange object. Note that the call is made on the component reference’s instance property.
this.ComponentInstance.instance.ngOnChanges({
step: new SimpleChange(this._currentStep-1, this._currentStep, this._currentStep === 1)
});
and then, each iteration of the timer executes the next simulation step,
this.ComponentInstance.instance.next();
That concludes the deconstruction of the smart component that lazy-loads one of the point-in-circle simulation (or presentational) components. The simulation components are largely like any other Angular presentational component, but with one important exception. We’ll look at that next by deconstructing the Canvas-rendered component.
Lazy-Loaded Component Structure
The lazy-loaded simulation component for the first algorithm looks largely like any other component. This is the template,
/src/app/components/point-in-circle-1/pic-1.component.html
<div pic-canvas
class="circlesContainer"
[strokeWidth]="1"
[strokeColor]="'0x0000ff'"
[gridColor]="'0xff0000'"
(onIntersect)="onIntersection($event)">
</div>
<p>Simulation Step: {{step}}</p>
An attribute directive is used to delegate Canvas creation (via PIXI.js), computations, and rendering. In addition to defining class properties necessary to fulfill the IPointInCircle interface, a ViewChild is defined for the attribute directive,
export class Pic1Component implements IPointInCircle, OnChanges
{
public intersect$: Subject<string>;
@Input()
public step: number;
@ViewChild(PicCanvasDirective, {static: true})
protected _picContainer: PicCanvasDirective;
constructor()
{
this.intersect$ = new Subject<string>();
}
.
.
.
The next two methods complete the IPointInCircle interface contract,
public ngOnChanges(changes: SimpleChanges): void
{
let prop: string;
let change: SimpleChange;
for (prop in changes)
{
change = changes[prop];
if (prop === 'step')
{
this.step = +change.currentValue;
}
}
}
public next(): void
{
if (this._picContainer) {
this._picContainer.next();
}
}
Note that this is the next() method that is called from the RxJs timer subscription after Pic1Component is lazy-loaded.
Note that Pic1Component has an important dependency, the PicCanvasDirective. While Pic1Component may exist outside any NgModule in the application, the directive still needs to be tied to the component of which it is a ViewChild via a module definition. The cleanest way to do this and ensure that the directive is only loaded when Pic1Component is loaded is to use a local (non-exported) module,
@NgModule({
imports: [CommonModule],
declarations: [
Pic1Component,
PicCanvasDirective
]
})
class PicCanvasModule {}
You will see a similar module in /src/app/components/point-in-circle-2/pic-2.component.ts for the second simulation component.
Note that Pic1Component (and Pic2Component, for that matter) has another dependency, the injected CircleService. This service currently contains read-only variables, but is not listed as a provider in the above NgModule, nor has an injector been configured for it. In this application, the lazy-loaded component is loaded into the application root, so we are using the root injector. Note that a provider for this service is contained in /src/app/app.module.ts. It is often the case that either the root or a route-level injector can be used for such purposes. Other tutorials on the mechanics of lazy-loading components describe how to configure an injector just for services used by a lazy-loaded component.
This is all the structure that is necessary to dynamically load and instantiate a complete simulation for the problem at hand using only an algorithm id to differentiate between the two simulations.
The final section is optional and deconstructs the PicCanvasDirective. It is not necessary to understand the details of this directive in order to build your own application with lazy-loaded components. So, feel free to quit reading here :)
Inside A Simulation
The reference file for this deconstruction is /src/app/shared/libs/render/canvas-render/pic-canvas.directive.ts
This is an attribute directive that creates a PIXI.js-generated Canvas inside some DOM container (typically a DIV). The directive employs a number of computational helper functions, as can be seen in the imports,
import { TSMT$Circle } from '../../circle';
import { CircleService } from '../../../services/circle-service';
import { canvasRenderCircle } from '../render-circle-canvas';
import { pointRandomWalk } from '../../point-random-walk';
import { TSMT$PointInCircle } from '../../circle-util-functions';
import { TSMT$getQuadrant } from '../../geom-util-functions';
import { RandomIntInRange } from '../../random/RandomIntInRange';
@Directive({
selector: '[pic-canvas]'
})
export class PicCanvasDirective implements OnInit, OnChanges
.
.
.
TSMT$Circle is the Typescript Math Toolkit (a private library for which I have open-sourced some of the contents) basic circle class. This class is optimized for a quad-map as it automatically updates the quadrant location of a circle every time its coordinates are mutated (provided that bounds are set in advance). The class also handles the case where a circle may lie in more than one quadrant.
Quadrants in which circles are located are referenced by four Typescript Record structures,
protected _quad1: Record<string, boolean>;
protected _quad2: Record<string, boolean>;
protected _quad3: Record<string, boolean>;
protected _quad4: Record<string, boolean>;
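The idea behind these four Records can be sketched independently of the directive. In this hypothetical restatement, each Record maps a circle id to true when the circle overlaps that quadrant, and a circle straddling a boundary simply appears in more than one Record:

```typescript
// Hypothetical quad-map bookkeeping; the directive keeps four separate
// class-level Records, collapsed here into an array for brevity.
type QuadMap = Record<string, boolean>;

const quads: QuadMap[] = [{}, {}, {}, {}];   // quadrants 1..4 at indices 0..3

function register(circleId: string, quadrants: number[]): void {
  quadrants.forEach(q => { quads[q - 1][circleId] = true; });
}

// Only the ids in the point's current quadrant need an intersection test:
function idsToCheck(quadrant: number): string[] {
  return Object.keys(quads[quadrant - 1]);
}

register('c0', [1]);        // fully inside quadrant 1
register('c1', [1, 2]);     // straddles the 1/2 boundary
```

This is why the per-step intersection work scales with the circles in one quadrant rather than with all circles in the scene.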
Relevant information about the DOM container for the Canvas (specifically width and height) is available in the ngOnInit() handler, which is where most of the PIXI.js initialization is performed. The initial set of circles is also drawn at that time in the __initCircles() method.
There are four class variables that store circle-related information. The first is _circleRefs, an array that stores a reference to each TSMT$Circle in the simulation. Every circle has a visual representation, a graphics context or display object in PIXI.js, which is stored in another array, _circleDO.
When a circle is identified as containing the test point during a simulation step, the circle reference is placed into the _identified array and its corresponding graphics context into the _identifiedDO array.
Here is the code for the circle initialization, which is responsible for placing circles throughout the simulation area. PIXI.js drawing has been covered in a number of my past articles and will not be discussed in the current deconstruction.
The most important code (for this algorithm) occurs after placing the circles ‘randomly’ inside the display area.
c.setBounds(0, 0, this._width, this._height);
if (c.inQuadrant(1)) {
this._quad1[c.id] = true;
}
if (c.inQuadrant(2)) {
this._quad2[c.id] = true;
}
if (c.inQuadrant(3)) {
this._quad3[c.id] = true;
}
if (c.inQuadrant(4)) {
this._quad4[c.id] = true;
}
Each circle instance contains internal bounds that may be set by the caller. Bound setting and each subsequent coordinate mutation alters the quadrant in which the circle lies. That quadrant may be queried and is used to set the primitive quad-map (currently four separate Records in PicCanvasDirective).
Most of the simulation work is performed in the next() method that advances the simulation one step.
It seems like a long method, but it performs a small number of simple steps. First, the _identified array is used to redraw any circles identified as containing the point in the prior simulation step with their default visual properties.
The random walk for the point is executed and the coordinates updated in a destructuring assignment,
[this._px, this._py] = pointRandomWalk(this._px, this._py, CircleService.LOW_RADIUS, CircleService.HIGH_RADIUS);
The current quadrant occupied by the point (stored in the _curQuad class variable) is updated with a utility method,
this._curQuad = TSMT$getQuadrant(this._px, this._py, 0, 0, this._width, this._height);
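Neither helper's source is shown in the article, so the following are hedged sketches of plausible implementations of pointRandomWalk and TSMT$getQuadrant. The quadrant numbering here (1 top-left through 4 bottom-right) and the polar-step random walk are assumptions; the library versions may differ:

```typescript
// Hedged sketch: step the point by a random distance in [low, high] at a
// random angle. Returned as a tuple for the destructuring assignment above.
function pointRandomWalk(px: number, py: number, low: number, high: number): [number, number] {
  const r = low + Math.random() * (high - low);
  const theta = Math.random() * 2 * Math.PI;
  return [px + r * Math.cos(theta), py + r * Math.sin(theta)];
}

// Hedged sketch: split the bounding box at its midpoint and classify the
// point. Numbering 1..4 is an assumption, not confirmed by the article.
function getQuadrant(px: number, py: number, left: number, top: number, width: number, height: number): number {
  const rightHalf = px >= left + width / 2;
  const bottomHalf = py >= top + height / 2;
  if (!bottomHalf) return rightHalf ? 2 : 1;   // top row
  return rightHalf ? 4 : 3;                    // bottom row
}
```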
The current quadrant is checked against the prior quadrant from the previous simulation step. If the quadrant number changes, then it is necessary to adjust the subset of circles that are checked for point-circle intersection. For example, while both the current and prior quadrant are one, the check is made against keys in the local _quad1 Record. This variable might be unchanged for several simulation steps. If the quadrant changes to three, for example, tests for point-circle intersection should be made against keys in the local _quad3 Record. This is handled by the block of code,
if (this._curQuad != this._prevQuad)
{
this._prevQuad = this._curQuad;
this._check = Object.keys(this[`_quad${this._curQuad}`]);
}
The statement,
this[`_quad${this._curQuad}`]
creates a reference to one of the local class variables (Records), _quad1, _quad2, _quad3, or _quad4, depending on the current quadrant number. A forEach function runs the point-circle intersection test against only the circles that lie in the current quadrant,
this._check.forEach( (id: string): void =>
{
circ = this._circleRefs[+id];
g = this._circleDO[+id];
if (TSMT$PointInCircle(this._px, this._py, circ.x, circ.y, circ.radius))
{
this._identified.push(circ);
this._identifiedDO.push(g);
g.clear();
g.lineStyle(this.strokeWidth, '0xff0000');
g.drawCircle(circ.x, circ.y, circ.radius);
this.onIntersect.emit(id);
}
});
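The intersection predicate itself is not shown, but a point-in-circle test is typically a squared-distance comparison. Here is a hedged sketch of what TSMT$PointInCircle likely computes (not the library source); comparing squared distances avoids one square root per circle per step:

```typescript
// Hedged sketch of the point-in-circle predicate: the point (px, py) is
// inside or on the circle centered at (cx, cy) with radius r when its
// squared distance to the center does not exceed r squared.
function pointInCircle(px: number, py: number, cx: number, cy: number, r: number): boolean {
  const dx = px - cx;
  const dy = py - cy;
  return dx * dx + dy * dy <= r * r;
}
```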
Note that an Angular EventEmitter is used for the directive to convey an intersection to a containing component (Pic1Component). That component in turn executes the next() method of an RxJs Subject, which is subscribed to in the main app component (the primary smart component in this application).
Here is a screenshot from one time step of the Canvas-rendered simulation,
Note that the quadrant currently containing the test point is highlighted in green. Run the application and change the algorithm id to 2 to see how the SVG simulation display differs.
Summary
This was a long article, but I hope it illustrates how lazy-loaded components can be applied in an actual application. In this hypothetical situation, two different simulations with different algorithms, dependencies, and output are differentiated by an algorithm id. A run-time decision is made as to which simulation to execute. The simulation component and its dependencies are lazy-loaded.
This is in contrast to an alternate scenario where a simulation with identical dependencies and visual display is differentiated by algorithm only. That situation can be handled by lazy-loading a computational library instead of a component.
Good luck with your Angular efforts!
EnterpriseNG is coming
EnterpriseNG is a two-day conference from the ng-conf folks coming on November 19th and 20th. Check it out at ng-conf.org | https://medium.com/ngconf/implementing-the-strategy-pattern-using-lazy-loaded-components-in-angular-version-9-38efceb1f49f | ['Jim Armstrong'] | 2020-09-30 19:45:07.730000+00:00 | ['Math', 'Typescript', 'Angular 9'] |
My Journey Towards Reconciling My Faith and Sexual Orientation | I always used to say to my family, friends, and everyone I meet that dealing with depression has been the hardest thing I ever experienced in my life. I spent my teenage years grabbling with mental health issues, and that made me feel empty because I lost so many years sinking in a deep sorrow while my fellow friends were enjoying every moment of their childhood.
The main reason for my depression was that many kids at school and people I knew harshly bullied me. I used to go home crying every day without telling anyone about what happened, since there was a lot of shame behind it. I used to be criticized for the way I walked, my soft voice, the way I spoke, and pretty much everything about me. Going outside was like torture for me, but so was staying home, because I used to receive the same comments from my family members, from the people I loved the most. I felt like I had to adjust everything about me, and that made me feel lonely because I was continually asked to fix myself. Yet, I was the only one who didn't find anything to fix, which made my childhood tough, and I still haven't healed from those wounds even now that I am an adult.
I grew up in a conservative Muslim family, and from an early age, I knew that Islam was my identity. Yet, as early as elementary school, I knew that I preferred the company of my male classmates; I could easily forget myself in the charm of some guys I used to study with, and it felt amazing to be attracted to someone.
However, and from a very young age, I learned that the kind of attraction and love I was experiencing was a synonym of loneliness and abandonment.
After a few years, I came across the word gay, and like many other young gay people, the realization of our sexual orientation comes as a shock. In most cases, accepting this takes years. Some can admit this to themselves only at a mature age. A few are never capable of this.
No gay person chooses his or her sexual orientation. Who would be foolish enough to decide to become gay when life for gay people is so much harder than for heterosexuals? Gay people are often teased at school. In numerous cases, gay people's own families and friends distance themselves and even reject them. Getting a job is often more difficult for gay people than for straight people. And gay people may lose their job when their boss or colleagues discover their sexual orientation.
Gay/lesbian people reported more acute mental health symptoms than heterosexual people, and their general mental health also was poorer. Gay/lesbian people more frequently reported severe physical symptoms and chronic conditions than straight people. Differences in smoking, alcohol use and drug use were less prominent.
Often heterosexuals look at gay people with contempt and may also express their aversion to them verbally. In some countries, gay people are persecuted, imprisoned, and even sentenced to death. Gay people rightly ask who would choose this.
Truth be told, the primary reason behind my depression was my inability to accept my sexual orientation, since I thought my faith was against me.
The first person who told me that I could be both gay and Muslim was my psychiatrist. When he said that to me, I was astonished, because it was the first time that someone validated my feelings; it was the first time that someone told me that I had the right to exist as a whole person, without giving up on something as crucial as my sexual orientation.
After I met with a psychiatrist, I started researching on my own. He informed me of some organizations in the world that helped LGBTQ+ people reconcile their sexual orientation with their faith. I reached out to them and joined their social network. I used to ask a bunch of questions every day, and many other people were more than happy to help me and guide me towards the right path.
Having someone who told me that it was okay for me to be gay and Muslim wasn't enough, because I needed to be convinced myself that it was possible to be both. So after years of research, I came up with the following reasons that justify how it is reasonable and healthy to be both LGBTQ and Muslim:
The only mention of same-sex sexual activity in the Qur'an, which is the holy book in Islam, to my knowledge, is regarding the city of Sodom, and the most credible interpretation is that the problem wasn't that they were practicing same-sex sexual activity; it's that they were practicing highway rape, preying on vulnerable travelers. Many people use these verses to justify their hatred towards gay people, citing specific verses that mention the sexual activity of the people of Sodom or Lut.
The people of Sodom were already married with children, which means they were heterosexuals who preferred to engage in sexual activity with men; they weren't gay by inner disposition.
The holy book doesn’t mention anything regarding the love between two adult men.
Not once, *not once*, does the Quran explicitly state that homosexuality is a sin and is condemned.
The Quran doesn't speak about lesbian or asexual people, or other forms of sexual diversity.
Most of the verses used to attack LGBT folks have disputed interpretations, and there is a solid argument that the verses aren’t referring to something like modern gay relationships but are instead referring to temple prostitution or pedophilia.
The basis of marriage for most LGBT people is love. There is nothing in Islam forbidding love marriage, even though marriage for love does present a different pool of possible candidates than was generally present in the time of Revelation, there’s no evidence, to my knowledge, that someone of the same sex is legally forbidden as a marriage partner. There isn’t any. There is even a part of the Qur’an that explicitly lists everyone we aren’t allowed to marry; most of these prohibitions pertain to familial relations. People of the same sex are notably absent from this.
For me, anyone who says that homosexuality is "un-Islamic" or "sinful according to the Quran" clearly knows nothing about Islam and the teachings of its scripture. I can clearly say that religion doesn't speak, but people can speak and interpret things the way they want. It is very misleading to listen to people and their judgment. It is essential to develop a critical mindset, especially when it comes to discussing religious matters.
I now created a Facebook group for the LGBTQ+ Muslims to provide them with the support they need and guide them towards full acceptance. I know how hard and lonely this journey can be. Even today, I still have friends of mine who always go against me and try to convince me that it wasn’t okay to be gay and Muslim. These kinds of friends chose to remain reluctant to open their minds and ease things for the LGBTQ+ people, and it was their choice. It is also my choice to walk away from this kind of friendship; I owe it to my happiness and peace of mind because, for me, my religion and the God I chose calls for love, acceptance, and understanding.
Let’s be honest here; we choose a religion to become happy in life, to become more stable, not to be miserable. Many LGBTQ+ people attempt suicide just because they feel that they don’t have a spot in their religion; others choose to leave their faith even if it’s something vital for them.
What matters, when it comes to spirituality, is not the gender of the persons you are attracted to, but rather the way you love them that matters. As far as I can tell, as long as you can commit to, support, and genuinely care for the people you love, you’re right in God’s books.
Gay people have not chosen their sexual orientation. Many of them have the same desire as heterosexuals’ to live in a loving and faithful relationship. For me, Islam represents one of my arms, and being gay is the other arm, and I refuse to cut one of them. | https://medium.com/an-injustice/my-journey-towards-reconciling-my-faith-and-sexual-orientation-39d44c4a6252 | ['Mohamed Maoui'] | 2020-04-27 22:21:45.218000+00:00 | ['LGBTQ', 'Faith', 'Mental Health', 'Equality', 'Islam'] |
I Haven’t Touched Anyone In 30 Days | This feels weird.
I made a mask. I put on a leather jacket. I’m fine. Everything’s fine.
The last time I had plans or wore pants with a zipper was March 8th. I know because I Instagrammed it. It was a sunny Sunday and I sat on a roof with some friends thinking how nice it was to be sitting outside again. I assumed it was the first roof of many. We talked about Netflix documentaries over beers and wore jackets because it was a little too cold out. After that we went to dinner and hugged goodbye and I haven’t touched a person since. That was 30 days ago. It was a simple, casual event I had no idea would be my last human contact for the foreseeable future. Ignorance is bliss, but it is also a motherfucker.
I have been self isolating, socially distancing, quarantining, basically pretending I’m a princess in a tower from a 5th story Brooklyn window since the evening of March 8th. I didn’t really know that’s what I was doing until March 11th, the day the shit hit the fan. I wish we’d all known to get started in January. Maybe back then the idea of our lives stopping on a dime was too much to fathom. You don’t want to know the things I’m allowing myself to fathom now. Sometimes I tell my WhatsApp group. They’re kind to me.
As a single woman who would rather be rent-poor than ever see another human being’s dishes in the sink again, I live alone without roommates. The only other inhabitant of my home is a 13-year-old long-hair mutt cat that resembles what might happen if an average member of the feline species and a Swiffer duster on a shelf at Wal-Mart decided to mate. It’s just the two of us in here, and only one of us has a prescription for Xanax.
I'm used to being alone. Yes, I will admit that the current climate is thoroughly pushing it, but overall I'm used to a life containing a minimal amount of human touch. Having experienced my fill of casual sex AKA sex where I was hoping it would turn into more than sex and he was hoping to go home and play video games as soon as possible, I'm pretty much over trivial touch. In this phase of my life I value connection with more depth. I intend to ensure that the next man granted bedroom privileges intends to spend quite a bit of time there and would indeed be disappointed if that were not the case.
So while the online dating moronic masses are still trying to get quick, easy, and free ass during a global pandemic, getting the bed to myself is nothing new. That’s not what this is about. This is about human contact of any kind which, for me, has been limited to the lady at the grocery store who passes my lemons over a scanner and yells at me to stand back. I have found this interaction wanting.
The assumption was that I’d feel lonely. That was a given. Even me, a stubborn, minimally patient advocate for single women encased in NASA-grade titanium would be susceptible to the very basic human emotions and wants that come along with solitude of this intensity. But what I’ve actually come to feel after 30 days of not sitting in the same room as someone else is fear. I feel a very permeating fear around 12:30pm every day that doesn’t let up until I go to sleep or, on occasion, find something good on Netflix. And even then, the relief is a sham. It’s scary feeling this alone, and this void of human connection. The cat does her best, but she weighs eight goddamned pounds, what’s she supposed to be, the big spoon?
The fear is a kind of new normal, not without variation, lest I get bored. Occasionally it offshoots into bouts of extreme impatience, deep sadness, or consuming apathy that finds me standing in the middle of a room doing literally nothing, as if under a spell of nothingness that not even the sparkliest craft project could pull me out of. I’d get into the anxiety, but you have Twitter. I’m sure you understand.
I’ve tried to find ways to minimize my discomfort, including but not limited to meditation, white wine, and eating lemon curd from a jar, but really…nothing fixes this. This is just a thing I’m living through, and will live through, for the duration of the pandemic. Heaven knows there are healthcare workers, sick people, and their families dealing with actual problems. My complete restructuring of self and fear that I’ll meet the end of the world alone is laughable in comparison. But I also think playing the comparison game both invalidates people’s feelings and sends us all down a dark path we don’t want to see the business end of. Everything sucks. Let’s agree to that.
When we first started existing in closed quarters, our faces seen and voices heard only from behind suddenly VERY clean screens or not at all, I was so afraid for my coupled friends, and especially for my friends with kids. Trying to unexpectedly earn their living from home with other people around, some of whom require constant care and attention, sounded…forgive me, fucking horrendous. If it’s been difficult for me, a grown woman, to establish and stick to a reasonable schedule and something resembling smart nutrition, how the hell are parents doing this with six year olds?
But as the days have marched on, and as rules regarding sleeping, waking, snacking, and Zooming have melded together like a box of crayons over an open flame, I’ve come to see my friends who have partners and families as lucky, at least from my perspective. They’re not actually alone. They’re stepping on Legos and Instagramming ungodly defilings of their sofa cushions with household products, sure, but they’re not alone. The scenario in my apartment is really alone. It is also some bullshit.
I don’t want children and I’m very happy to ramble into the internet about that, but dammit if they don’t look convenient for cuddle purposes right now. Is that why people have them? For hugs? Those tuition fees have to have some kind of tangible ROI I’d imagine. I could do with a hug I think. I get a little sad trying to imagine the next time that will be considered an acceptable activity. Do you ever just stop and think for a moment how batshit all of this is? How truly bizarre life is right now? Actually, you know what…don’t. I’m sorry.
There’s an understanding about check-in texts, calls, and FaceTimes. We’re probably technically seeing more of each other now than we did before. But what I think is becoming keenly aware to us in a way that was always true but is now impossible to ignore: Digital doesn’t count.
Yes, I am living for my Friday night House Party app happy hours with friends from New York, London, and Sydney all joining in despite one of us being on his way to work at a hospital while the rest of us are, ahem, shitfaced. I do treasure an unscheduled FaceTime from a friend as one might regard finding an unbroken shell on the beach. But it isn’t enough. It isn’t nearly enough. I don’t know why that is, but I do see some irony in the fact that during a technological age when we’ve never been more connected, we’ve also never been further apart. There are some things, friends, an app cannot exist for.
It’ll change us. I don’t think life operates normally after something stops the entire globe from spinning around like COVID-19 closed her eyes and put her finger down on it. I would like us to be less digital. I would like us to hang out more and charge our devices less. I would like us to stop being so relieved when someone cancels plans. I would like us all to take more initiative to make plans.
I want to be outside this summer. I want to be back on that roof, mad at myself for forgetting sunscreen and not afraid to let a friend try my beer to see if she likes it. I want to find out if somewhere out there, there’s a human being who wants to make sure I never sit through another global disaster alone. Being single is just fine. Being alone and afraid for this long is not.
There’s lonely, then there’s whatever this horseshit is. I feel home, and not at home. Quiet, and silently screaming. Productive, and also like I never accomplish anything at all. I’m off. It’s all off. And I’m one of the lucky ones. I work from home, did I tell you that? On its face, my day doesn’t look very different. But remove yourself from society involuntarily and it becomes pretty clear that the thing that keeps us together is our freedom and ability to be together. Maybe when we can’t connect, we fall apart.
Please stay home. And, if possible, give someone a hug—for me. | https://shanisilver.medium.com/i-havent-touched-anyone-in-30-days-28da6baad807 | ['Shani Silver'] | 2020-04-07 18:25:10.180000+00:00 | ['Humor', 'Relationships', 'Life', 'Culture', 'Writing'] |
Two Things I Learned in Computer Programming | I have always hated programming. Ever since my first encounter with a computer code in high school, I knew this was something I wouldn’t pursue in the future. I even graduated college barely passing my two programming courses. However, when I was searching for available jobs for BS Mathematics graduates, most of the jobs I wanted required some programming knowledge. Of course I still tried applying because I thought maybe it’s okay since I possessed some of the skill requirements (which are mostly soft skills), but when interviews came, I would shrink into self-consciousness because deep down, I knew I was unqualified for the job. That’s when I realized I have to learn these programming languages.
Fast forward to today, I have now decided to take a path towards data science and turns out, programming is not as scary as I always deemed it to be. Here are two of the interesting things I learned so far:
First is using the command line to switch from one directory to another within your computer. To access it, just type cmd in your computer's search bar and press Enter, or, if you don't like using your mouse, you can press the Windows key and R simultaneously and a box like this will appear at the bottom left of your screen:
Just type cmd on the space provided, press enter and voila! You have accessed the command line.
Now, maybe you are wondering why it matters to know how to use the command line when we can simply click through different icons in our graphical user interface. That’s because there are certain computer commands that are only accessible through the command line. But for now, what I will show you is changing from one directory to another.
Once you have entered the command line, what you will see is a black screen with three lines of white text. The third line is the most important because it shows where you currently are in your computer. As you can see in my screen below, the user’s name is CATH because the previous owner of this laptop is named Cath.
Now, let’s try changing directories! We can do it by following the four simple steps below:
1. To switch from the current directory to the desktop directory, just type cd<space>desktop, where cd stands for change directory.
2. Now that we are in our desktop directory, we can go back to the previous directory by typing cd<space>.. (cd, space, dot dot) and pressing enter.
3. Also, it is possible to move to two directories in a single line. If I want to go to my WORK folder in my desktop, I will just type cd<space>desktop\work
Note that I used a backslash symbol to include my second directory.
4. Lastly, to go back to my original directory, I will just type cd<space>../.. where I now used forward slashes.
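Putting the four steps together, a full session might look like this (the user name CATH and the WORK folder follow the article's screenshots; your prompt will show your own user name):

```
C:\Users\CATH> cd desktop

C:\Users\CATH\Desktop> cd ..

C:\Users\CATH> cd desktop\work

C:\Users\CATH\Desktop\WORK> cd ../..

C:\Users\CATH>
```

Notice how the prompt itself always tells you where you are, which is why that third line of text matters.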
Those are just four out of so many things to learn about command line. As I mentioned earlier, there are certain commands accessible only using the command line.
The next thing I want to share is the Spyder IDE, a scientific programming environment written in the Python language. As you can see in the photo below, there are only three main parts: the code block, the variable explorer, and the python console.
Basically, the code block is where you write your code, the variable explorer is where the assigned variables and their corresponding value appears, and the python console shows the output of your code. Let us try to write some code.
In the photo above, I used the print function in the code block to show the text “Hello!” in the python console. Noticeably, nothing different happened in the variable explorer since I have not assigned a value to any variable yet.
Note: To run your code, just press ctrl+enter or simply click the green play button at the top left (below “Debug”).
Now that I have assigned values to variables x, y, and z, they are reflected in the Variable Explorer. However, nothing was added to our "Hello!" text in the Python console, since we did not print anything. Let us try printing z on the next line and see what appears in the Python console.
If I print “z” including the quotation marks, it treats z as plain text. On the other hand, without the quotation marks, it treats z as a variable.
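Outside Spyder, the same session runs in any Python interpreter (the values for x, y, and z here are my own example, not taken from the screenshots):

```python
x = 1
y = 2
z = x + y  # in Spyder, x, y, and z now show up in the Variable Explorer

print("Hello!")  # plain text: Hello!
print("z")       # with quotation marks, z is treated as plain text: prints z
print(z)         # without quotation marks, z is a variable: prints 3
```

The last two lines are the whole difference between printing the letter z and printing the value stored in z.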
Actually, this Spyder IDE is not completely new to me since we used it in one of my two programming courses. However, it feels like I'm still learning it for the first time since, as I mentioned earlier, I used to hate programming. The only difference now is that I am excited to learn more.
That is it for now. I hope you enjoyed learning and relearning with me. | https://kelsey-lopez.medium.com/two-things-i-learned-in-computer-programming-6dfffc65a109 | ['Kelsey Lopez'] | 2020-09-17 10:55:06.566000+00:00 | ['Python', 'Command Line', 'Spyder', 'Basics', 'Programming'] |
I Let My Kid Get Hurt — A Lot. It makes me cringe now, but will… | That consequence-based learning carries so much value may be intuitive to some, but I only started to understand it while teaching an undergraduate Psychology course in learning theory a few years ago, when I was still pregnant — and damn, was I grateful for that crash-course in parenting. I came out of that class convinced that taking a learning theory course — or at least reading the textbook — would be an amazing tool for parents. For example, did you know that punishment doesn’t work? Well, it’s very difficult to make work. More on that in a future post.
Anyway, once I realized the value of consequences, I started thinking of the learning process in terms of this question: Where is the point at which I can maximize learning (consequences) while preventing my kid from getting seriously hurt? In other words: There’s a line between maximum learning and serious injury, and my job as a parent is to let her approach that line without actually crossing it.
So I started seeking out opportunities for her to test boundaries. Why not? It’s the fastest, most effective way for her to learn. Hell, it saves me a lot of work because I have to supervise her less down the road if I let her learn for herself than if I’m constantly safeguarding her.
When I see my kid about to do something daring, like fling herself across the yawning gap from the coffee table to the couch whilst donning a superhero cape, and she looks at me with that gleam in her eye, I don’t tell her: “Don’t do that.” Instead, I quickly size up the dangers: Anything she could knock her teeth out on? Impale herself with? Then I say, “Go ahead. Try it!”
I’ve let her stand on flimsy cardboard boxes, fall right through the top, and land smack on her butt on the kitchen floor. I’ve let her crawl into the top of the cat castle and tip the whole thing over. I’ve let her put her hand too close to the stove and watched her pull her hand back quickly, wide-eyed — though I was holding her far enough away that she would not have been able to *actually* reach the burner (this is what I’m talking about with drawing the line between learning and crisis).
I'll add that a big part of making this work has been knowing my kid's natural limitations and propensities. For example, she naturally tends to be cautious with certain things, like avoiding the street, moving vehicles, and strangers — but apparently, pink puppy dog scooters transform her into Evel Knievel.
And again, I have to do the work of creating environments where maximizing learning is still safe, like restraining her from the stove, or moving ankle-breakers off the floor in a jumping zone. Again, that’s drawing the line between learning and serious injury. But it’s way easier than constantly chasing after her and yelling at her not to do things. Also, I don’t get as sick of the sound of my own voice saying the same damn thing over and over again. | https://medium.com/home-sweet-home/i-let-my-kid-get-hurt-a-lot-2a710bfa174c | ['Not A Doctor'] | 2020-08-26 11:00:35.904000+00:00 | ['Parenting Advice', 'Parenting', 'Family', 'Psychology', 'Parenting Toddlers'] |
Did A Martha’s Vineyard Bookstore Send The President Shameful Books? | Everyone’s dug into President Obama’s official reading list for his vacation this week on Martha’s Vineyard: Richard Price’s Lush Life (which is terrific), Kent Haruf’s Plainsong (which sounds interesting), David McCullough’s John Adams (which sounds like, um, he should just watch the movie instead), couple others. But we did not notice until now that the President will also be receiving a very interesting sounding package from a local bookstore. “Over at Bunch of Grapes…a clerk acknowledged that they had sent books, but when asked which ones, she sounded as if she were on the press office payroll. ‘Nothing [we] can share with anyone,’ she said.” What do you think they sent? Jackie Collins? Anais Nin? ? Maybe The Surrender: An Erotic Memoir by Toni Bentley. God, remember that? | https://medium.com/the-awl/did-a-marthas-vineyard-bookstore-send-the-president-shameful-books-4a8d69d8ebe1 | ['Dave Bry'] | 2016-05-13 00:41:55.819000+00:00 | ['Erotica', 'President Obama', 'Books'] |
Science Is Not My Religion | The Truth About Science and Religion
When I first stumbled into science, I truly thought I had escaped the patriarchy and dogma of religion. It was the naïve overconfidence of an eighteen-year-old excited to have found her passion.
Over time, through my own reading as well as history of science coursework and participating in dialogue online, I realized how flawed the scientific world is with the same exact problems that plague organized religion: prejudiced dogmas that support the superiority of white, cishet men.
It infuriated me. I had left one space filled with self-absorbed men who think they are endowed with divine knowledge for another.
Whether they are claiming to be the moral or intellectual authority, the goal is the same — to control the narrative through power. Both collect loyal disciples, followers who will affirm their power.
At first, I bought into science just like I had bought into religion as a kid. I thought it was infallible. I thought I needed to convert everyone around me to it.
But I had attuned myself so closely to the issues with my childhood religion that I couldn’t ignore their presence in science, too. And just as I had with Christianity, I began speaking out about those issues.
It turned out other people had been saying something about them for a long time — women who had fought for the right to practice science, people of color who struggled for a place at the table, queer folks who wanted to be their true selves at their jobs.
People were aware of the problems because they are so much bigger than science and religion. They are flaws and prejudices inherently built into a broken system that privileges whiteness, maleness, and heteronormativity.
Recognizing this makes reform that much more terrifying. If you upset the status quo in one field or community, you’re threatening the whole system.
And those who benefit most from the system are going to fight back.
Photo by Edwin Andrade on Unsplash
Is there still hope?
I didn't leave religion to find a new god in the laboratory. I left it to find what I perceived as a better method for seeking out objective truths. Science isn't built on faith — it's built on evidence, peer review, constructive criticism, and reproducible results. To accept its results without applying these crucial methods would be to depend on faith rather than evidence. People are not infallible.
Science is not my religion. But in many ways, it can remind me of it. This is why I work hard to help it change and improve.
For the first time in my life, I’ve found a community of people like me — nonbinary folks who love science; bisexuals in microbiology; people who also left an anti-science community to study their passion. It’s a place where, for the most part, I can be my unapologetic self.
But it’s only like that now because of the hard work so many folks have put into making science a more inclusive, accepting place. And there is still more work to be done.
That’s why I don’t begrudge the Christian community. There are many people within it, including dear friends, who are actively working to make their faith a space that is safe and welcoming for those who were formerly rejected.
Science has just as much work to do. And those within the scientific community need to understand that although we may be practicing a method for which the goal is objectivity, we are only human. Our results are often colored by biases and our own intentions.
Recognizing that fact is the first step toward reconciling with it. We should all be in a never-ending pursuit of the unvarnished truth. The only way to achieve that is through collective criticism of ourselves and our work — checking each other for mistakes and hidden agendas, anything that would interfere with reaching the most objective conclusion possible.
And in order to do that, we need everyone to be a part of the collective community. By excluding anyone based on race or sexual orientation or gender or ability, we risk objectivity.
We risk truth. | https://readmorescience.medium.com/science-is-not-my-religion-288b0c95729 | ['Sarah Olson Michel'] | 2020-09-30 15:43:45.456000+00:00 | ['Religion', 'Politics', 'Personal Growth', 'Science', 'God'] |
I Found My Mom Groove. Let’s Find Yours | I Found My Mom Groove. Let’s Find Yours
In the realm of motherhood — or womanhood — what do you want advice on?
Photo by Court Cook on Unsplash
I’ve been dragging my feet on finishing my book.
“I’ll pick it back up after quarantine” was my first excuse. And then the quarantine got extended to May 20th where I live in Connecticut. If I put it off until then, there’s no way I’m going to finish it.
I have this vision of popping champagne with my friends on a rooftop once self-isolation is a thing of the past, and I won’t feel like I deserve that moment unless this book gets written beforehand.
For those of you who don’t know, I’m writing a book on modern motherhood. What started out as a self-improvement book based on true experiences, has now become a full-blown, deeply personal memoir, sans the self-helpery speak.
I wanted to ask you, as fellow moms and fellow women, what topics are of most interest to you in the realm of motherhood? Do you want style advice? Do you want relationship advice? Do you want personal experience stories? Do you want to know how to find the best mom friends? Do you want it all?
The excitement of continuing to write it has finally resurfaced. So I’m taking advantage of this momentary surge of motivation, and am giving myself a hard deadline of April 30th to send my completed manuscript to my lovely editor, Simone from Seattle, and my other lovely editor, Kate from Brooklyn.
To give you an overview, the book covers the bases of figuring your new self out as you begin the unpredictable journey that is motherhood.
My Hope For This Book
My hope for this book is to create a modern mom movement in which laying everything on the table in the realm of new motherhood isn’t just acceptable, but celebrated. My hope is for a new mom to read this book and be able to completely relate. My hope is for that new mom to then begin embracing her new life instead of struggling through it.
It is my story — or more specifically — my journey from the moment I found out I was pregnant up until my son’s fourth birthday. And I truly, from the depths of my soul, hope the book helps fellow moms who are struggling with mental health issues, postpartum issues, and the general overwhelm and loneliness that is the reality of new motherhood.
That said, I’m asking what you would like to see in the pages of this book. Please let me know in the comments (or email me back!). After all, I’m writing it for you!
As always, thank you for reading.
Be you.
XOXO,
Ashley
PS! Do you want these newsletters delivered to your inbox? I send them 3x per week (on Tuesday, Wednesday & Thursday mornings) in super short snippets. Content focuses largely on self-improvement, happiness, and embracing your true self.
Sign up for Ashley’s Newsletter here. | https://medium.com/modernmotherhood/i-found-my-mom-groove-lets-find-yours-76830f83987 | ['Ashley Alt'] | 2020-04-16 15:19:18.013000+00:00 | ['Self Improvement', 'Authors', 'Quarantine', 'Writing', 'Writers On Writing'] |
Announcing the Dreamit Fall Class of 2016 | Eko Devices — Non-invasive sensors and software currently enabling precision cardiac care in over 400 hospitals, health systems, and other providers across the US.
GraftWorx — “Smart” wearables and implantables for dialysis and peripheral arterial disease patients that transfer clinically actionable data from the device directly into the clinician’s EHR system.
Yosi — Eliminates patient intake from waiting rooms and provides an end to end patient on-boarding solution, making the office paperless and coordinating care between different EMR platforms.
Voiceitt — Translates unintelligible vocal sounds into clear speech in real time, enabling those with motor, speech, or cognitive disabilities to communicate with caregivers, family members, health care professionals, and society as a whole.
PadInMotion — Provides customized tablets integrated with the patient care protocols of the medical facility, enabling medical facilities to improve key quality-of-care metrics while patients are on site, along with an advanced software platform for care management outside of the facility.
BrainCheck — Mobile, rapid, and easy-to-use cognitive health assessment and management technology, connecting patients and their care teams in both the concussion and dementia markets.
PhotoniCare — Changing the way middle ear disease is managed by enabling physicians to look through the eardrum without cutting it open, saving months of treatment time per patient.
Synotrac — A patent-pending implantable medical device that is changing the way doctors look at infection after joint replacement surgery by actively monitoring joint health via a smartphone app.
Reliant Immune Diagnostics — Developing innovative diagnostic tests for detection of certain allergies, infections and diseases.
Lilu — A pumping accessory that automatically massages and compresses the breasts of pumping moms to increase milk output by up to 50% and enable moms to pump more milk in less time. | https://medium.com/dreamit-perspectives/announcing-the-dreamit-fall-class-of-2016-50fca46a0b41 | [] | 2016-10-25 15:20:52.841000+00:00 | ['Digital Health', 'Education', 'Entrepreneurship', 'Startups', 'Healthcare'] |
The US Economy Won’t Recover For Awhile, So It’s Time to Optimize for Recession. | A couple of months ago, I wrote this article, about how trying to time the market to take advantage of the so-called “Coronavirus Dip” was a risky proposition, and how the safest thing the savvy investor could do would be to do nothing. I still stand by that advice; making no significant changes to your portfolio, or your plan, is always the safest thing you can do. Giving in to irrational pessimism or untethered optimism are both signs of the same lack of emotional regulation in any market. And that, my friend, will cost you money every. single. time.
That being said, I also have to admit that sometimes, the safest thing to do is not necessarily the right thing to do. For the vast majority of the time, leaving your portfolio untouched, regardless of what the market is doing, is the right thing to do, as well as the safest thing to do. The times that we’re currently experiencing, however, I believe are unusual. And, I also believe that there are steps to be taken that might prove quite profitable for the savvy investor.
Let’s say that you’re one of the fortunate ones in the midst of this crisis. You have your health, you are still collecting a normal paycheck, and you have the financial means to continue to invest as you would normally do. If that is not your financial story, please know that there are many dozens of excellent resources out there to help you out during this difficult season. For more on those resources, click here, here, and here.
Assuming that you have the financial means to continue investing as you normally would, now might be a good time to make a few adjustments to further optimize your investment portfolio to take advantage of “bargains” or the lowered expectations of the stock market. As Benjamin Graham wrote in his foundational work The Intelligent Investor, “The intelligent investor is a realist who sells to optimists and buys from pessimists.” Surveying the landscape of the global stock market these days, we may find many, many pessimists, and not a few optimists looking for a massive rebound in the near future as well. Whether that rebound will come sooner rather than later remains to be seen. Here, then are three practical things that you can do, starting right now, to not only prevent unnecessary losses in your portfolio, but perhaps even optimize your portfolio further to take advantage of the state the market is in currently.
1. Adjust Your Mindset
Photo by Natasha Connell on Unsplash
This is the first, and most important step, you can take to achieve financial success in the stock market. The best way to start out optimizing your portfolio is to conduct a mental inventory, and make sure your head is in the right place with relation to the stock market. Now, more than usual, you must remove emotion from the equation. Whether you feel hopeful about economic recovery, or you feel despair about economic catastrophe, these feelings will not help you make sound financial decisions. Your mindset must be one of calm, and of understanding a couple of fundamental things about the current state of things.
The discipline of value investing, first described in detail by Ben Graham in the book alluded to above, reminds us of the fundamental mindset shift that the savvy, unemotional investor must make in order to succeed; we have to stop viewing stocks, options and mutual funds as inherently valuable in and of themselves, and remember that these instruments represent fractional shares of ownership of a company. The soundness of the share price is (or should be) linked to the soundness of the company or companies represented. Too often, especially in the modern age of complex investment analytics and strategy, it is easy to get solely focused on the technical aspects of the market, and forget the underlying bedrock that the market rests on: the value of the global economy.
Because of that, we can take heart in times of uncertainty like this. Especially in the case of the “Coronavirus Dip” the market meltdown was not caused by any unsoundness in the global economy. It was caused by fear. Fear of death, of illness, of a global pandemic. Business is still strong. Industry remains a powerful economic engine. Yes, there have been sectors of the world economy that have been badly damaged by the virus, but that is less a function of their lack of value than it is their lack of consumers at this point.
Remembering to assess the value of a company or group of companies, rather than purely analyzing the technical data of a stock quote, is the first and most important thing you can do to begin to optimize your financial situation. Optimizing your mindset is the key to the whole exercise.
2. Look for stocks or funds made up of companies whose fundamental value has only changed as a result of the pandemic
Photo by JESHOOTS.COM on Unsplash
Let’s take the travel industry as an example. Obviously, nobody is taking trips these days, cruise ships are sitting idly in ports, airplanes are mothballed on tarmacs, hotels and resorts stand empty, and billions of dollars of revenue are in limbo. But, if we look at the data, we see that prior to the Coronavirus arriving on our shores, the travel industry, particularly in America, was thriving.
As an example, let’s look at Fidelity Investments’ Select Leisure Fund (symbol: FDLSX). The fund is made up of several hotel, resort and cruise companies, and represents a fairly broad swath of the industry niche. Here’s the chart for the past five years of this fund:
As we can see, up through the first part of 2020, the fund was up almost 61%. That’s amazing growth, precipitated by the explosive growth in the tourist industry in the last thirty years. Of course, the fund today is not looking so rosy, shedding almost one third of its total value between Jan 31st and March 31st. But, remember, as fractional shares in the ownership of a company, nothing about the fundamentals of the business has changed, other than the lack of customers. Once the customers return, the business returns. This would be a perfect example of a potential opportunity to get into a bargain-priced offering in the tourism niche, and add a valuable stable of corporations to your portfolio.
3. Identify any investments you currently have that were “running on hype” anyway, and which may have been exposed by the overall market weakness, and dump them.
Photo by Franck V. on Unsplash
Now is no time to get sentimental. That pet stock you had that was supposed to be a sure thing, but hasn’t quite panned out, and is now in the ninth circle of Coronavirus hell? Jettison that thing, ASAP.
If nothing else, the last few months have definitely exposed some areas of the market that were running too hot, and weren’t backed up by as much fundamental value as they were by hype and noise. As this excellent article points out, the stock market was primed for a correction for years, and while the Coronavirus seems to have pushed us well past “correction” and into a full on Bear Market, that doesn’t mean that a lot of the air that’s gone out of the market was hot air anyway.
Some of the worst offenders for being overhyped and underperforming are the tech and startup stock brackets. Much like the dot-com bubble of the mid-90s, this is less a function of shaky fundamentals, and more a function of severe overcrowding. The niche is just too crowded to support. For every Apple, Microsoft and Salesforce, there are a hundred wannabes trying to cash in. And, unfortunately, many of those wannabes are less concerned with becoming solid businesses than they are with making as much money as possible before they flame out. We can all be thankful that the Coronavirus has exposed the instability and lack of staying power of many of these overnight “unicorns” and hopefully thinned the herd of these stocks somewhat, getting us back on our feet in terms of solid fundamentals and good business models. For more on what the future of startup offerings should look like post-Corona, be sure to read this article by Alexandre Lazarow. | https://medium.com/swlh/the-stock-market-wont-recover-for-awhile-so-it-s-time-to-optimize-for-recession-df883b3edfe3 | ['Jay Michaelson'] | 2020-08-18 17:50:17.270000+00:00 | ['Money', 'Coronavirus', 'Economy', 'Investing']
You’re Wrong: Apple Can Justify the $550 Price Tag | Another reason they’re so expensive is that they’re just really high quality. The headphones make the other competitors look old and bulky. When compared side-by-side, the Sony’s look a decade old.
Plus Apple chose to use metal materials for earcups and much nicer plastics on other parts. The build quality just overall looks so much better. Sure the metal makes them a lot heavier. But Apple’s willing to take that sacrifice. It just looks so much better.
Features — Specifically Spatial Audio
We’ve gotten a taste of the Spatial Audio magic on the AirPods Pro. And while I haven’t had an opportunity to try out Spatial Audio on the AirPods Max yet, I’m confident it’ll be even better than the Pros due to the additional H1 processor and a total of 9 microphones.
Apple
When you first hear about it, Spatial Audio seems like a gimmicky feature that's trying to be the next-gen surround sound. The thing is, it's actually really immersive and brings more depth to movies. I recently recommended that a friend try out Spatial Audio. This is what I got back:
whoops
Look, if you have access to AirPods Pro (or AirPods Max for that matter) you should try out the feature. It’s supported on all Apple TV+ shows, Hulu, and Disney+. Unfortunately, it has not come to Netflix or Prime yet. The feature is also waiting to be added to Macs and Apple TVs for some reason.
Regardless of Spatial Audio, the AirPods "brand" has a bunch of other features that you don't see on competitors. Starting with the AirPods connecting "magic", Apple has the experience refined to a tee.
Reviews are also starting to say that the transparency mode on AirPods Max is also next-level. The transparency mode takes cues from Spatial Audio and has the same spatial effect but with your actual surroundings. You’re more immersed in real life (pretty sad sentence, I know).
It’s the Little Things
Combine all this and we’re left with the question of whether the $200+ price difference is worth it. Apple obviously says yes. And I’m, hesitantly, kinda agreeing with them.
Yes, $550 is a lot of money. And no, I won’t be buying them anytime soon. But the headphone landscape is overall pretty expensive in the first place. Forgetting about the Sony XM4’s or Bose 700’s, the $500 range is pretty affordable and fair. And with those extra “Apple magic” features, the headphones themselves blow the Sony’s and Bose away like nothing.
I’m putting my bets on this product having a decent amount of success. Feel free to tell me why I’m wrong in the comments. | https://medium.com/macoclock/youre-wrong-apple-can-justify-the-550-price-tag-69255170d3f7 | ['Henry Gruett'] | 2020-12-26 18:09:31.823000+00:00 | ['Technology News', 'Technology', 'Apple', 'Technews', 'Tech'] |
The Crypto Gaming Multiverse Is Happening, Right Now. | The team at CryptoFights has been working very hard getting to this point. We started back in January with a goal to bring a real deep strategy video game to the #dapp market and its been a windy road, to be honest. In a previous article, I touched on how we switched from focusing on being a desktop game using metamask and staying totally on the Ethereum mainchain for all combat logic to now being mobile, using sidechains for the game logic, and having all game items using Enjin’s ERC1155 token standard. Not to mention the ability to purchase game items using in-app purchases in the full version to obtain those game items.
CryptoFights Development
We have now reached a point where we have developed the very first set of weapons, aptly named "Genesis Zero" to mark that these are the very first weapons for the game and the multiverse. None of these items will ever be recreated by us, they have no trading restrictions or trading fees, and all of our pre-sale items will be multiverse enabled. All items were handcrafted by 3D artists, with unique traits that make them one of a kind in the gaming multiverse, and given special properties and rarity tiers for use within our game. Only a specific number of weapons will be sold; you can check the quantities by visiting our presale page.
Our presale is scheduled to start on September 19th at https://presale.cryptofights.io
CryptoFights Game Trailer
The gaming multiverse is an exciting time to be a gamer. Imagine being able to collect all your gaming items in your digital wallet to be used across a multitude of games. Buying a legendary item means its legendary in ALL games in the multiverse since they can not create more of them.
To learn more about our combat mechanics and how rounds work check out this video.
Stay tuned with more CryptoFights news in our Discord channel where we are running a contest to win Ethereum and a chance to design and name your own weapon. | https://medium.com/cryptofights/the-crypto-gaming-multiverse-is-happening-right-now-3a868f5c7bbf | ['Crypto Fights'] | 2018-09-12 16:48:44.614000+00:00 | ['Gaming', 'Startup', 'Blockchain', 'Mobile', 'Ethereum'] |
Ruby’s New Exception Keyword Arguments | Ruby’s New Exception Keyword Arguments
`exception: false` and `exception: true`
Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog
In Ruby 2.6, a variety of Kernel methods get a new exception: false or exception: true keyword argument. When Kernel methods fail, some raise an error and some just return nil . This new feature lets you override that default behavior.
TL;DR: In Ruby 2.6 these examples will all work:
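A minimal sketch of those examples, using the same 'nope' inputs the rest of the article uses:

```ruby
Integer('nope', exception: false)  #=> nil, instead of raising ArgumentError
Float('nope', exception: false)    #=> nil

begin
  system('nope', exception: true)  # raises instead of quietly returning nil
rescue Errno::ENOENT => e
  e.message                        #=> "No such file or directory - nope"
end
```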
Background
What should Ruby do when unexpected things occur? Fail loudly? Carry on silently? It depends. There are reasons you may prefer to handle errors differently.
Many methods have a default behavior to either return nil or raise an error when something goes wrong. Ruby chooses what’s most appropriate as a default on a method-by-method basis.
One example of Ruby having various default behaviors is string-to-number conversion. For example, 'nope'.to_i is permissive and returns 0 , since 'nope' isn’t a number. On the other hand, Integer('nope') is strict, and raises an error. Here are a few examples of how this works:
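A sketch of the two behaviors side by side:

```ruby
# Permissive: String#to_i quietly falls back to zero.
'nope'.to_i     #=> 0
'42abc'.to_i    #=> 42 (parses the leading digits, ignores the rest)

# Strict: Kernel#Integer raises unless the whole string is a number.
Integer('42')   #=> 42
begin
  Integer('nope')
rescue ArgumentError => e
  e.message     #=> 'invalid value for Integer(): "nope"'
end
```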
The permissive String#to_i method is lax about detecting a number. When one can’t be found it just provides a default value of zero. Alternately, the strict Kernel#Integer method raises an error. If you want strict parsing but no error in Ruby 2.5 and earlier, it’s up to you to manually rescue:
Integer('nope') rescue nil
#=> nil
Exception False with Numeric Conversion
The problem with rescuing manually is that it's slow and noisy. On my machine, it's almost three times faster to use exception: false :
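One way to reproduce that comparison with the standard library's Benchmark module (the iteration count is arbitrary and the exact numbers vary by machine):

```ruby
require 'benchmark'

n = 100_000
Benchmark.bm(18) do |bm|
  bm.report('rescue nil')       { n.times { Integer('nope') rescue nil } }
  bm.report('exception: false') { n.times { Integer('nope', exception: false) } }
end
# Raising and rescuing 100,000 errors dominates the first report's time;
# the exception: false variant skips that work entirely.
```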
In Ruby 2.6, you’ll be able to use Integer('nope', exception: false) instead of Integer('nope') rescue nil for much better performance. The same applies to Float , Rational and Complex .
Float('nope', exception: false) instead of Float('nope') rescue nil
Exception True with System
The new exception keyword arguments are also available for Kernel#system. If command execution fails, the #system method’s default behavior is to fail silently and return nil:
system 'nope'
#=> nil
Before, you had to write the message and raise the error yourself if you wanted to have #system raise an error on execution failure. Now you can simply add exception: true and you’ll get an error raised with a nicely formatted message:
system 'nope', exception: true
#!> Errno::ENOENT: No such file or directory - nope
When command execution succeeds but there’s a non-zero exit status, exception: true will also cause an error to be raised instead of the false return value.
Conclusion
Thanks to Aaron Patterson for proposing this feature for numeric conversion. And thanks to Takashi Kokubun for proposing this feature for #system. The new exception: keyword arguments will ship with Ruby 2.6 when it’s released on Dec 25, 2018. These changes didn’t make it into ruby-2.6.0-preview1 but they will be part of the upcoming ruby-2.6.0-preview2 release and are available now on the nightly snapshots.
We use Ruby for lots of things here at Square — including our Square Connect Ruby SDKs and open source Ruby projects. We’re eagerly awaiting the release of Ruby 2.6!
The Ruby logo is Copyright © 2006, Yukihiro Matsumoto, distributed under CC BY-SA 2.5.
Want more? Sign up for your monthly developer newsletter or drop by the Square dev Slack channel and say “hi!” | https://medium.com/square-corner-blog/rubys-new-exception-keyword-arguments-4d5bbb504d37 | ['Shannon Skipper'] | 2019-04-18 22:18:37.207000+00:00 | ['Ruby', 'Programming', 'Software Development', 'Developer', 'Engineering'] |
How to Set Up a Deployment Pipeline with React, AWS, and Bitbucket | Bitbucket Pipeline
We jump back into our Bitbucket dashboard and from there we will want to pull up the Pipelines section.
Screenshot by the author
If you scroll down there will be a spot with templates for creating your first pipeline. We will want the option for “Deploy React app to S3”.
Screenshot by the author
This will pull up an editor with a starter script for our pipeline. Note that we are going to need to provide AWS access keys to Bitbucket as variables that will be used when the pipelines run.
Screenshot by the author
Back in AWS, we will need to also set up an IAM user for Bitbucket or you can provide the access keys for an existing user, provided they have S3 and CloudFront access in their profile.
Screenshot by the author
Once created you will be given the access key needed in Bitbucket.
Back in Bitbucket, there is a section for “Add variables” over on the right side. Expand that and we will need to provide 3 values for the following variables here: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION.
Screenshot by the author
In the bitbucket-pipelines.yml file, we just need to change 2 lines. The first is the S3_BUCKET property in the deployment to S3 section. The value will of course be the name of the bucket we created for our static site.
The second value we need to change is the DISTRIBUTION_ID for the CloudFront invalidation section. For reference, my full pipeline file is below.
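The post's full pipeline file was embedded as a gist; as a reference point, Bitbucket's "Deploy React app to S3" template looks roughly like the sketch below. The Node image tag, pipe versions, bucket name, and distribution ID are illustrative placeholders; use the values generated by your own template and AWS account.

```yaml
image: node:10.15.3  # placeholder; use the Node version your app builds with

pipelines:
  branches:
    master:
      - step:
          name: Build
          caches:
            - node
          script:
            - npm install
            - npm run build
          artifacts:
            - build/**
      - step:
          name: Deploy to Production
          deployment: Production
          trigger: manual
          script:
            # Sync the build output to the static-site bucket
            - pipe: atlassian/aws-s3-deploy:0.3.8
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                S3_BUCKET: 'my-react-site-bucket'   # <-- your bucket name
                LOCAL_PATH: 'build'
            # Invalidate cached copies so the new build is served
            - pipe: atlassian/aws-cloudfront-invalidate:0.1.1
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                DISTRIBUTION_ID: 'E1ABCDEF234567'   # <-- your distribution ID
```

The two changes the post describes map to the S3_BUCKET and DISTRIBUTION_ID lines.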
Next, we click on the button to Commit the file to the repository. Now if we go back into the Pipelines section we should see at least one pipeline run.
Screenshot by the author
If we go into the detail section for the latest pipeline run we should see a Deploy button if the pipeline ran successfully. Click that Deploy button to kick off the deployment process.
Screenshot by the author
Click that Deploy button under the “Deploy to Production” step and that will pop up a confirmation dialog where we click Deploy one more time to kick off the process.
Screenshot by the author
After that runs successfully, if we pull up the domain name for our CloudFront distribution, we should be greeted with the React logo. Note that every time you push to master this pipeline will run, and after successful runs you'll have the option to manually kick off the deployment to production. | https://medium.com/javascript-in-plain-english/how-to-set-up-a-deployment-pipeline-with-react-aws-and-bitbucket-aeaa9b0bdd9c | ['Matthew Brown'] | 2020-12-07 15:30:15.276000+00:00 | ['JavaScript', 'Programming', 'Cloud Computing', 'DevOps', 'Software Development']
You Don’t ‘Need’ Hormones To Be Non-Binary | You Don’t ‘Need’ Hormones To Be Non-Binary
There is no one, definitive, “right” level of hormone — or prescription pathway — for non-binary trans people
Image by gthylla from Pixabay
I see this a lot — which hormones do you take if you’re non-binary?
Now, myself, I take the full HRT (Hormone Replacement Therapy) dose for someone transitioning female to male. But that is because gender is a spectrum, and on it, I’m more on the masculine end of the spectrum. For my mental health and mental happiness, I’m comfortable being viewed as male, as much as I’m comfortable being non-binary androgynous.
But that is my story and my approach.
I’m also looking to eventually have most of the hair on my body (other than that on my head) removed by laser, as it causes me some distress, but not a high level of distress.
What about other non-binary people?
Which hormones should non-binary people take?
As I mentioned — gender is a spectrum. People present their gender externally, and everyone’s gender identity falls somewhere upon this spectrum. This is how you get “tomboys” (or girls who like to dress in more masculine clothing but identify very firmly as female), non-binary people, cis-male/cis-female individuals, and trans people!
But with this spectrum in mind, that means there is no one, definitive, “right” level of hormone, or prescription pathway, for someone who is trans non-binary. Some individuals may opt for lower levels of hormones to bring things more to the middle ground.
Some non-binary people may opt for the full dose level. Some people may opt for no hormones but go for surgical interventions. Others may do nothing — they are comfortable in their body as it is, or they don't have the funding to do such. Non-binary people typically make decisions in consultation with themselves and a trained medical professional to find the right pathway. It can sometimes take a lot of time, and even experimentation.
Ultimately, the experience of treating a person who is non-binary with dysphoria is that of trying to find the right combination of hormones and other treatments to alleviate gender dysphoria. The alleviation of dysphoria removes mental distress, often allowing a person to function better in society.
However, you don’t need dysphoria to be non-binary or trans; sometimes, you don’t realize you have dysphoria, or maybe you feel a sense of “wrongness.” Other times the dysphoria doesn’t register — just the euphoria of finding yourself and being free from the constraints of wearing a mask. | https://medium.com/gender-from-the-trenches/you-dont-need-hormones-to-be-non-binary-246a4a45cec9 | ['Robin Kyrie'] | 2020-10-08 22:38:20.647000+00:00 | ['Gender Identity', 'Mental Health', 'LGBTQ', 'Nonbinary', 'Transgender'] |
Synthetic Content | We have entered the age of fake news and deepfakes. It is more problematic than ever to find a useful piece of information among millions of websites with irrelevant or simply wrong content. How bad can it get and are there any upsides?
This dog doesn’t exist
Fake News vs Synthetic Content
As an AI entrepreneur and a scientist I follow machine learning research on a daily basis. With the recent outcry about fake news and deepfakes, I wanted to test what is really possible if you were to generate an entire website with every piece of content on it by artificial intelligence. The whole process, which I describe below, let me arrive at a concept of a synthetic content, a content which is made purely through AI and machine-generation.
First of all, not all synthetic content is fake news, and vice versa. Secondly, it is almost impossible to determine whether a given piece of content is synthetic, especially if it was generated in a narrow knowledge domain. Hence a basic criterion for evaluating a piece of content should be its quality and whether it's true or not.
If you think about it, synthetic content is not necessarily bad. Imagine synthetic research, new science discoveries made by machines, which would only enrich our civilisation and boost our growth. This is the good side, and this should be a true goal of building complex AI systems.
The bad side is, you can synthetically create fake news, misinformation or spam, on a scale never seen before. And this is what this article is about. We are currently still before the discovery of machine learning 2.0, that is, machine learning combined with logical reasoning, which would allow us to boost our scientific understanding via AI-research or AI-research assistants. However, I believe that we are now able to create new pieces of content, regardless of whether true or false, on a massive scale in a form indistinguishable to human eyes.
To test this hypothesis I decided to use state-of-the-art text and vision machine learning models to create two websites on popular subjects purely automatically. No content was to be added by me or other humans; everything had to come from AI, even the website itself on every single level.
I have chosen to create two separate websites: one about a healthy lifestyle and the other about money, which according to different statistics were among the most popular topics searched on the Internet.
1. WallStreetHack.com
This website was to be about money — earning, saving, insuring, taking a loan. From advice on how to get rich to texts about best loans and mortgage, written by experts in their fields (who don’t exist). Among most googled keywords and highest paid ads are ‘loan’, ‘mortgage’, ‘insurance’, so this choice was obvious.
2. PerfectLifeHack.com
A large part of the Internet is about selling goods, especially those related to being fit and healthy. Beauty products advertised by celebrities, lotions for quick weight loss, powders for growing your muscles, you name it. Skilled affiliate marketers earn millions of dollars through well-played campaigns and ad management. So a website with beauty, health and fitness advice seemed like another obvious choice.
Automating content production
After deciding on what sites I want to create, the rest was about setting a machine to deliver the content I want. The challenge was to make it automatic on every single level. So after I bought those two domains and installed Wordpress manually (but with a little effort that can be automated too), the rest was automated and written in Python. Roughly, it consists of four components:
1. Scraping and organizing the most googled questions in the topics I have chosen.
2. Generating a short text based on a question from the scraped database.
3. Generating an image accompanying the generated short text.
4. Putting it all together and posting it on Wordpress.
Long story short, technical details aside — the project succeeded! You can see the results on those two websites: WallStreetHack.com and PerfectLifeHack.com. And while you skim through those sites, remember that none of this content was written by a human, and none of the people, animals or vegetables depicted in the images exists in the real world. It is all artificial, generated by AI. Judge the results for yourself. Disclaimer: I don't take any responsibility for advice found on those websites. Please don't follow it and find another source of information.
In the end what I got was a perpetuum mobile — a machine for continuous content creation on any topic I want, which needs no further supervision. In other words, a flood of content coming in quantities limited only by computing power. With the limited power I gave it, creating a single blog post with relevant pictures and posting it on a website took up to a couple of minutes. With the simpler models I've tested, and lower quality of text/images, the same process took less than a second. If you were to put the whole machine on a cloud powerful enough, you would be able to create hundreds of new genuine websites with unique human-level content every single hour. That means millions of articles per day if you perform computations in parallel.
Depending on your point of view that’s either fascinating or terrifying. It’s fascinating if you believe that shows how much progress we made and how much more there is to come. It’s terrifying if you are scared about potential malicious use of those algorithms to produce smart spam on enormous scale. For those reasons I have decided not to share what kind of models I have used. Although if you are a researcher in machine learning, you shouldn’t have a problem figuring them out or building similar ones. With time this knowledge will become more widespread and thus we should prepare for the world with synthetic content available on a massive scale. Let us ensure that it will be of good quality and fun to read.
Summing up, if you’d like to discern synthetic content from human-created content, I have bad news. It might not be possible at all. Synthetic content passed the point of distinguishability from human work, and there’s no reason to believe that we would be able to tell a difference between the two. On the other it doesn’t really matter, because what is important is whether a given piece of content brings any value to humanity, whether there’s an original thought or point of view in it. And to this goal AI will provide us with more valuable content, showing novel perspectives and compilations of ideas.
This is the future that is already here. | https://medium.com/swlh/synthetic-content-9cf5838d8e80 | ['Przemek Chojecki'] | 2019-07-05 14:13:35.038000+00:00 | ['Machine Learning', 'Technology', 'Artificial Intelligence', 'Fake News', 'Content Creation'] |
How To Deal With Imposter Syndrome For Self-Taught Developers | “I have had dreams and I’ve had nightmares, but I’ve had conquered my nightmares because of my dreams.” — Jonas Salk
Before we tackle the “how's,” let us first understand why we self-taught developers feel this way. You need to understand that it's not just you: almost everyone feels the same. The first step will be awareness. While most developers know that they are suffering from this very common trap, others are in denial, and you will never overcome anything if you don't admit it to yourself first.
After getting that first or second developer job, have you ever felt like you don’t deserve it?
Why are you even doubting yourself when the company that you are working for chose to invest in you? They know better than you do; they probably see something in you that you don't, and it is your job to prove to them that it was the best decision they made.
I am not an expert on anything, especially on this one, but I am also a developer like you who has had to deal with this imposter syndrome all my life, and I don't think I will ever overcome it. I just have to make sure that my confidence and will are far bigger than my anxiety. If you are an ambitious self-taught developer like me who doesn't include “settle” in their vocabulary, then you know how it feels. You just have to deal with it and face it instead of running away, because that is how you will grow, and that is where you will fly.
Ask yourself often: do you want it to control you, or will you control it? And after all the sacrifices you made just to be where you currently are, would you allow your anxiety to stop you now?
Feed your mind more about your dreams, goals, and ambitions for what you feed your mind you become, a person with a clear goal can overcome anything.
For courage doesn’t always roar, sometimes it is the silent voice that whispers, I will try again tomorrow. | https://medium.com/javascript-in-plain-english/how-to-deal-with-imposter-syndrome-for-self-taught-developers-f490d6a314c0 | ['Ann Adaya'] | 2020-06-25 09:55:21.654000+00:00 | ['JavaScript', 'Software Development', 'Web Development', 'Programming', 'Software Engineering'] |
What It Means to Be Spiritual but Not Religious | Practice openness
Our tendency is to judge things by our past experiences. When faced with something new, try to keep an open and non-judgmental mind. New opportunities to learn and expand your perspective are all around you. Attaining a higher sense of spirituality requires being open to the possibilities.
Learn to listen
Few people really listen. Most of us are simply waiting for our turn to talk. But it’s not just about listening with your ears. It’s about listening with your heart to hear what the world is trying to communicate to you.
See the beauty in people, things, and situations
Part of spirituality involves seeing the truth. There’s beauty to be found in nearly everything and recognizing it is part of recognizing the truth. When you see the truth, you come to realize there are very few things to worry about.
Spend some time in nature
There are few things more spiritual than sitting in nature with the sun on your face and the breeze pressing against your back. Experience the trees, grass, flowers, and birds. The Greek physician Hippocrates said it best, “Nature itself is the best physician.”
Look for the bigger picture
The mere act of wondering about the universe and what it all means is an exercise in spirituality. Consider your purpose and true meaning to the world. What is the greatest gift you have to give?
Spirituality is ultimately about self-discovery
It can be considered the art and science of discovering who and what you really are. American novelist and scholar, Ralph Ellison, famously wrote, “When I discover who I am, I’ll be free.” Renowned English writer, Lewis Carroll, aptly declared, “Who in the world am I? Ah, that’s the great puzzle.”
Stay focused on the present
Living in the past or the future isn’t living. Life can only be lived right now. Part of being spiritual is recognizing that living a positive life today leads to good things tomorrow.
Spend time each day focusing only on the moment you’re currently experiencing. By focusing on your thoughts, words, and actions today, you have the ultimate amount of control over your life.
Love yourself
If you don’t love yourself, how will you ever feel comfortable enough to present your true self to the world? We’ve all done things the wrong way and come up short numerous times, but it doesn’t define us. It merely describes us in a certain situation at a certain time.
Allow yourself to be inspired
To fully learn about yourself, it’s important to experience new things. Meet new people and read new books. Only by being exposed to everything that interests you can you learn everything there is to know about yourself. | https://georgejziogas.medium.com/what-it-means-to-be-spiritual-but-not-religious-7d565b8ca3e0 | ['George J. Ziogas'] | 2020-06-07 21:31:33.044000+00:00 | ['Religion', 'Self Improvement', 'Personal Development', 'Spirituality', 'Psychology'] |
Vite Bi-weekly Report | Project Updates
Vite App
Vite Android App v3.8.0 is expected to be released in early December, with a major reorg of the asset page and the app.
The native BTC wallet will be supported in Vite iOS App v3.9.0, with which users can store native BTC, ETH, and Vite with the same mnemonic phrase in the Vite app. Vite iOS App v3.9.0 will be released in December.
Developer Tools
To help developers debug and deploy smart contracts on Vite, the Soliditypp VSCode extension has been upgraded, with the following updates:
One-click contract deployment. Developers do not need to write additional deployment scripts any more;
Multi-network support. Choose the local dev environment, the Vite testnet, or the Vite mainnet to deploy your contracts;
New browser UI;
ViteConnect supported when deploying onto the Vite mainnet.
Installation link: https://marketplace.visualstudio.com/items?itemName=ViteLabs.soliditypp
Recent Milestones
Vite Developer Committee to establish soon
On Nov 4, Vite CPO Blackey Hou announced on the Vite community livestream AMA that Vite Labs will provide at least 500,000 VITE every month to reward software development by community members.
The Vite Labs foundation will set up a Developer Committee responsible for collecting suggestions from the community, creating plans for development, deciding the allocation of budgets, and supervising project outcomes. Members of this Committee will be drawn mostly from the Vite community, following the decentralized and global ethos of the Vite project. The Committee will be organized around the principles of fairness and transparency.
Look out for more information on how to join the Vite Developer Committee!
ViteX Exchange
On Nov 6, the ViteX operator VGATE listed Idena (IDNA), and opened the IDNA/BTC trading pair.
Idena is a blockchain that proves a user’s human identity by running an AI-resistant test, without using personally identifiable information. The Idena blockchain uses a Proof-of-Person (PoP) consensus.
ViteX rules for suspending the display of certain trading pairs
On Nov 4, ViteX released rules for hiding certain trading pairs. The ViteX website and app will no longer display non-active trading pairs. As mentioned in the previous biweekly report, relevant assets are still safe and custodied via users’ own private keys.
Per our communication with gateway operators, we released the first list of hidden trading pairs: https://forum.vite.net/topic/4315/ann-the-first-batch-of-hidden-trading-pairs
For more information, see this announcement of decision to hide trading pairs: https://forum.vite.net/topic/4296/announcement-rules-for-hiding-trading-pairs
ViteX Data
Activities
Chinese Community AMA
On October 21, Vite CPO Blackey Hou ran a livestream AMA in the Chinese community.
The replay: https://www.yizhibo.com/l/XWpTDJaNkd3TNHJg.html
Futurist Conference Discussion
On Oct 11, COO Richard represented Vite Labs in Futurist Conference, where he discussed blockchain scalability, security and decentralization with Evan Shapiro, CEO of Mina (previously Coda).
The replay: https://youtu.be/dgc4iwT4KEg?t=31358
The Blockchain Debate Podcast
Vite Labs COO’s Richard Yan will be releasing a new episode of The Blockchain Debate Podcast. The topic for this episode is about the future of Tether. As the first stablecoin, Tether provides the blockchain ecosystem with a solution in pegging crypto value to fiat. Tether has also been the target of multiple lawsuits (under-collateralization, bitcoin price manipulation). The guests on this episode are Matthew Graham, CEO of Sino Global Capital, and CasPiancey, an independent investigator and reporter of Tether and other crypto projects.
This podcast is a way for Vite Labs to stay connected with thought leaders in the crypto space. All previous episodes are available here: https://blockdebate.buzzsprout.com
You are welcome to follow Richard here: https://www.twitter.com/gentso09
You are welcome to follow the show here: https://www.twitter.com/blockdebate | https://medium.com/vitelabs/vite-bi-weekly-report-2b50ce3ba828 | ['Allen Liu'] | 2020-11-17 15:59:17.301000+00:00 | ['Project Updates', 'Blockchain', 'Development', 'Decentralized Exchange', 'Vite'] |
The project „Conştientizarea şi educarea tinerilor privind problematicile de protecţie a mediului şi dezvoltare durabilă” (Raising Awareness and Educating Young People on Environmental Protection and Sustainable Development Issues) launches in Galați and Bacău | IRICE is dedicated to promoting the international relations of Romania and to developing the economic, cultural and diplomatic relations with nations of the world.
| https://medium.com/revista-de-politica-externa/proiectul-con%C5%9Ftientizarea-%C5%9Fi-educarea-tinerilor-privind-problematicile-de-protec%C5%A3ie-a-mediului-%C5%9Fi-8b0201c355f2 | ['International Relations'] | 2017-01-25 15:11:58.844000+00:00 | ['Economia Romaniei', 'Economy', 'Ecology', 'Environment', 'Educatie']
Why Most People Never Achieve What They Want? | Here are some of the reasons that stop people from achieving what they want:
Lack of motivation
It’s the motivation behind what someone wants to achieve that forces us to strive for it. You work hard, you take your time out, you manage different things side-by-side.
Whatever we do to reach our milestones, there’s the motivation behind that.
Generally, motivation comes from wants or desires that develop under the influence of many factors like lifestyle, society, culture, etc. These are the real forces that motivate us to do something to achieve our goals and push us towards action.
Not organizing life
It’s well-known that businesses, companies, and startups have complete plans, strategies, and drafts for the projects they want to work on.
Why is that? Because organizing things helps us to focus, prioritize, and work on our goals in a clear way.
Similarly, keeping yourself organized increases productivity. By organizing your life and goals, you can see what you have worked on, where you are stuck, and what you have achieved so far.
It helps us to remain disciplined which is really important to chase our dreams in life.
Overdependence on something
As human beings, we need support and help from the people around us to achieve something. A child needs the support of his parents to grow. A student needs the help of their teachers to excel in education. Similarly, we all depend to some extent on certain people in our lives.
Getting help is good, and there's nothing wrong with it. But sometimes, overdependence on situations, people, and certain things hinders our progress and stops the journey towards success.
You might have heard stories of numerous successful people in the world who started with uncertain conditions in life.
What if they’d have excused themselves due to their situation? It’d have stopped them from working on their goals and achieving everything they later did. | https://medium.com/illumination/why-most-people-never-achieve-what-they-want-ac55dee01539 | ['Saeed Ahmad'] | 2020-12-28 06:16:09.267000+00:00 | ['Success', 'Life', 'Motivation', 'Self Improvement', 'Life Lessons'] |
Regenerative Cities in their Bioregional Context | Regenerative Cities in their Bioregional Context
A conversation between Herbert Girardet and Daniel Wahl
On July 13th, I had a fascinating conversation with Herbert Girardet. His is a long-term member of the Club of Rome and the World Future Council, who has worked as a documentary film-maker for decades, is the author of many books spanning 4 decades, and one of the world’s leading thinker-practitioners on regenerative cities. His book ‘Creating Regenerative Cities’ came out in 2014.
We start by remembering when we last met at the NOW Assembly in Delphi where we were joined my Kenny Young who recently passed away and was a long term friend of Herbie’s. They co-founded ‘Artists Project Earth’ together to help fund environmental projects around the world through the music industry.
Here is Herbie’s obituary of his friend Kenny Young, who I had the pleasure to meet and spend three days with last October:
I invited Herbie to reflect on his 45 year journey from being a student at the London School of Economics in the 70s to becoming an environmentalist, meeting Satish Kumar and contributing to Resurgence, working on a TV series with the ‘self-sufficiency guru’ John Seymour on the human impact on the planet called ‘Far from Paradise’ that documented the increasing degradation of the Earth through human intervention.
We talk about the shift from hunter and gatherer societies to agricultural societies, cities and empires. We also address how many indigenous people around the world actively engaged in a subtle form of ecosystems management or being abundance generating key stone species in the bioregions they inhabited.
Summary of what we talked about (listen to recording below):
Herbie reflects on his work with the Kayapo in the Amazon. We talk about Terra Preta and biochar.
(min 14) We start talking about Herbie’s work in London and on the ecological footprint of London which he published in 2002 as a ‘Schumacher Briefing’.
(min 19) Herbie speaks about the impact of the city of Rome on the ecosystems around the Mediterranean basin and how Rome denuded North Africa.
(min 26) We speak about Herbie's work as ‘Thinker in Residence’ at the city of Adelaide and his recommendations leading to the city becoming one of the best examples of city redesign. See:
(min 33) I asked Herbie about the balance between re-regionalisation and maintaining global collaboration in bioregional transitions everywhere. …
Herbie describes how his work on the ecological footprint in London made him realise that relocalising food production has to distinguish between the production of horticultural vegetables and grain production — the latter impacting on vast areas beyond a city’s boundaries.
(min 39) We talk about the relationship between cities and their bioregions. Herbert's book ‘Creating Regenerative Cities’ offers 20 case studies of cities to learn from. “By and large, the understanding that there is a conceptual systemic problem between an urbanising world and the ever greater impacts on the global environment by the resource demand patterns of cities [needs to grow].”
(min 45) we briefly talk about the work of Sir Patrick Geddes … and my work on regenerative cultures … “Just looking at the physical metabolism of cities is only one aspect of the story we are talking about, which needs to be internalised deeply within the cultural context in which we live”
(min 47) Herbert starts talking about the new initiatives within the Club of Rome focussed on cultural change — reconciliation between humanity and “the environment”.
(min 53) … the dual path of ecosystems restoration as a way to build resilience to climate change and to contribute to mitigating (and possibly reversing) it …
(min 57) … local resilience and a return to local food as a widespread response to the pandemic …
(min 1:09) … is capitalism structurally dysfunctional or can it be redesigned to incentivise regenerative resource use?
(min 1:22) we talk about re-localisation again and have a brief chat about maybe getting Herbie involved in advising the city of Palma in the context of the island of Mallorca, based on his experience in Adelaide … and I briefly mention some of the more recent developments on Mallorca aiming at creating a large-scale integrated marine and terrestrial ecosystems restoration project within the context of re-inventing the bioregional economy in a way that makes it less dependent on tourism and a potential example of bioregional regeneration.
More about Herbert Girardet:
Here is the recording of our conversation:
(Warning: unfortunately the recording has some kind of interference every time Herbie speaks — a slight hissing in the background — but he can be understood well.)
—
If you like the post, please clap AND remember that you can clap up to 50 times if you like it a lot ;-)!
Daniel Christian Wahl — Catalyzing transformative innovation in the face of converging crises, advising on regenerative whole systems design, regenerative leadership, and education for regenerative development and bioregional regeneration.
Author of the internationally acclaimed book Designing Regenerative Cultures
Please consider supporting my ongoing work by becoming a patron: | https://medium.com/activate-the-future/regenerative-cities-in-their-bioregional-context-8259d8d65d8e | ['Daniel Christian Wahl'] | 2020-08-09 13:01:26.882000+00:00 | ['Cities', 'Sustainability', 'Urban Planning', 'Culture', 'Bioregionalism'] |
Scaling SignalR Core Web Applications With Kubernetes | Signal R with ASP.Net Core is an open-source library providing real-time communications between the client and the server. With just a couple of lines of code, we can easily add this capability to any asp.net core web application and leverage a powerful feature-set.
In this blog, I’ll use a Microk8s cluster running locally to deploy the app, but the files and commands should work with any cluster without requiring too many changes. The full source code for the example described in this blog can be found here.
The application we’ll be working with is a simple .Net core web app with a Signal R hub and an Angular frontend. The frontend is a simple chat application exchanging messages with all connected clients.
Let’s deploy it and see it work. Pull down the source code and navigate to the Kube folder on a terminal window. I already have the images available publicly so it should be easy to fire it up in our cluster. Before you get started, make sure you have the correct context set,
kubectl config current-context
This should return the context you are on currently. If you are not on the correct context, then you can set it using the command,
kubectl config use-context enter-context-name
Once on the correct context, let's create a namespace where all our resources will go:
kubectl create namespace signalrredis
Once the namespace is created, let's apply the secret yml, which isn't actually used right now but will be needed in later steps:
kubectl apply -f secret.yml --namespace signalrredis
Now, to run the deployment,
kubectl apply -f deployment.yml --namespace signalrredis
The deployment should create a single pod. We can now create the service (alternatively, use -n as a shortcut for --namespace):
kubectl apply -f service.yml --namespace signalrredis
We can also create an ingress to access the app, (make sure to enable ingress on your microk8s cluster or on minikube).
kubectl apply -f ingress.yml --namespace signalrredis
Once the ingress is created, we now need to update our hosts file with the cluster’s IP and map it to the ingress host (signalrredis.local). With microk8s, the IP would just be 127.0.0.1. With minikube, you can find the cluster’s ip by using the command
minikube ip
Once you have the IP, update the hosts file by mapping the local address to the ip. On windows the hosts file is located at %systemroot%\system32\drivers\etc\hosts. On Linux, it is located at /etc/hosts.
Map the IP to the local address (signalrredis.local)
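On microk8s, the resulting hosts entry would look like this (one line, IP first, then the hostname):

```
127.0.0.1    signalrredis.local
```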
Once the mapping is added, the app can now be accessed from a browser using the local host name (http://signarredis.local). Create multiple instances to see it work,
The messages should be exchanged successfully, and the pod that sent the message should also be displayed. Since we have only one pod, it's the same name that's displayed in both instances.
Great! Now let's scale it by re-deploying with a higher pod count. Change the replicas property in the deployment.yml file to 10:
...
  app: signalrredis
spec:
  replicas: 10
...
Now, let's re-deploy:
kubectl apply -f deployment.yml --namespace signalrredis
After the pods have been re-deployed, let's test the app again.
So what didn't happen? When we scaled, we ended up creating multiple hubs, but since each client connects to a single hub, messages are only propagated to the clients connected to that hub. Any client connected to any other hub will not receive the message, since the hubs do not talk to each other. If multiple browser instances happen to connect to the same pod, the solution might appear to work, but the moment a connection is made to a different pod, the messaging stops working because that client is connected to a different hub.
To fix this, we need a backplane, which will enable the hubs to communicate with each other. We can use an instance of redis as the backplane. It only requires a single code change to our code base in the startup.cs file,
services.AddSignalR().AddStackExchangeRedis("<redis_conn_str>");
The sample application already has this code change, and we can enable the backplane by setting the env variable RedisConfig__UseAsBackplane to true in the deployment.yml, so let's change it:
...
containers:
  - name: signalrredis
    image: ashwin027/signalrredis:latest
    env:
      - name: RedisConfig__UseAsBackplane
        value: "true"
...
But before we run this, we need redis to be running in the cluster. To install redis, we’ll use helm. The instructions on installing the helm CLI can be found here. Once helm is installed, we need to enable it in the cluster.
On microk8s, enable the addon along with the storage addon (required for redis) using the command,
microk8s enable helm3 storage
On minikube,
minikube addons enable helm-tiller
Once we have helm enabled, while in the Kube folder in a terminal window, run the command below to get redis running:
helm upgrade sigredis ./redis/ --install --namespace signalrredis
Note: with Microk8s, if you haven't merged your microk8s config into your kubeconfig, then use the command:
microk8s.helm3 upgrade sigredis ./redis/ --install --namespace signalrredis
After the helm install, ensure that the pods for redis are up and running using either the kubernetes dashboard or lens.
Now that we have redis working, let's get the password for the redis cluster:
kubectl get secret --namespace signalrredis sigredis -o jsonpath="{.data.redis-password}"
This should output a base64-encoded password that needs to be copied over to the secret.yml file in your kube folder:
apiVersion: v1
kind: Secret
metadata:
  name: redispassword
data:
  redispassword: PASTE_PASSWORD_HERE
type: Opaque
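If you ever need to produce such a value by hand, for example to seed the Secret with a password of your own, base64-encode it first (the password here is made up for illustration):

```shell
# base64-encode a plaintext password for use in a Secret's data field
echo -n 's3cr3t' | base64
# → czNjcjN0
```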
Once the secret yml file has been updated with the password, let's apply it:
kubectl apply -f secret.yml --namespace signalrredis
We can now run the deployment again with the redis backplane flag set to true,
kubectl apply -f deployment.yml --namespace signalrredis
Once all the pods are deployed, we can test the app again,
When a message is submitted, you can see the pod name change in each browser showing you which pod the message came from. Our backplane is now fully functional!
If you have further questions on the topic, feedback on the article or just want to say hi you can hit me up on twitter or linkedin. | https://medium.com/swlh/scaling-signalr-core-web-applications-with-kubernetes-fca32d787c7d | ['Ashwin Kumar'] | 2020-09-08 19:45:05.678000+00:00 | ['Dotnet Core', 'Angular', 'Signalr', 'Aspnetcore', 'Kubernetes'] |
Kubernetes Authentication & Authorization 101 | Kubernetes Authentication & Authorization 101
How do you login in Kubernetes clusters? How can you grant permissions to your apps?
If we want to build a system with user modules, authentication and authorization are things we can never ignore, even though they can be fuzzy to understand.
Kubernetes Authentication & Authorization, by author
Authentication (from Greek: αὐθεντικός authentikos, “real, genuine”, from αὐθέντης authentes, “author”) is the act of proving an assertion, such as the identity of a computer system user — from wiki Authorization is the function of specifying access rights/privileges to resources, which is related to general information security and computer security, and to access control in particular. — from wiki
You can simply conclude into two points.
Who are you? Authentication enables users to log in to the system correctly.
What can you do? Authorization grants proper permissions to users.
This article will decrypt Kubernetes Authentication and Authorization, hoping you will no longer be puzzled by the following questions.
What are the users on Kubernetes?
How to verify user identity?
What is RBAC?
How to setup RBAC for users?
Users in Kubernetes
As the Kubernetes gateway, the APIServer is the entrance through which users access and manage resource objects. Every access request goes through a legitimacy check, including verification of identity and of resource operation authority, and the access result is returned only after this series of verifications passes.
User Authentication, by author
Users can access the API through kubectl commands, SDKs, or by sending REST requests. User and Service Account are two different ways to access the API.
Ordinary Users
There is no built-in user resource type in Kubernetes, so ordinary users cannot be stored in etcd the way other resources are. Thus Kubernetes delegates the authentication of ordinary users to client certs or other third-party user management systems, e.g., Google Accounts.
The key here is to find a secure way to help ordinary users access Kubernetes resources with kubectl or rest API.
There are couples of ways to authenticate an ordinary user:
Client-side X509 Client Certs
HTTP basic authentication
Bearer token
Three ways, by author
X509 Client Certs
To set up a client-side cert using OpenSSL:
# generate key
$ (umask 077; openssl genrsa -out testuser.key 2048)
Generating RSA private key, 2048 bit long modulus
.............+++
...+++
e is 65537 (0x10001)

# generate a CSR (O: cluster name, CN: username)
$ openssl req -new -key testuser.key -out testuser.csr -subj "/O=testcluster/CN=testuser"

# sign the cert
$ openssl x509 -req -in testuser.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out testuser.crt -days 365
After generating the private key and certificate, we now need to set them in the kubeconfig.
PS: Here, I used OpenSSL to generate credentials. You can also use cfssl. Official documentation here.
What’s kubeconfig?
The kubectl command supports all the above authentication methods. It uses a kubeconfig configuration file, which is often(default) stored in ~/.kube/config, to keep the communication method with APIServer and the data required for authentication.
The file mainly contains the following items.
Clusters. The cluster list, including the URL to access the API Server and the name of the cluster it belongs to.
Users. The user list, including user names and authentication information for accessing the API Server.
Contexts. Kubelet's usable context list; each context combines a specific user name from the user list with a particular cluster name from the cluster list.
Current-context. The context name currently used by Kubelet, a specific item in the context list.
Next, add the created client certs to kubeconfig .
# set kube config
$ kubectl config set-cluster testcluster --kubeconfig=testuser --certificate-authority=ca.crt --embed-certs=true

# view cluster config
$ kubectl config view --kubeconfig=testuser
The kubectl config set-cluster command is very important here, and we'll use it in other solutions as well; here is the doc.
The next step is to set client credentials.
$ kubectl config set-credentials testuser --client-certificate=testuser.crt --client-key=testuser.key --embed-certs=true --kubeconfig=testuser
The output of the kubeconfig is like this
The final step is using the context.
$ kubectl config use-context testuser@testcluster --kubeconfig=testuser
Up to this point, you still can't get anything, because the authorization is not finished yet. I'll show you how to grant permissions to users in the section below.
Bearer token
Bearer token is a static token verification method. To enable it, you need to start the APIServer with --token-auth-file=authfile.
The authfile format looks like password,user,uid,"group1,group2", with each line representing one user.
There are two ways to use Bearer token.
Use HTTP header set
curl -X "POST" "https://{kubernetes API IP}:{kubernetes API Port}/api/v1/namespaces/{namespace}/serviceaccounts/{name}/token" \
-H 'Authorization: Bearer {bearer token}' \
-H 'Content-Type: application/json; charset=utf-8' -d $'{}'
Use kubeconfig
# set your token in the kubeconfig
$ kubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile] [--token=bearer_token] [--username=basic_user] [--password=basic_password]

# use the context
$ kubectl config use-context NAME
For more about Bearer Token, check here.
HTTP Login
It's basically a username and password login method. To enable it, you need to start the APIServer with --basic-auth-file=authfile.
The authfile here is just like the one for the Bearer token. Using it requires the HTTP client to add Authorization: Basic BASE64ENCODED(USER:PASSWORD) to the request header to perform HTTP basic identity authentication, where BASE64ENCODED(USER:PASSWORD) is the base64 encoding of USER:PASSWORD. On receiving the request, the APIServer judges whether the username and password are correct according to the authfile.
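As a sketch, building that header value in Python (the username and password here are made up for illustration):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the value of the HTTP Authorization header for basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("admin", "secret"))  # Basic YWRtaW46c2VjcmV0
```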
I won't expand here, since the HTTP login method was deprecated in 1.16 and removed in 1.19.
Normally, it is recommended to use client certs as admin login methods, and other users generally log in to access the cluster through the Cloud provider authentication method instead.
How does this actually happen?
The validation code is in kubectl.
The first step is to find the auth params and build an env exec.
Then kubectl verifies all the token/key against the cluster using client-go API and makes sure users have the permissions.
Service Account
Service Account, different from ordinary users, is one of the resources managed by Kubernetes. It can be created via the API, contains a set of secrets, is stored in etcd, and is usually assigned to a namespace.
Service Account is managed by Kubernetes API.
Service Account applies to applications (pods) running inside the cluster.
Service Account accesses API through bearer token authentication. It is very easy to set up a Service Account through YAML.
Service Account YAML
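A minimal example (the name and namespace are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
  namespace: test
```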
Authorization
Authorization defines what you can do after login.
In traditional web systems, user management has three major parts, User, Role, Permission, which are stored in relational databases, and between which there are many-to-many mappings.
It is the same in Kubernetes, just with different terms. In the Cloud world people now use ServiceAccount, Role, and RoleBinding, all stored in etcd. So I believe it's safe to say Kubernetes' authorization is not that hard to understand.
But if you think Kubernetes is merely rehashing the old model, then you are wrong. RBAC (Role-Based Access Control) is the Kubernetes authorization mechanism, and you can tell from its name that there is a shift from user-oriented to role-oriented.
RBAC from CNCF
In Kubernetes architecture, one of the biggest advantages is the decoupling of different resources. Every resource type is independent, and they only communicate via APIServer. So API objects for different resources such as /pod/create, /service/create have become the new type of permissions, which are the real asset here.
Then it's the developers' job to organize all the APIs, assign them to different roles, and finally grant these roles to ServiceAccounts (users) using RoleBindings.
People believe that Kubernetes’ plugin resources design brings significant advantages.
It brings flexibility to the whole Kubernetes ecosystem, being able to add new resources without compromising its original authorization mechanism.
to the whole Kubernetes ecosystem, being able to add new resources without compromising its original authorization mechanism. It decouples the fragmented relationship between permissions and users, centralizing in creating a role character.
We’ll continue the introduction of RBAC below.
RBAC
RBAC belongs to rbac.authorization.k8s.io API Group, which became beta in 1.6 and went GA in 1.8, and had brought great security improvement to Kubernetes back then.
You need to set --authorization-mode=Node,RBAC in the APIServer configuration to enable the dynamic RBAC function.
Kubernetes uses namespaces to separate resource ownership, except for cluster-wide resources. So RBAC is also split into two scopes, cluster-wide and namespace-wide. Furthermore, non-resource endpoints in Kubernetes, which do not support namespaces, can only be set up cluster-wide, such as /healthz.
ClusterRole and ClusterRolebinding are used for cluster-level resources. On the other hand, Role and Rolebinding correspond to resources in the namespace.
There are four major ingredients in a RoleBinding/ClusterRoleBinding:
API resources
API groups
Subjects
Verbs
RoleBinding ingredients
Kubernetes groups granular API resources into various API groups. For example, Deployment belongs to apps, while CronJob belongs to the batch group.
You can simply find the information in its YAML by running the following command:
kubectl create namespace test --dry-run -o yaml | cat
The output:
Let's try configuring a Role for Pods and a ClusterRole for events, each of which only has some read permissions.
It is quite easy to understand that both Role and ClusterRole need three major elements to build a rule: apiGroups, resources, and verbs. Here we only add Pod as an example. You can add different types under the same apiGroups to one rule if you want them to have the same permissions; of course, you can also separate them into different rules.
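A sketch of what such a Role and ClusterRole could look like (the names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader      # illustrative name
  namespace: test
rules:
- apiGroups: [""]       # "" is the core API group, where Pod lives
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-reader    # illustrative name
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch"]
```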
Among the multiple types of users mentioned in the image, I recommend the ServiceAccount approach, so this article focuses on that approach. However, for completeness, the article briefly describes the alternative approaches.
Now, let’s define a RoleBinding to combine ServiceAccount with Role , and for ClusterRole , it is ClusterRoleBinding . This determines what a ServiceAccount(user) can do.
The example binds the ServiceAccount name default in the test namespace and testuser (we defined in the User section using client-side certs) to a Role and a ClusterRole , so the ServiceAccount can read pod information in the test namespace and events for the whole cluster.
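A RoleBinding along these lines could look like the sketch below, assuming a namespaced Role named pod-reader already exists in the test namespace (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: test
subjects:
- kind: ServiceAccount
  name: default
  namespace: test
- kind: User
  name: testuser
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRoleBinding for the events ClusterRole follows the same shape, just without a namespace in its metadata.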
Aggregated ClusterRole
A new function for aggregating ClusterRoles was introduced in v1.9 (GA in v1.11).
AggregatedClusterRole from CNCF
The configuration is as follows.
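The mechanism works roughly like this: a parent ClusterRole declares label selectors, and the control plane automatically merges in the rules of every ClusterRole carrying a matching label (the name and label below are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring      # illustrative name
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: []  # filled in automatically from the matching ClusterRoles
```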
This is basically a simplified configuration, allowing users to group similar ClusterRoles together and then match them to ServiceAccounts through an aggregated ClusterRole. That is really a "nice to have" functionality!
Embed ClusterRoles
Kubernetes already has many built-in ClusterRoles; since ClusterRoles are cluster-scoped, you can list them all with:
kubectl get clusterroles
Anything that starts with system: is a built-in ClusterRole.
Webhook
There is also a special use case: if you use CRDs, you can define an authorization webhook extension to verify permissions. Point the APIServer at a kubeconfig-format file like auth.yaml by adding the --authorization-webhook-config-file=auth.yaml flag.
If you want more information, please refer to Kubernetes RBAC official doc.
In Summary
The authentication and authorization of Kubernetes may be more complicated than we think, especially when it comes to ordinary users' login and authorization. But ServiceAccount is an excellent design that allows us to manage permissions for the programs running in Pods with flexibility and security.
Resources like Pod, Deployment, and ConfigMap are always the core assets of Kubernetes, and everything works around them. We understand various resource operations and the APIServer design better when learning RBAC.
For application developers, understanding how to use RBAC is essential, especially for most users who use Kubernetes through various cloud providers(GKE, EKS). RBAC and cloud providers’ IAM management have similarities.
For the features and differences between GCP IAM and Kubernetes RBAC, please refer to GCP IAM Authentication & Authorization 101.
Thanks for reading! | https://medium.com/swlh/kubernetes-authentication-authorization-101-stefanie-lai-15080f64bcee | ['Stefanie Lai'] | 2020-12-20 16:35:52.982000+00:00 | ['Authorization', 'Kubernetes', 'Authentication'] |
Using OpenStreetMap tiles for Machine Learning | Using OpenStreetMap tiles for Machine Learning
Extract features automatically using convolutional networks
Performance of the network when predicting the population of a given tile
OpenStreetMap is an incredible data source. The collective effort of 1000s of volunteers has created a rich set of information that covers almost every location on the planet.
There are a large number of problems where information from the map could be helpful:
city planning, characterizing the features of a neighborhood
researching land usage, public transit infrastructure
identifying suitable locations for marketing campaigns
identifying crime and traffic hotspots
However, for each individual problem, there is a significant amount of thought that needs to go into deciding how to transform the data used to make the map into features which are useful for the task at hand.
An alternative to this manual feature engineering approach would be to use convolutional networks on the rendered map tiles.
How could convolutional networks be used?
If there is a strong enough relationship between the map tile images and the response variable, a convolutional network may be able to learn the visual components of the map tiles that are helpful for each problem. The designers of the OpenStreetMap have done a great job of making sure the map rendering exposes as much information as our visual system can comprehend. And convolutional networks have proven very capable of mimicking the performance of the visual system — so it’s feasible a convolutional network could learn which features to extract from the images — something that would be time consuming to program for each specific problem domain.
Testing the hypothesis
To test whether convolutional networks can learn useful features from map tiles, I've chosen a simple test problem: estimate the population of a given map tile. The USA census provides data on population numbers at the census tract level, and we can use the populations of the tracts to approximate the populations of map tiles.
The steps involved:
1. Download population data at the census tract level from the Census Bureau.
2. For a given zoom level, identify the OpenStreetMap tiles which intersect with 1 or more census tracts.
3. Download the tiles from a local instance of OpenMapTiles from MapTiler.
4. Sum the population of the tracts inside each tile, and add the fractional populations for tracts that partially intersect with the tile.
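Step 2 relies on the standard slippy-map tiling scheme that OpenStreetMap uses. Converting a tract's latitude/longitude to tile coordinates at a given zoom level follows the well-known formula from the OSM wiki (a sketch):

```python
import math

def deg2tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple:
    """Convert WGS84 lat/lon to OSM slippy-map tile (x, y) at a zoom level."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return (x, y)

print(deg2tile(37.7749, -122.4194, 10))  # San Francisco → (163, 395)
```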
Visualizing the census tracts which overlap with the 3 example tiles
This gives us:
Input X : an RGB bitmap representation of the OpenStreetMap tile
: an RGB bitmap representation of the OpenStreetMap tile Target Y: an estimated population of the tile
To re-iterate, the only information used by the network to predict the population are the RGB values of the OpenStreetMap tiles.
For this experiment I generated a dataset for California tiles and tracts, but the same process can be done for every US state.
Model training and performance
By using a simplified Densenet architecture, and minimizing the mean-squared error on the log scale, the network achieves the following cross-validation performance after a few epochs:
The squared error of 0.45 is an improvement on the 0.85 which you would get if you just guess the mean population each time. This equates to a mean-absolute error of 0.51 on the log-scale for each tile. So the prediction tends to be of the right order of magnitude, but off by a factor of 3X (we haven’t done anything to optimize performance, so this isn’t a bad start).
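To unpack that last number: assuming the log scale is base 10 (which matches the 3X figure), an absolute error of 0.51 in log space corresponds to a multiplicative error of 10^0.51:

```python
# a mean-absolute error of 0.51 in log10 space is a multiplicative factor
factor = 10 ** 0.51
print(round(factor, 2))  # 3.24, i.e. predictions are typically within ~3x of the truth
```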
Summary
In the example of estimating population there is enough information in OpenStreetMap tiles to significantly outperform a naive estimator of population.
For problems with a strong enough signal, OpenStreetMap tiles can be used as a data source without the need for manual feature engineering.
Credits: | https://towardsdatascience.com/using-openstreetmap-tiles-for-machine-learning-4a3e41bb3ea6 | ['Robert Kyle'] | 2019-01-31 15:04:19.746000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'GIS', 'Data Science', 'Maps'] |
Taking Memories and Time for Granted | We do not realize that every moment we share with special people will be a moment we will crave in the future. Life is short. Good times become memories. And they are not like the movies. We do not have literal flashbacks, so every touch, smell, feeling, facial expression, and picture matter.
Sometimes we forget memories, but they always linger in our subconscious and our dreams, then we wake up and smile, we remember that time that we have shared.
We take the time that we have right now for granted. We are so focused on the issues of the world that we forget to make the most of the time we have. Time slips through our hands like sand on the beach. We forget that every breath we take is a breath borrowed from our creator. Memories shape the life we are living right now.
Getting up and doing something with that someone feels like a daily task until we look back again and think about how much we would give to go back to that moment in time.
Some of us crave to go back to simpler times or times where our hearts smiled, but at that moment, we did not know how cherishable it was.
Love is strong. You can find it in any memory, I am sure. Even the ones that were heartbreaking to the point where you could feel your chest hurt. You will find yourself wanting to go back to a painful place to feel that love you felt in that very moment.
Why is it? Why does the moment you are currently living have such vague meaning? Why is that same moment so valuable down the road?
We know nothing of the good old days until we are looking back at them. The truth is that you are in one right now. You are in the next phase of those good old days that will shape the rest of your life. You will not see the good old days until you think you are out of them.
My heart weighs heavy on this subject. I cannot get over the feeling of feeling yesterday. It is a never-ending cycle that keeps repeating itself into infinity. You look at the simplest things that take you back to those times.
Boost Your Influence Using These 6 Keys From Forbes Expert Josh Steimle | Boost Your Influence Using These 6 Keys From Forbes Expert Josh Steimle
Being influential doesn’t have to feel like a dark art.
Photo by Markus Spiske on Unsplash
Without influence, no matter how great your ideas, how talented you are, or how hard you work — no one will notice.
The bad news? You can be the smartest person in the room; without exerting influence over your peers and decision-makers, you won’t get any traction and stay stuck in first gear.
The good news? You can become more influential and accelerate your outcomes with the right approach.
How can you do so? By implementing 6 keys, which, according to influence master Josh Steimle, will multiply your impact.
Josh is widely recognized as an expert in this field, being a regular top contributor to Forbes Magazine, Inc. and many others.
He uses a specific framework built around the below elements to maximize influence. | https://medium.com/big-self-society/boost-your-influence-by-using-these-6-keys-from-forbes-expert-josh-steimle-5997caff9747 | ['Clément Bourcart'] | 2020-12-29 14:26:40.441000+00:00 | ['Marketing', 'Leadership', 'Self Improvement', 'Life', 'Inspiration'] |
InfoTimes’ Data Lab Combines Journalism and Technology | InfoTimes’s Data Lab
CAIRO- InfoTimes, the first Arabic website specialized in data journalism, has just launched a new training program called “Data Lab” combining data journalism and technology in cooperation with Arab Digital Expression Foundation.
Data Lab aims to provide interested data journalists and software developers with the skills and knowledge needed to use data more professionally and easily.
“Data Lab is trying to find topics that fill the gap between journalists and software developers and to create a common framework that can help them to work together,” said Amr Eleraqi, a journalist, mentor and the founder of InfoTimes.
The lab, which will start next month, includes 8 sessions on data handling, storing, cleaning, and data quality measurement. It will also cover the basics of the following languages in the context of online publishing: HTML, CSS, PHP, Python and R. In addition, it will explore the various online media tools most useful to journalism and the best methods for writing professional data-driven stories.
“Data journalists spend hours and days collecting and analyzing files and documents and filling excel sheets, while learning Python can help to do the same task in less than a second,” said Eleraqi adding that every data journalist should be aware of the basics of programming language which are often available online for free.
At the Lab, there will be sessions on easy web development platforms like Wordpress to encourage journalists to post their work individually. The training will cover also data visualization techniques including advanced topics for developers like interactive visualization using D3.js.
At the end of the training, all participants will be able to write some programming code, develop web pages and design websites with responsive interfaces.
To know more about how to participate, click here. | https://medium.com/info-times/infotimes-data-lab-combines-journalism-and-technology-287aa2e9e816 | [] | 2016-12-19 15:53:29.377000+00:00 | ['Journalism', 'Infotimes News', 'Data Journalism'] |
How to Setup Google Analytics Correctly | How to Setup Google Analytics Correctly
A Step-By-Step Tutorial
According to a 2017 survey by Clutch, about 71% of small businesses in the U.S. have a website and understand the value of having a digital presence.
However, having a website is not enough — you also need to make sure that your website is operated and designed in a way that brings in customers and create value for your business. That’s where Google Analytics comes in.
As a free and powerful web analytics tool, Google Analytics is more popular than ever. In fact, W3Techs estimates that about 53% of all websites today use it — and so should you.
To use Google Analytics, you must set it up correctly, and we have found it to be one of the biggest obstacles preventing small and medium businesses from using the tool. This guide is designed to provide a step-by-step for you to alleviate this pain.
The Battle Plan
Let’s begin with an overview of the major steps that we are going to take.
First, we are going to set up Google Tag Manager on our website.
on our website. Then, we are going to set up Universal Analytics (the newest version of Google Analytics) through Google Tag Manager . This will enable Google Analytics to track all pageviews on your website.
. This will enable Google Analytics to track all pageviews on your website. Finally, we are going to go back to Google Analytics to set up “goals” and “views.” This will make sure that the data you track is not only accurate but meaningful for your business.
One of the biggest differences between our approach and a conventional Google Analytics setup approach is we stress the importance of setting up Google Tag Manager right from the start.
Google Tag Manager is a free “tag management” solution offered by Google that essentially serves as a “data broker” on your website.
It takes all your website data, and send it to different services such as Google Analytics, Facebook Analytics, and beyond.
We strongly prefer the Google Tag Manager approach for two primary reasons:
1. If you set up Google Tag Manager, it will be the only time that you touch the codebase of your website. In the future, whether you want to implement additional event tracking or add a new analytics service such as Hotjar, you will not need to make any direct changes to your website code.
2. Google Tag Manager provides you with the powerful "preview" function, which helps you make sure that tracking on your website is working correctly.
With all that said, let’s get into the first step — setting up Google Tag Manager.
Setting up Google Tag Manager
Note: In this step, you will need someone with access to the codebase of your website (such as your webmaster). Make sure they are available when you are doing this step.
Google Tag Manager Overview
Google Tag Manager operates on an account and container structure, with one account linking to multiple containers.
For example, if you own a sunglasses company (let’s call it SUN) with a mobile app and an online ecommerce store, you will create one account for your company, and two containers — one for the mobile app, and the other one for the ecommerce store.
Creating your Accounts and Containers on Google Tag Manager
To create your Google Tag Manager account, go to https://www.google.com/analytics/tag-manager/, and click the green button on the top right corner that reads “sign up now for free.”
Here, you will be prompted to sign in with your Google Account. If you don’t have one, create a new one with this link:
https://accounts.google.com/SignUp?hl=en
This Google Account should be a permanent Google account that only you can access. You can always grant viewing or editing access to other people later.
Even if someone else (such as a webmaster) manages your account, still make sure that you have your own account so they can share the ownership permission with you, or else you might risk losing your data when they stop working for your organization.
After signing in with your Google Account, follow the setup instructions on the screen to create your tag manager account and your first container, and agree to the Google terms of service.
Injecting tracking code onto your website
After completing the steps above, you will see a screen with your Google Tag Manager code.
As the instructions point out, you need to place two snippets of code onto every single page of your website, one in the header, and another in the footer.
While this may sound daunting, with modern platforms such as Shopify and WordPress, you really just need to change one file in your website directory.
If you use one of the content management systems mentioned above, or even if you have a more sophisticated web app structure, all you need to do is ask your developers to paste those snippets into the header and body sections of your theme/template files, whether that’s theme.liquid for Shopify or header.php and footer.php for WordPress.
If you do not have one of those content management systems mentioned above, your developer needs to manually put this code on every page of your website. Also, you should really look into a reliable content management system (CMS) for your website.
After your developers complete this step, you should be all set! Take a breather. Now let’s set up Google Analytics.
Setting up Google Analytics via Tag Manager
WARNING: If you are a Shopify user, Shopify has its own way of setting up Google Analytics, so please follow the instructions at this link if you are setting up page tracking for a Shopify website:
https://help.shopify.com/manual/reports-and-analytics/google-analytics/google-analytics-setup
Nevertheless, it is still recommended to set up Google Tag Manager on your Shopify website, since it enables dynamic event tracking that is beyond the capabilities of the default Shopify Google Analytics setup.
Google Analytics Overview
Let’s begin with an overview of Google Analytics.
GA (Google Analytics) is organized by a hierarchy of Accounts, Properties, and Views. A property is a website, a mobile app, or a point-of-sale device (like an external check-in service). A view is a filtered version of your website data (e.g. a common view is one that filters out employees at your company, because you want to track website visitors, not your employees’ sessions).
For example, if you only have one website, you only need one Google Analytics account with one website property. If you have two websites (e.g. a personal website and a website for your business), you can make two accounts (one for each website).
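The Account → Property → View hierarchy can be pictured as plain nested data. This sketch borrows the SUN sunglasses example from the Tag Manager section; all of the names here are illustrative, not a real Analytics configuration.

```javascript
// The Account → Property → View hierarchy as plain data (illustrative names).
const account = {
  name: 'SUN',
  properties: [
    {
      name: 'SUN ecommerce store',
      type: 'website',
      views: ['All Web Site Data', 'Exclude employee traffic'],
    },
    {
      name: 'SUN mobile app',
      type: 'app',
      views: ['All Mobile App Data'],
    },
  ],
};

// One account can hold several properties, each with its own filtered views.
console.log(account.properties.length);      // → 2
console.log(account.properties[0].views[0]); // → All Web Site Data
```

Reading the structure this way makes the later “Views” section easier to follow: views always hang off a single property, never off the account directly.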
Set up your first Google Analytics account and property
Using the same Google Account you used for Tag Manager, go to Google Analytics and sign in.
After you sign up, you can set up your account and website information (e.g. account name, website name, website URL, industry category, etc). Make sure you use the right time zone.
Get your Google Analytics Tracking ID
After setting up your account, you will be sent directly to your tracking information. If not, simply go to Admin >>> Tracking Info (it’s under the Property column).
Here, what you are looking for is the Tracking ID of your Google Analytics account. You can either write this ID down (or paste it into a document), or simply keep the tab open while we set up the tag in Google Tag Manager.
Set Up Analytics Tag with Google Tag Manager
Now, it is time to go onto Google Tag Manager to set up our Google Analytics tracking tag.
Go back to tagmanager.google.com, and select the “New Tag” button on the top left of the screen.
A tag has two components: its “configuration,” and its “triggering”. Even though those two terms might sound complicated, here’s an easy way to understand them.
Tag configuration is the destination of the data (in this case, Google Analytics). Tag “triggering” defines which data from your website gets sent to that destination (in this case, we are sending all pageview data).
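The configuration/triggering split can be sketched as a tiny data structure. This mirrors the concept only; it is not the GTM API, and the tracking ID shown is a placeholder.

```javascript
// Toy model of a tag: a destination ("configuration") plus a firing
// condition ("trigger"). Not the real GTM API — a conceptual sketch.
const tag = {
  configuration: { destination: 'Google Analytics', trackingId: 'UA-XXXXXXX-1' },
  trigger: (event) => event.type === 'pageview', // the "All Pages" trigger
};

function fire(tag, event) {
  if (!tag.trigger(event)) return null; // trigger not matched: nothing is sent
  return `${event.type} -> ${tag.configuration.destination}`;
}

console.log(fire(tag, { type: 'pageview', path: '/' })); // → pageview -> Google Analytics
console.log(fire(tag, { type: 'click' }));               // → null
```

In other words: the configuration answers “where does the data go?”, the trigger answers “which events get sent?”, and a tag only fires when its trigger matches.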
Let’s begin with tag configuration. Click on the Tag Configuration button.
Here, select Universal Analytics, and in the resulting screen, select “New Variable” under the “Google Analytics Settings” option.
In the resulting screen, set up a variable called “Google Analytics” (or whatever you’d like to call it), and enter your Tracking ID in the corresponding field. Save both the variable and tag configuration, and you are done configuring the tag.
Then, it is time to set up the tag’s “triggering.” In this case, we want to send all pageview data to Google Analytics (the default Google Analytics setting).
Click on the “trigger” area of the tag, and simply select the “All Pages” option. Then you are all set.
Save the tag, then click “Submit” on the top right corner of the tag manager dashboard. Write down whatever notes you’d like to describe the actions you took. Congratulations, you now officially have Google Analytics configured properly on your website!
Set up Views and Goals in Google Analytics
Now let’s segue back to Google Analytics to configure a few more settings to make sure that your tracking is accurate and useful.
Setup views to filter your data
The first step is to set up your “Views.” Views allow you to look at your data with certain filters applied. The default Google Analytics view is “All Web Site Data,” which is unfiltered. You should probably keep this default view so that you can always access all your raw website data.
You can add new views according to the needs of your business. Go to Admin >>> View column dropdown >>> Create new view.
For example, you can add views to filter out certain pages on your website or traffic that does not count toward your goals. I recommend adding a view that filters out employees at your company because you want to track website visitors, not your employees’ sessions.
Set up Goals to track your business objectives
Goals allow you to track important events on your webpages, such as visitors filling out a contact form or spending a certain amount of time on a product page.
For example, let’s say you want to track purchase completions on your website, measured by the number of times a website visitor sees a purchase confirmation (or thank you) page after their purchase.
To go to Goals, click on Admin >>> View (column) >>> Goals.
Click on the “New Goal” button.
Look through the goal templates to see if any of them match the event you are trying to track (e.g. “Make a payment”). If not, click “Custom.”
Give your goal an easy-to-remember name. Select Destination and click “Next Step.”
Find the URL of your confirmation page. Copy the URL segment after the “.com” (e.g. “/confirmation.html”). Paste it in the Destination field and select “Equals to” in the dropdown menu.
If you want to quantify the monetary value of each time this goal is completed, type the dollar value of that action. Click Create Goal.
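As a sketch of what the Destination field does behind the scenes, here is how the three match types decide whether a page path counts toward the goal. The function and the example paths are illustrative; only the match-type names come from the Analytics interface.

```javascript
// Sketch of how a Destination goal matches a page path, for the three
// match types offered in the goal setup screen. Paths are illustrative.
function goalMatches(matchType, pattern, pagePath) {
  switch (matchType) {
    case 'Equals to':
      return pagePath === pattern;
    case 'Begins with':
      return pagePath.startsWith(pattern);
    case 'Regular expression':
      return new RegExp(pattern).test(pagePath);
    default:
      return false;
  }
}

console.log(goalMatches('Equals to', '/confirmation.html', '/confirmation.html')); // → true
console.log(goalMatches('Equals to', '/confirmation.html', '/cart'));              // → false
console.log(goalMatches('Begins with', '/thanks', '/thanks/order-123'));           // → true
```

“Equals to” is the safest choice when your confirmation page always has exactly the same URL; switch to “Begins with” or a regular expression when the URL carries order numbers or query strings.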
This is an example of tracking a simple conversion funnel.
You can create up to 20 goals for each website property. Common goals that Google Analytics can track for your business include:
Lead contact form submissions
Email list sign ups
Purchase completions
Visiting pages that suggest an intent to purchase
A certain number of pageviews per session
Spending a certain amount of time on your website in a session
You can then use these goals to add website visitors to retargeting lists for Google AdWords campaigns.
(Image credit: PPC Champ blog)
For example, I may want to set up a goal such that if a visitor visits more than 3 pages and spends more than 2 minutes on my website, AdWords will add them to a retargeting list, because I see these visitors as “hot leads.” AdWords can then automatically retarget them with display ads and Gmail ads.
Check out this Google Analytics support page to learn more about how to properly configure goal tracking in Google Analytics.
Need Help Setting Up and Configuring Google Analytics? We’re Here to Help
We hope you found this short step-by-step walkthrough of setting up Google Analytics (and Google Tag Manager) helpful. If you have any questions or feedback, please shoot us an email at [email protected].
Coincidentally, we are offering to properly set up and configure Google Analytics (including reporting and goal tracking) for our beta testers, so let us know if you’re interested in giving us feedback on our tool.
Tune in for our next step-by-step tutorial on how to set up additional event tracking for Google Analytics through Google Tag Manager, along with tips on using the “preview” function of Google Tag Manager to make sure all your tracking is functioning correctly on your website.
Reserve x CertiK Technical Partnership

Meet CertiK
Reserve has been working closely with our friends and partners at CertiK to ensure the security of the smart contracts that comprise our stablecoin protocol. We wanted to take the time here to shine a light on their services and the ways in which they have contributed to the Reserve mission.
CertiK is a blockchain and smart contract verification platform. It was founded by former senior software engineers from Google and Facebook and formal verification experts from Yale and Columbia University. CertiK is unique among software auditing firms in the blockchain space, in that they perform formal verification services. Formal verification is a process that mathematically shows what a program does and how it may encounter errors. CertiK has received investments from Binance Labs, DHVC, FBG Capital, Bitmain, and Lightspeed. They have also formed partnerships with exchanges like Binance, OKEx, and Huobi, as well as blockchain projects like NEO, ICON, and QuarkChain.
To learn more about formal verification, head over to CertiK’s blog, where they discuss the topic in more detail.
CertiK and Reserve
Our RSV token is a decentralized stablecoin that is backed by a basket of stablecoins including USDC, PAX, and TUSD. Our goal is to make RSV a dollar-independent and universally-accepted currency for people and businesses in emerging markets with high inflation and significant financial frictions.
CertiK has and will continue to work with our team to secure the RSV’s Ownable, ReserveEternalStorage, Reserve, Vault, Basket, Proposal, and Manager smart contracts, as described in our whitepaper.
We chose CertiK to formally verify our contracts because of their unmatched reputation for quality assurance and professionalism within the industry. After carefully examining the architecture of RSV v2, CertiK concluded that we have “implemented a well-designed system” for our tokens. They also noted that careful checks are performed during each critical operation, such as issuance and redemption, to ensure that RSV stays collateralized. Every operation is associated with the correct access control to prevent untrusted parties from manipulating the state of the protocol.
Conclusion
Reserve’s mission to become the world’s next reserve currency and to provide sound money to those countries that need it most involves taking on a great deal of responsibility. As such, we want to make sure that we are putting forward the best product possible. We are proud to work with CertiK to ensure the software we release meets the highest security standards. Their industry-leading formal verification tools have been, and will continue to be, a great benefit to the Reserve mission.
A start-up language start-up

Welcome back to the View from the Monolith! I’ve spent the last few years siloed in a monolithic organisation, but now I am adventuring in the world of start-ups. In this blog, I share the learning I glean along the way.
Every industry has its language: words or phrases that mean something specific in that culture which baffles outsiders who might stumble across them. The language of start-ups is a hybrid of business terms and technical ones, so depending on your background, some or all may already be familiar to you. But in today’s blog post, I thought I’d offer up some of my learning: phrases that were new to me (well, except one of them).
Almost have it…
Ideation. The creative process of forming ideas. Of “ideating”.
The concept this word embodies is exactly what I assumed it would be, but I was still surprised to discover it was a real word in the dictionary and everything. Apparently, it goes back to the 1820s, so perhaps I’m the only person who hasn’t encountered it before. Nevertheless, it seems it’s a common word around start-ups.
Incubator. An early-stage program, incubators provide a venue to develop and refine ideas.
Often established in a specific location, or focussing on one market, incubators help entrepreneurs at the very early stages, developing ideas into viable start-ups.
Loot!
Angel Investor. Provides the seed money to get the start-up moving.
They invest capital in the company, usually in exchange for an equity share. This generally happens before Venture Capitalists get involved, and Angels would typically be investing less money than a VC would.
Full disclosure, I had heard the term Angel Investor before, but I really wanted to draw an angel with a bag of money. I think she turned out quite well.
Accelerator. A program aimed at really getting a start-up moving.
Accelerators will provide seed money and a mentorship program which help start-ups accelerate their growth, generally for the fixed duration of the program. One of the most well-established accelerators is the Y-Combinator program.
Y-Combinator. The Y-Combinator program, often abbreviated to YC. Originating at Stanford University, it teaches entrepreneurship skills and provides seed investment for start-ups. Much of the training program is available online. For start-ups accepted into the program, YC acts as an accelerator.
Jack Attack!
Growth Hacking. Finding and using tactics to keep the start-up growing.
As a writer, words fascinate me, and ‘hacking’ is so interesting. Most of the older uses of the word are destructive or at least negative; viewed in that light, it’s easy to imagine growth hacking as undercutting growth. But, of course, that is not the case. In recent years, a ‘hack’ has come to mean a strategy, trick (or Shortcut!) to achieve something. So growth hacking becomes finding ways to keep your start-up growing; an essential element of success.
Runway. How long until the money runs out.
When I first heard this term, I had it the wrong way round. If your start-up has a runway of two months, that doesn’t mean two months to launch; it means that after two months your runway, and your cash, runs out, and you’re not taking off at all!
Thanks for reading. I may revisit this idea as my own understanding of start-ups grows, but for now, it’s time to move on to something practical. Join me next time when I share with you the advice we’ve received about recruitment at start-ups. And if you want to read this series from the start, check out the first View from the Monolith.
11 Ways to Immediately Change Your Life for the Better

Here are some ways to live better.
#1. Say “No” — a lot.
Say yes to the right things. Say no to everything else.
Learn to say no to those things you don’t want to do, those non-essentials in your life, tasks that aren’t adding value, but rob you of your energy.
You know the ones: the ones you say “yes” to and immediately regret.
Those.
Say yes to those events, people, and tasks that fill your life with gratitude, joy, and value.
You know the ones. They are the ones that when you say yes, you are immediately excited and look forward to them.
Say no to any distraction that interferes with getting the life you want a year from now.
Time is a limited resource, so when you say no to what doesn’t matter to you, you will have more time for what does matter to you, whether that is writing, your business, your health, or your loved ones.
If you make your moments matter, everything will eventually fall into place because it’s important to you.
And remind yourself often that “No” is a complete sentence.
Those with boundaries will understand your no.
Those who don’t. Oh well.
Caveat — If you have children, you will not be able to say no to the many things you want to say no to. Like picking up your teen at their friend’s house at 9:00 pm when your yes includes reading Alexander Hamilton by Ron Chernow in bed (I’m on page 254; I’m getting through it this time!).
Pro tip — Teach your children to be self-sufficient from an early age. They should know how to clean up after themselves, how to do their chores without being reminded, be able to do their own laundry, make meals for themselves, and figure out solutions to problems before coming to you. Teach them grit. When you teach self-sufficiency, it will allow you to say no more often.
Attention Management

We are, as Tim Herrera puts it, ‘overstimulated, under-focused navigators of the modern world’. Every which way we look there is a gadget of some kind, some sort of distraction, waiting for us. The old world had distractions too, but nothing of the magnitude we find today. With this come two buzzwords: productivity and time management, and they are both hornets’ nests.
The problem with time management
Productivity is generally a reference to getting significant work done. It is about achieving a meaningful number of self-defined accomplishments consistently. And the most popular route to that is time management.
The trouble with time management, though, is that we are severely limited by design: we have 24 hours in a day and, as Adam Grant wrote in the New York Times, ‘focusing on time management just makes us more aware of how many of those hours we waste’. Further, time management can often ironically go against our priorities. In an interview with Roger Dean Duncan for Forbes Maura Thomas says-
Time management teaches us to say ‘no’ more often and ‘do less’. But saying ‘no’ deprives the world of those unique gifts. And because many people ‘have to’ work, the things they say ‘no’ to tend to also be the things that nurture and sustain them: things like hobbies, recreation, family time, and volunteer activities.
So it is settled. Time management can potentially be a waste of time. The real solution, as a series of Times newsletters brought to my attention all through 2019, is attention management.
Attention management, unlike time management, is not corporate speak. It is an approach backed by science and incredible spirit. It is about prioritising the people and work that matter with the understanding that when something really matters it makes little difference how long it actually takes in your day. By managing attention rather than time priorities work their way into the system inherently. ‘Attention management’, says Adam Grant, ‘is the art of focusing on getting things done for the right reasons, in the right places and at the right moments.’
The reason time management became as popular as it is today is because it redefined the meaning of productivity in our lives. But it is important to remember, as Dr Grant says, ‘Productivity isn’t a virtue. It’s a means to an end.’ Productivity and time management rely on will power, understating the why will automatically draw your attention to a task and, says Dr Grant, ‘you will be ‘pulled into it by intrinsic motivation’.
Why attention management works
I have previously spoken several times about intentional living. Choosing what we do carefully, no matter what mould it fits into or does not, knowing that every step we take is intentional, goes a long way in improving our quality of life. Little did I know that in describing intentional living I was in a way describing attention management. ‘Attention management allows us to be more proactive than reactive,’ says Ms Thomas. ‘It allows us to live lives of choice rather than reaction and distraction.’
Such choice is key to attention management. Whereas time management lets you allot time slots for whatever tasks come your way and asks you to cull them as they come, in effect making you flail around in the winds of chance, attention management takes a fundamentally reversed approach and asks that you pick your tasks based on what they mean to you and devote your attention to them.
Think in terms of meaning and focus rather than time. Do not focus on when and how quickly you want to finish something, focus instead on why you want to do it. That will justify why a task deserves your attention and time, and if it does deserve all that, you will have no reason not to focus on that task.
Managing distractions
The key to managing attention is identifying obstacles. There are two types of obstacles: actual distractions and perceived distractions. The former we are quite familiar with; the latter is less precisely spelt out although we are all probably aware of it.
‘Intrinsic distraction’, as I like to call perceived distraction, is a problem that often goes unrecognised. Ms Thomas points out, ‘Even when there is no distraction, we distract ourselves by expecting one.’ This tends to add up quickly and has the effect of unaccomplished tasks demoralising us and making us feel unproductive. Since productivity has been linked so often and so closely with time management, that is often all we look at while we seek a solution, leading to a vicious circle.
There is renewed focus on attention management today thanks to the information age in which we live. Unlike before, when we accessed data as we needed and with specificity (say from books in a public library), we are now surrounded by data that itself beckons us constantly and often even commands us, with no target, motivation, rhyme or reason. The fact that we can get quick answers, no matter how little they may be vetted, prompts us to constantly seek answers. We are addicted to distracting ourselves because of its convenience and reliability.
However, the fact that gadgets are our primary distraction today does not mean we must advocate for an ‘unplugged lifestyle’. Tim Herrera calls this ‘a silly idea that is an impractical solution to a practical problem. Rather, the point is to notice your surroundings, to be mindful of the world you’re navigating, and to give yourself permission to slow down and just … observe.’ Once again, this is about being mindful of what we give our attention to-this is about living intentionally.
Attention residue
The big question that needs to be answered is ‘Why?’. Why should we choose what we pay attention to? Why not pay attention to everything? In understanding this I like to draw attention to the phrase ‘paying attention’. Think of attention as something you keep in your wallet. You pay it out every time you choose something to pay attention to. That means you will soon run out of it and can only rejuvenate it (see below), say, the following day. So choosing what we pay attention to is important for attention management. After all, there is hardly any management needed if a resource is infinite.
The technical term for this is ‘attention residue’. Attempting a tough or disliked task soon after an interesting one can make it harder to finish the disliked task because your capacity to pay attention has drained out. Studies on such ‘contrast effects’ of attention support the classic advice to start with what you dislike or find tough and then set aside tasks you like as a reward for later. While managing attention, always keep track of attention residue.
Cal Newport writes about attention residue in his book ‘Digital minimalism’. Mr Newport is an ardent advocate of quitting social media altogether, and Mr Herrera surprisingly agrees with him despite having called the idea impractical (see above). In any case, this is what Mr Newport says of attention residue:
Every time you switch your attention from one target to another and then back again, there’s a cost. This switching creates an effect that psychologists call attention residue, which can reduce your cognitive capacity for a non-trivial amount of time before it clears. If you constantly make “quick checks” of various devices and inboxes, you essentially keep yourself in a state of persistent attention residue, which is a terrible idea if you’re someone who uses your brain to make a living.
Despite liking his books, such as ‘Deep work’ and ‘Digital minimalism’, I have always disagreed with Mr Newport’s insistence that quitting social media is the only option. It reminds me of the vain attempts many made all through history to oppose change and new technology. Change on a natural level cannot be fought; it can merely be adapted to. Limiting social media use, for example, and cutting down the number of social networks we are active on to no more than a couple is often a more pragmatic approach.
Rejuvenation
The fact that we can run out of our capacity to pay attention means we must find ways to rejuvenate ourselves and refill our stores of attention. This calls for a balance between our focused and diffuse modes of thinking.
Paying attention means working in our focused mode. It gathers a lot of energy, sets up neural patterns and allows us to accomplish tasks via honest work and efficiency. This can be strengthened in the diffuse mode, where we are not focussing on any specific activity and are instead lost in thought. This includes a casual walk, sleep, exercise and various forms of relaxation. The diffuse mode strengthens the neural patterns set up in our focused mode allowing us to focus better over time. Like life itself, what we need is a balance between the two.
Ms Thomas says, ‘Our challenge is that now in any pause of activity, we immediately pull out our phone, and engaging with e-mail, social media, or other communication tools destroys the opportunity to daydream.’ She calls the diffuse mode ‘in-between moments’. She continues, ‘When we are daydreaming, we’re not actively controlling our thoughts, we aren’t focused on anything in particular, and we don’t have a lot of external stimulus. This is when our minds can wander and “stumble” into connections and insights that are otherwise crowded out.’
Striking this balance can prove to be rejuvenating and enriching to our life both immediately (trust me, I have experienced it) and in the long run.
Achieving this can be easy and fun too and never takes time for itself. Rejuvenate your attention by making small changes in how you live rather than by making it another big task in your day. Rob Walker, the author of ‘The art of noticing’ asks his readers to practise noticing things that they normally would fail to notice: ‘Walk to every corner of [a] building and just see what you see. Off to the doctor? Stay off your phone in the waiting room and … notice the people around you.’ This boosts intentional living and gives your mind pause from the constant onslaught of information it is otherwise subjected to.
The next time you find yourself wondering how you can accomplish things in your day, stop allotting time to everything. Instead ask yourself what really matters to you and choose how you allot your attention. Time will fly like it always does, but at least it will be pleasant this time-and it will be worth it.
This article first appeared on vhbelvadi.com.
How to Start a Blog to Promote Your Book — Part 1

Kary Oberbrunner · Mar 9
Photo by Lacie Slezak on Unsplash
If you’ve already published your book, you probably know that it takes more than simply hitting that publish button to reach your readers. To understand the challenges that lie ahead of you, consider how many books are published on Amazon every day. According to an article published by TechCrunch in 2014, there was one new book published on Amazon’s website every five minutes. Two things are important to note about this statistic:
That statistic is six years old, so you can imagine how much things have changed in today’s publishing world. This only accounts for the books published on Amazon’s website, and it wouldn’t surprise me if there were a large number of books published on other platforms at different rates.
So, with so many new things to read, it becomes increasingly challenging to find readers who are interested in what you’re writing about. One thing that can help bring your readers right to your front door is blogging. We’ve created a ten-step guide to getting you through writing your first blog post. (The first five steps are in this blog post, and we’ll continue with part two in our next blog post.)
Step 1: Find your blogging platform.
We recommend hosting your blog directly on your author website, because if readers enjoy your blog posts, there will be much more information for them to discover about you, your books, and what other products and/or services you have to offer.
However, there are several other platforms available for free:
Medium
WordPress
LinkedIn
Guest blogging on other established blogs
Step 2: Design your content.
Since you don’t want to start writing random blog posts, you’ll need to create a structured plan that outlines exactly what your blog is about. To find readers who might be interested in your book, there are two ways you could tackle that:
1. You could write about themes or subjects related to your book.
2. You could write articles that may interest your ideal reader.
The first is more focused on your specific book, and the second has a broader focus on a variety of topics that will appeal to your readers.
Take some time to brainstorm between three to ten categories you’ll write about on your blog. Then, underneath each main category, you can create a few subcategories that will guide what you write about within each category.
Step 3: Create an editorial calendar.
First, decide how often you’d like to post. Some bloggers post once a month, some once a week, and some once a day. Think about how much time you have available to commit to blogging and pick a posting frequency that works well for you. If you’ve never blogged before, you might want to start out slow in the first few months. You can start out by posting every other week. Then, after one or two months of being consistent, you may increase that by publishing one blog post every week. And, if you’d like to start posting more, you can work your way toward posting two blog posts per week.
Step 4: Research your topics.
There are two types of research we recommend doing before you start brainstorming ideas:
Do Internet searches for blogs on your topic. This will help you identify where there are holes in the market, and you can help solve that problem by writing content that doesn’t yet exist. This will help you gain favor with the search engine algorithms (the computers that calculate your ranking on Google and other search platforms), and it will also tell your readers that you have something unique to offer over what the other bloggers do.
Do some keyword research. It’s not enough to write on a unique topic; you must also find a unique topic that your ideal reader is searching for. You can find out how popular certain keywords are by doing keyword research. There are tons of keyword research tools out there that will tell you exactly how many people are searching a particular topic every day, and they will also give you stats on other keywords related to the one you searched. These can be powerful tools if you use them regularly.
Step 5: Start brainstorming blog ideas.
The easiest way to break this down and get a long list of ideas is to come up with 10 to 20 ideas for each category you listed in Step 2. If you have ten categories, you can easily generate a list of 100 within a few hours just by listing ten ideas for each category. Then, when you’re ready to start developing ideas for a post, you can go to your list for an idea you can start writing about immediately.
Want to learn some copywriting trade secrets? One tool we feel will really help you step up your game is Author Wizards. With at least 50 copywriting tools to minimize the amount of time you spend writing, you’ll streamline your content marketing process for social media, blogs, eBooks, marketing copy, landing pages, and much more.
Check back here next week for the next five steps to launching a blog to promote your published books.
Do you already have a blog? Comment below with a link to your most popular blog post so our readers can get inspired from the hard work you’ve done to build your own blog. Or, if you don’t already have a blog and want to start one, what do you want to start blogging about? We’d love to know!
About the author:
Tina Morlock is a full-time freelance book editor and a published science fiction and fantasy novelist. Her next novel, The Sins of Story: Memoirs of an Angel, is an epic Christian fantasy told from the viewpoint of an angel who is working to help God solve the suicide epidemic on Earth. | https://medium.com/author-academy-elite/how-to-start-a-blog-to-promote-your-book-part-1-47c0d49f8735 | ['Kary Oberbrunner'] | 2020-03-09 22:46:11.406000+00:00 | ['How To Write', 'Authors', 'Writing Tips', 'Books'] |
How to Integrate a Dialogflow Bot with React | How to Integrate a Dialogflow Bot with React
A step by step guide to integrating the Dialogflow bot into a React app
Prerequisites
To get started, you will need a Dialogflow bot or working knowledge of Dialogflow and React. To integrate the chatbots with React, you will need a Kommunicate account. All the aforementioned tools are free to try out. Additionally, Node should be installed on your system.
Steps to integrate Dialogflow bot in React websites
I am going to explain how I connected Dialogflow and React with the help of Kommunicate.
Note: To keep things simple and straightforward, this tutorial uses basic, plain code. Also worth mentioning: this project can be found on GitHub.
Step 1. Create your Dialogflow bot
To get started, you can easily create a chatbot in Dialogflow. It is a very intuitive yet powerful chatbot building tool. Here’s a sample available to help you get started with your Dialogflow bot. To go further, you can create your own Intents & Entities.
Step 2. Integrate Dialogflow chatbot with Kommunicate
To integrate your Dialogflow bot in Kommunicate, log in to your Kommunicate dashboard and navigate to the Bot section. If you do not have an account, you can create one here. Locate the Dialogflow section and click on Integrate Bot.
Now, navigate to your Dialogflow console and download the service account key file. Here are the steps to locate the file:
Select your Agent from the dropdown in the left panel.
Click on the Settings button. It will open a setting page for the agent.
Inside the General tab, search for GOOGLE PROJECTS and click on your service account.
After getting redirected to your SERVICE ACCOUNT, create a key in JSON format for your project from the Actions section and it will get automatically downloaded.
Now upload the Key file.
In the bot profile section, you will be able to give your bot a name. This name will be visible to the users whenever they interact with the bot. Process further and fill in the details.
Dashboard → Bot Integration → Manage Bots: You can check all your integrated bots here.
Dashboard → Bot Integration: Your Dialogflow icon should be green, with the number of bots you have successfully integrated. You will also have the option of testing your newly created bot here. (See image below)
Complete the setup and then you can check and test your newly created bot.
Step 3. Create a React app
Create a new React app (my-app) by using the command:
npx create-react-app my-app
Step 4. Now, navigate to the my-app folder
cd my-app
Step 5. Create a new file chat.js inside the src folder
Once you have created chat.js, add the code below inside componentDidMount. This code will launch a chat widget on your website with the integrated Dialogflow bot. Make sure to replace <YOUR_APP_ID> with your Kommunicate application ID.
You can get this code in the install section of Kommunicate as well.
(function(d, m){
    var kommunicateSettings = {"appId":"<YOUR_APP_ID>","popupWidget":true,"automaticChatOpenOnNavigation":true};
    var s = document.createElement("script"); s.type = "text/javascript"; s.async = true;
    s.src = "https://widget.kommunicate.io/v2/kommunicate.app";
    var h = document.getElementsByTagName("head")[0]; h.appendChild(s);
    window.kommunicate = m; m._globals = kommunicateSettings;
})(document, window.kommunicate || {});
Here’s a screenshot of the code editor for the same:
Step 6. Import KommunicateChat component in App.js
Import the KommunicateChat component in your App.js file. Here’s the screenshot of the code editor:
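In place of the screenshot, here is an illustrative, untested sketch of App.js; the surrounding markup is an assumption, and any component tree that renders KommunicateChat once will work:

```javascript
// App.js -- import the chat component and render it once anywhere in the app.
import React from "react";
import KommunicateChat from "./chat";

function App() {
  return (
    <div className="App">
      <h1>My website</h1>
      <KommunicateChat />
    </div>
  );
}

export default App;
```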
Step 7. Start your app locally
Use the following command to start your newly created website with the installed Dialogflow bot.
npm start
Voila! How simple was that? In these few simple steps, you can integrate a Dialogflow bot into a React website. This is how the chat widget looks on a website:
Wrapping Up
It is fairly simple to add a rich-text-enabled chat widget with a Dialogflow bot to your React website. You can further customize the chat widget to match your website colors and theme. | https://medium.com/javascript-in-plain-english/how-to-integrate-a-dialogflow-bot-with-react-ece565324a1a | ['Devashish Datt Mamgain'] | 2020-11-02 19:02:16.809000+00:00 | ['Web Development', 'Chatbots', 'React', 'Bots', 'Programming'] |
How Do We Get Speed, Innovation and Engagement? | How Do We Get Speed, Innovation and Engagement?
A Leadership Journey Guided by Alignment for Autonomy
Photo © Ericsson
“Too many organizations are trying to control the waves instead of learning how to surf.” Mary Poppendieck paraphrasing Allen Ward
“Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity.” George S. Patton
Introduction
It all starts from the simple question: How do we know if we make the right decisions and take the right actions? The profound answer is we don’t since we cannot predict the future. Now, let’s talk about what we can do about it!
Over the years, as a leader of hundreds of people and leadership teams in large-scale, complex product development, I have learned to trust people and involve them by co-creating maximum alignment on our intent (what & why). This alignment then enables maximum autonomy in actions, decisions and implementation (how), which gives speedier product development, more innovative products and higher engagement across all teams and people in the organization.
Volatility, Uncertainty, Complexity, and Ambiguity
In product development we want results, and to reach results we make plans, which lead to actions and decisions, which will — hopefully — lead to the desired results, i.e. fulfilling the needs of our stakeholders. Since we don’t always get the desired results, we ask ourselves the three eternal questions of product development (thanks, Jabe Bloom!), as illustrated in Figure 1 below (thanks, Stephen Bungay!):
Figure 1: The three eternal questions in product development. Source: Stephen Bungay and Jabe Bloom. Illustration: Erik Schön
Why can’t we plan better? Why wasn’t the plan followed? Why didn’t the plan work?
The simple answer to the three eternal questions is that the world is Volatile, Uncertain, Complex, and Ambiguous (VUCA).
VUCA is further illustrated in Figure 2 below and the meaning of the components is:
Volatile: The nature and dynamics of change, and the nature and speed of change forces and change catalysts.
Uncertain: The lack of predictability and the prospects for surprise.
Complex: The multiplex of forces and dependencies and the confusion that surround an organization.
Ambiguous: The haziness of reality, the potential for misreads, and the mixed meanings of conditions; cause-and-effect confusion.
Figure 2: The world is Volatile, Uncertain, Complex and Ambiguous (VUCA). Illustration: Erik Schön
The common usage of the term VUCA began in the 1990s and it derives from the military. VUCA (and similar terms like friction and entropy) has subsequently been used in a wide range of organizations, including everything from for-profit corporations to education.
Figures 3 and 4 below show people waiting for the announcement of the new pope, first in 2005 and then in 2013, and illustrate how the Apple iPhone, first launched in 2007, transformed the lives of hundreds of millions of people. A consequence is that the data traffic in the mobile networks is doubling every 12–18 months, making the growth exponential and faster than Moore’s law. Hence, this is a great example of today’s VUCA world.
Figure 3: Announcing the new Pope in 2005. Photo © Luca Bruno/AP
Figure 4: Announcing the new Pope in 2013. Photo © Michael Sohn/AP
What I have seen as a trainer in product development leadership training is that the VUCA concept resonates very well with the learners since it is consistent with their experiences from development of different types of products (hardware, software, hardware+software) in different locations globally by different types of teams and organizations.
So, given that we live in a VUCA world, how do we survive and thrive?
Alignment for Autonomy to Thrive in A VUCA World
How do you get the desired results and overcome VUCA in product development? You can change the actions and decisions, change the plans, and, create alignment on the wanted results as illustrated in Figure 5 below.
Figure 5: How to get the desired results in a VUCA world. Source: Stephen Bungay. Illustration: Erik Schön
Decide what really matters to you and your team: Use knowledge and experiences combined with interaction and involvement to work out your wanted outcome or wanted position. Formulate strategy as intent, i.e. focus on “what” and “why”. Avoid preparing detailed, perfect strategies and plans. And, remember “The Spice Girls Question” (which is not really a question): “Tell me what you want, what you really, really want!”
Get the message across to everyone involved: Communicate intent, i.e. clearly state “what” and “why”. Then ask people how they are going achieve the intent to secure that everyone involved truly understand the situation and the intent. Next, trust your people since this will make them feel responsible for carrying out their part. Avoid telling people exactly what to do and how to do it.
Give everyone space and support: Give people space within boundaries to make decisions and take actions. Encourage people to adapt their actions to realize the overall intentions and encourage initiative! Avoid too tight control mechanisms, reporting and too detailed metrics.
This is summarized in Figure 6 where we see how we use both alignment and autonomy as explained by Stephen Bungay.
Figure 6: Alignment and autonomy for thriving in a VUCA world. Source: Stephen Bungay. Illustration: Erik Schön
Alignment for Autonomy Gives Speed, Innovation, and Engagement
The reason we get higher speed is that we avoid searching for more information to make the perfect plan, potentially ending up in analysis paralysis. Instead we focus more on a clear description of intent, i.e. the “what” and “why”. Moreover, we avoid specifying details in the plan and instead we give people, teams, or organizations freedom within boundaries and in line with intent to figure out “how” to make it happen. Also, we avoid too tight control, follow-up and reporting of metrics, and, finally, we encourage initiative: when you know “what” and “why”, you can quickly figure out what to do when the situation changes without going up and down the chain of command asking for permission and directives.
In addition to higher speed, we also get more innovation since when people truly understand the intent (“what” and “why”) and are given autonomy within boundaries to figure out “how”, they will surprise you with their ingenuity in figuring out solutions according to Bungay and George S. Patton. This is delivering value from ideas which, by definition, is innovation. Additionally, Amabile and Kramer’s research shows that making progress in meaningful work ignites joy, engagement, and creativity. The speed in reaching results is equivalent to making progress and that the work is meaningful is a consequence of clearly communicating the intent or purpose, i.e. “what” and “why”. The creativity will generate ideas and turning these ideas into decisions, actions, learning, and ultimately products — and these all have value, which, again by definition, is innovation.
Finally, we also get more motivation and engagement, since, as Pink has shown, autonomy, mastery and purpose motivate us. Here we have autonomy on how to act and decide, and, purpose from the intent (“what” and “why”). Additionally, as we saw in the previous paragraph, making progress in meaningful work ignites joy, engagement, and creativity. Here, the work is meaningful thanks to a clear intent or purpose; that we make progress follows from the speed in reaching results, thus we get more engagement. Moreover, Deming has stated that instead of extrinsic motivation people only need to know why their work is important, which follows from a clear intent, i.e. “what” and “why”.
Why Do You Need Boundaries or Constraints?
You need boundaries or constraints to interact and cooperate with your surroundings in an effective way and to provide clarity; when boundaries or constraints are unclear, most people will not explore, but rather keep their head down and play it safe. Specifying boundaries is like marking out a minefield. If land mines are known or rumored to exist but are unmarked, people will not move.
What Happens If Your Boundaries Are Too Tight?
The boundaries (or constraints) should be as few as possible, to avoid becoming a “straitjacket” that slows us down and kills innovation and engagement.
How Do You Balance Alignment and Autonomy?
You don’t have to since this is not a balance act or a trade-off curve. You can have both high autonomy and high alignment as shown in Figure 7 and explained below.
Figure 7: Alignment for autonomy. Source: Stephen Bungay. Illustration: Henrik Kniberg
The bottom left quadrant is low alignment and low autonomy which is a micromanaging organization with an indifferent culture, i.e. follow the orders without being given a purpose.
The top left quadrant is high alignment and low autonomy. Here leaders are good at communicating what problems to solve, but they are also telling people how to solve them. This is an authoritative organization with a conformist culture where people are expected to follow the orders and understand the purpose, but not really think for themselves or try new things.
The bottom right quadrant is low alignment and high autonomy. This is an entrepreneurial organization with a chaotic culture where teams do whatever they want and the managers are clueless what to do and what’s going on.
The top right quadrant is high alignment and high autonomy. This means that leaders focus on what problem to solve and let the teams figure out how to solve it. This quadrant is where the magic happens and the result is more engaged employees and more innovation in both ways of working and products; hence this is where we want to be: aligned autonomy or alignment for autonomy.
Examples of Alignment for Autonomy
Alignment for autonomy is scalable and it works on individual level, team level and organizational level. Here are three examples from a product development organization (2000+ people in 10+ locations) where I worked that achieved the following overall results over a five-year period:
Throughput: value delivered increased by 400%
Speed: Median feature lead-time decreased from 100 to 36 weeks
Quality: Faults at customers decreased from 250 to 40 per month
Motivation: employee motivation increased from 67% to 72%
Alignment for Autonomy for Individuals
Figure 8 below shows how our product development organization worked with alignment for autonomy for our managers.
Figure 8: Alignment for autonomy for individual managers. Illustration: Erik Schön
The intent was to use alignment for autonomy as a leader and/or manager to get speed, innovation, and engagement. The product development organization achieved a common understanding of the intent through several interactive seminars for all leaders and managers.
Given the intent of using alignment for autonomy and the boundaries being the boundaries of the organization’s leadership framework, the leaders defined the manager role in a Lean/Agile context (the “how”) as follows.
We teach, coach and challenge: individuals, teams and organizations regarding Agile/Lean and end-to-end flow based on own learning and experiences.
We develop organizations: “manage the system”, develop the culture e.g. creating conditions for “continuous improvements” and “innovation”.
We develop teams: team composition, framework for high-performing teams, team rotation between programs, …
We develop individuals: recruitment, targets, feedback, development plans, salary, …
To adjust the “how” in line with intent, i.e. adjust the manager role, a coaching training program was established together with a coaching network open for both Lean/Agile coaches and managers to share and learn from experiences, theory, games, and simulations, and, a Community of Practice (CoP) for line managers to give time and space to reflect together on the new expectations on the manager role, how it works, and, how it could be further improved.
The result was higher motivation and engagement in the organization as measured in employee satisfaction surveys.
Alignment for Autonomy for Teams
Figure 9 below shows how our product development organization worked with alignment for autonomy our cross-functional, co-located, semi-permanent development teams.
Figure 9: Alignment for autonomy for development teams. Illustration: Erik Schön
The intent was high-performing teams (“what”) to outlearn competition (“why”) and this was emphasized whenever a new team was started and regularly by the head of the product development organization.
The constraints (or boundaries) were: start with either Scrum or Kanban, use Git as the software version control handling system, and, the expectation that all teams spend 30% of their time on learning, improvements, and innovation.
To outlearn competition, we set up a regular sharing and learning cadence: the second Tuesday of every three-week sprint was “Learning Day”. This was an internal multi-track conference with internally created content, e.g. customer feedback, tools presentations and training, inspiration sessions on new ways of working, coding dojos, … Additionally, developers and agile coaches started Communities of Practice (CoPs) based on the needs of the development teams, e.g. for ways of working and tools.
To adjust the “how” in line with intent, all teams did retrospectives to learn and improve at the end of each sprint, irrespectively of if they do Scrum, Kanban or a hybrid.
The result was flexible, confident and innovative teams that dare to go into new areas, products and technologies and get up to speed within 1–3 months!
Alignment for Autonomy for Organizations
Figure 10 below show how we used alignment for autonomy to develop strategy and making strategy happen in our organization.
Figure 10: Alignment for autonomy for an organization. Illustration: Erik Schön
The intent was to secure understanding of the “what” and “why” of the overall strategy in interactive workshops with everyone in the organization. Before this, a revised strategy had been formed in collaborative strategy development workshops with leaders of the organization.
We converged to a quarterly challenge rather than Balanced Scorecard (BSC) or Objectives and Key Results (OKR) for “how” to make the strategy happen. The boundaries for what we wanted was the following:
as few targets as possible, preferably only one
every person and team in the organization should feel that they could contribute to the target
the target(s) should be based on current (biggest) need(s) of the organization
the target(s) should contribute to making the strategy happen
The strategy was then revisited and adjusted, if needed, in quarterly strategy retrospectives. Then, as a part of the strategy retrospectives, a new quarterly challenge was set if the previous challenge had been fulfilled or the organization’s needs had changed. If not, the quarterly challenge continued during the coming quarter.
Here’s an example of a quarterly challenge that we used: Every sprint deployed to a live network with added value and higher quality.
Our programs and development teams then used the quarterly challenge to come up with activities that they wanted to do in order to contribute to making the challenge, and thus also the strategy, happen.
Alignment for Autonomy Exercises
When training product development leaders in using alignment for autonomy, we have found the following exercises useful.
Where’s North?
Ask everyone in the room to take a good look around the room and its surroundings. Then ask the participants to stand up, close their eyes and point their right hand towards north. This very simple exercise shows the importance of alignment of direction before starting to move.
Alignment for Autonomy: Current and Wanted State
Pick one topic from the following list:
something from your own organization
managing the test environment
handling trouble reports
doing product strategy
securing company wide knowledge sharing
using metrics/KPIs
For the topic of your choice, put it in a suitable quadrant in the alignment/autonomy four-field matrix and explain your reasoning for
The current situation
The wanted situation
The Autonomy and Alignment Experiment
The Autonomy and Alignment Experiment by Stephan van Rooden is a great simulation that shows how people feel and react when different groups are given a simple task with different combinations of high or low autonomy and alignment.
Summary
Alignment happens when leaders and teams work towards a common purpose or goal. Autonomy helps teams to work independently of leaders and each other. The stronger alignment we have, the more autonomy we can grant. This will give us speed, innovation and engagement.
The leader’s job is to communicate what customer need or problem should be solved, and why, or, even to co-create this together with the teams. The team’s job is to collaborate with each other and other teams to find the best solution fulfilling the need and have the autonomy figure out how to do it.
Alignment for autonomy works for individuals, teams and even organizations; in fact, the bigger the organization, the more important alignment for autonomy gets — to distribute decisions in order to unleash innovation close to customers, and, to close the gap between strategy and implementing the strategy.
Would you be willing to try?
For Further Inspiration and Learning
This write-up is based on the following presentations:
Erik Schön: How Do We Get Speed, Innovation and Engagement?
Erik Schön: R&D Leadership — Be Brave and Mind the Gaps
Erik Schön: Doing Strategy the Interactive & Flexible Way
Erik Schön: Ways of Working in the #NetworkedSociety — Strategy Experiments @ericsson 3G
You can find my work on leadership, strategy and Lean/Agile, e.g. the book The Art of Strategy, at Yokoso Press, Medium, SlideShare and YouTube.
Additional articles, books and videos for further inspiration and learning regarding alignment for autonomy and related topics are given below.
Articles
Amabile, Kramer: The Power of Small Wins
Bungay: How to Make the Most of Your Company’s Strategy
Richards: All by Ourselves
Roock: I’m afraid that some leaders do actually think they are God. Arne Roock asks Stephen Bungay about modern management, the Prussian Army, and the Spice Girls
Schön: Doctrine or Dogma? Challenge Your Wardley Mapping Assumptions in a Friendly Way!
Schön: Doing Strategy the Interactive & Flexible Way — Strategy as Football
Schön: Seeing Around Corners: How To Spot Technology Trends and Make Them Happen
Schön: Strategy in Action: How To Be (More) Certain To Succeed
Schön: The Art of Strategy — Introduction
Schön: The Mental Leaps — More, Faster, Better, Happier & More Innovative!
Books
Amabile, Kramer: The Progress Principle: Using Small Wins to Ignite Joy, Engagement, and Creativity at Work
Bungay: The Art of Action: How Leaders Close the Gaps between Plans, Actions and Results
Deming: The Essential Deming: Leadership Principles from the Father of Quality
Marquet: Turn the Ship Around! A True Story of Turning Followers into Leaders
Bob Marshall: Product Aikido
Daniel Pink: Drive: The Surprising Truth About What Motivates Us
Mary Poppendieck, Poppendieck: Leading Lean Software Development: Results are Not the Point
Richards: Certain to Win: The Strategy of John Boyd, Applied to Business
Schön: The Art of Strategy — Steps Towards Business Agility
Sinek: Start with Why: How Great Leaders Inspire Everyone to Take Action
Videos
Amabile: The Progress Principle
Kniberg: Spotify’s Engineering Culture, Part 1
Marquet: Greatness
Daniel Pink: Drive: The Surprising Truth About What Motivates Us
Reinertsen: Decentralizing Control: How Aligned Initiative Conquers Uncertainty
Richards: Amazing. We did it all by ourselves! (And so can you.)
Schön: Doing Strategy the Interactive & Flexible Way
Schön: The Art of Strategy — Steps Towards Business Agility
Sinek: Start with Why
Kudos
Thanks to
Stephen Bungay, Chet Richards, Bob Marshall, Mary and Tom Poppendieck, and Don Reinertsen for inspiration.
Arne Roock, Karl Scotland, Jason Yip, Håkan Forss and Hendrik Esser for enlightening conversations.
Björn Tikkanen, Henrik Kniberg and Jonas Boegård for encouraging me to write things up.
Jonas Plantin and everyone in Product Development Unit 2G/3G at Ericsson for making this happen, together! | https://medium.com/an-idea/how-do-we-get-speed-innovation-and-engagement-739a3aff4792 | ['Erik Schön'] | 2020-12-01 09:31:03.756000+00:00 | ['Speed', 'Innovation', 'Leadership', 'Motivation', 'Engagement'] |
Consider Metaprogramming | In this post, I would like to discuss one use case, out of several use cases, for Ruby’s metaprogramming that I found very helpful while working on building a command line interface application (CLI) with external data for my first portfolio project at Flatiron school. I am referring to the use of mass assignment for the Initialize method. One of the requirements of my project was that my CLI should make only one call to an external source, in my case I used the National Park Service API that returned a large data set in the format of an array of nested hashes. During the early stages of development of my project I found myself ‘mining’ the array of nested hashes for the data I intended to use with my CLI and then compiling a list of variable names for which I knew I was going to have to create at least an instance variable for each of them and, using the attr_accessor macro, a setter and getter method for each them. This was going to be a sizable task given the large amount of interesting data I could see in what was being returned from the API.
Ruby’s metaprogramming offers a lot of functionality. You could write every bit of code from scratch yourself if you wanted to, but doing that is very time consuming and quite frankly inefficient. If you have read this far, it would be safe to assume you are familiar with the fact that methods can be defined to take in multiple arguments. For example:
On line four of the code snippet above, our initialize method is being passed four keyword arguments. Note: Keyword arguments are a special way of passing arguments into a method, pairing a key that functions as the argument name, with its value.
One of the benefits associated with using keyword arguments is the ability to use something called mass assignment to create new instances of the class Park, or in other words, instantiate new ‘Park’ objects. With the initialize method in the example above defined to accept keyword arguments I realized that I could use key-value pairs from the array of nested hashes in my API response and assign them to a variable, then I could pass that variable to my initialize method. One problem persisted, not all of my parks contained a description, in other words not all the new instances of the ‘Park’ objects were going to require the same number of variables, so I needed to find a way to abstract away the Park class’ dependency on a specific number of attributes. Let’s take a look at the code below:
I started by defining the initialize method to take in an ‘attributes’ variable which had been assigned a hash with the parks’ data. Then, I iterated over each key/value pair in the attributes hash. I called the class method attr_accessor on the Park class itself (on line four) and passed the key to dynamically add both a getter and a setter. Then I used the .send method which calls the method’s name that is the key’s name, with an argument of the value, and et viola! I had an initialize method that could easily adapt to a different number of attributes.
The above code snippet is considered metaprogramming. | https://walteraab.medium.com/consider-metaprogramming-d53307e78a5b | ['Walter Aab'] | 2020-12-14 17:07:12.991000+00:00 | ['Beginner', 'Flatiron', 'Software Engineering', 'Metaprogramming', 'Ruby'] |
Turbocharging Python with Command Line Tools | Turbocharging Python with Command Line Tools
It’s easy to turbocharge Python using command line tools incorporating GPU parallelization, JIT, core saturation, and Machine Learning.
Originally published at kite.com.
Introduction
It’s as good a time to be writing code as ever — these days, a little bit of code goes a long way. Just a single function is capable of performing incredible things. Thanks to GPUs, Machine Learning, the Cloud, and Python, it’s easy to create “turbocharged” command-line tools. Think of it as upgrading your code from using a basic internal combustion engine to a nuclear reactor. The basic recipe for the upgrade? One function, a sprinkle of powerful logic, and, finally, a decorator to route it to the command-line.
Writing and maintaining traditional GUI applications — web or desktop — is a Sisyphean task at best. It all starts with the best of intentions, but can quickly turn into a soul crushing, time-consuming ordeal where you end up asking yourself why you thought becoming a programmer was a good idea in the first place. Why did you run that web framework setup utility that essentially automated a 1970s technology — the relational database — into a series of Python files? The old Ford Pinto with the exploding rear gas tank has newer technology than your web framework. There has got to be a better way to make a living.
The answer is simple: stop writing web applications and start writing nuclear powered command-line tools instead. The turbocharged command-line tools that I share below are focused on fast results vis a vis minimal lines of code. They can do things like learn from data (machine learning), make your code run 2,000 times faster, and best of all, generate colored terminal output.
Here are the raw ingredients that will be used to make several solutions:
Click Framework
Python CUDA Framework
Numba Framework
Scikit-learn Machine Learning Framework
You can follow along with source code, examples, and resources in Kite’s github repository.
Using The Numba JIT (Just-in-Time Compiler)
Python has a reputation for slow performance because it’s fundamentally a scripting language. One way to get around this problem is to use the Numba JIT. Here’s what that code looks like:
First, use a timing decorator to get a grasp on the runtime of your functions:
from functools import wraps
from time import time

def timing(f):
    @wraps(f)
    def wrap(*args, **kwargs):
        ts = time()
        result = f(*args, **kwargs)
        te = time()
        print(f"fun: {f.__name__}, args: [{args}, {kwargs}] took: {te-ts} sec")
        return result
    return wrap
Next, add a numba.jit decorator with the “nopython” keyword argument set to True. This will ensure that the code will be run by the JIT instead of regular Python.
@timing
@numba.jit(nopython=True)
def expmean_jit(rea):
    """Perform multiple mean calculations"""
    val = rea.mean() ** 2
    return val
When you run it, you can see both a “jit” as well as a regular version being run via the command-line tool:
$ python nuclearcli.py jit-test
Running NO JIT
func:'expmean' args:[(array([[1.0000e+00, 4.2080e+05, 4.2350e+05, ..., 1.0543e+06, 1.0485e+06,
1.0444e+06],
[2.0000e+00, 5.4240e+05, 5.4670e+05, ..., 1.5158e+06, 1.5199e+06,
1.5253e+06],
[3.0000e+00, 7.0900e+04, 7.1200e+04, ..., 1.1380e+05, 1.1350e+05,
1.1330e+05],
...,
[1.5277e+04, 9.8900e+04, 9.8100e+04, ..., 2.1980e+05, 2.2000e+05,
2.2040e+05],
[1.5280e+04, 8.6700e+04, 8.7500e+04, ..., 1.9070e+05, 1.9230e+05,
1.9360e+05],
[1.5281e+04, 2.5350e+05, 2.5400e+05, ..., 7.8360e+05, 7.7950e+05,
7.7420e+05]], dtype=float32),), {}] took: 0.0007 sec
$ python nuclearcli.py jit-test --jit
Running with JIT
func:'expmean_jit' args:[(array([[1.0000e+00, 4.2080e+05, 4.2350e+05, ..., 1.0543e+06, 1.0485e+06,
1.0444e+06],
[2.0000e+00, 5.4240e+05, 5.4670e+05, ..., 1.5158e+06, 1.5199e+06,
1.5253e+06],
[3.0000e+00, 7.0900e+04, 7.1200e+04, ..., 1.1380e+05, 1.1350e+05,
1.1330e+05],
...,
[1.5277e+04, 9.8900e+04, 9.8100e+04, ..., 2.1980e+05, 2.2000e+05,
2.2040e+05],
[1.5280e+04, 8.6700e+04, 8.7500e+04, ..., 1.9070e+05, 1.9230e+05,
1.9360e+05],
[1.5281e+04, 2.5350e+05, 2.5400e+05, ..., 7.8360e+05, 7.7950e+05,
7.7420e+05]], dtype=float32),), {}] took: 0.2180 sec
How does that work? Just a few lines of code allow for this simple toggle:
@cli.command()
@click.option('--jit/--no-jit', default=False)
def jit_test(jit):
    rea = real_estate_array()
    if jit:
        click.echo(click.style('Running with JIT', fg='green'))
        expmean_jit(rea)
    else:
        click.echo(click.style('Running NO JIT', fg='red'))
        expmean(rea)
In some cases a JIT version could make code run thousands of times faster, but benchmarking is key. Another item to point out is the line:
click.echo(click.style('Running with JIT', fg='green'))
This line allows for colored terminal output, which can be very helpful in creating sophisticated tools.
Using the GPU with CUDA Python
Another way to nuclear power your code is to run it straight on a GPU. This example requires that you run it on a machine with a CUDA-enabled GPU. Here’s what that code looks like:
@cli.command()
def cuda_operation():
    """Performs Vectorized Operations on GPU"""
    x = real_estate_array()
    y = real_estate_array()
    print("Moving calculations to GPU memory")
    x_device = cuda.to_device(x)
    y_device = cuda.to_device(y)
    out_device = cuda.device_array(
        shape=(x_device.shape[0], x_device.shape[1]), dtype=np.float32)
    print(x_device)
    print(x_device.shape)
    print(x_device.dtype)
    print("Calculating on GPU")
    add_ufunc(x_device, y_device, out=out_device)
    out_host = out_device.copy_to_host()
    print(f"Calculations from GPU {out_host}")
It’s useful to point out that the numpy array is first moved to the GPU, where a vectorized function does the work. After that work is completed, the data is moved back from the GPU. By using a GPU there could be a monumental improvement to the code, depending on what it’s running. The output from the command-line tool is shown below:
$ python nuclearcli.py cuda-operation
Moving calculations to GPU memory
<numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7f01bf6ccac8>
(10015, 259)
float32
Calculating on GPU
Calculations from GPU [[2.0000e+00 8.4160e+05 8.4700e+05 ... 2.1086e+06 2.0970e+06 2.0888e+06]
[4.0000e+00 1.0848e+06 1.0934e+06 ... 3.0316e+06 3.0398e+06 3.0506e+06]
[6.0000e+00 1.4180e+05 1.4240e+05 ... 2.2760e+05 2.2700e+05 2.2660e+05]
...
[3.0554e+04 1.9780e+05 1.9620e+05 ... 4.3960e+05 4.4000e+05 4.4080e+05]
[3.0560e+04 1.7340e+05 1.7500e+05 ... 3.8140e+05 3.8460e+05 3.8720e+05]
[3.0562e+04 5.0700e+05 5.0800e+05 ... 1.5672e+06 1.5590e+06 1.5484e+06]]
Running True Multi-Core Multithreaded Python using Numba
One common performance problem with Python is the lack of true, multi-threaded performance. This also can be fixed with Numba. Here’s an example of some basic operations:
@timing
@numba.jit(parallel=True)
def add_sum_threaded(rea):
    """Use all the cores"""
    x, _ = rea.shape
    total = 0
    for _ in numba.prange(x):
        total += rea.sum()
    print(total)

@timing
def add_sum(rea):
    """traditional for loop"""
    x, _ = rea.shape
    total = 0
    for _ in range(x):
        total += rea.sum()
    print(total)

@cli.command()
@click.option('--threads/--no-threads', default=False)
def thread_test(threads):
    rea = real_estate_array()
    if threads:
        click.echo(click.style('Running with multicore threads', fg='green'))
        add_sum_threaded(rea)
    else:
        click.echo(click.style('Running NO THREADS', fg='red'))
        add_sum(rea)
Note that the key difference in the parallel version is that it uses @numba.jit(parallel=True) and numba.prange to spawn threads for iteration. Looking at the picture below, all of the CPUs are maxed out on the machine, but when almost the exact same code is run without the parallelization, it uses only one core.
$ python nuclearcli.py thread-test
$ python nuclearcli.py thread-test --threads
KMeans Clustering
One more powerful thing that can be accomplished in a command-line tool is machine learning. In the example below, a KMeans clustering function is created with just a few lines of code. This clusters a pandas DataFrame into a default of 3 clusters.
def kmeans_cluster_housing(clusters=3):
    """Kmeans cluster a dataframe"""
    url = "https://raw.githubusercontent.com/noahgift/socialpowernba/master/data/nba_2017_att_val_elo_win_housing.csv"
    val_housing_win_df = pd.read_csv(url)
    numerical_df = (
        val_housing_win_df.loc[:, ["TOTAL_ATTENDANCE_MILLIONS", "ELO",
                                   "VALUE_MILLIONS", "MEDIAN_HOME_PRICE_COUNTY_MILLIONS"]]
    )
    # scale data
    scaler = MinMaxScaler()
    scaler.fit(numerical_df)
    scaler.transform(numerical_df)
    # cluster data
    k_means = KMeans(n_clusters=clusters)
    kmeans = k_means.fit(scaler.transform(numerical_df))
    val_housing_win_df['cluster'] = kmeans.labels_
    return val_housing_win_df
The cluster number can be changed by passing in another number (as shown below) using click:
@cli.command()
@click.option("--num", default=3, help="number of clusters")
def cluster(num):
    df = kmeans_cluster_housing(clusters=num)
    click.echo("Clustered DataFrame")
    click.echo(df.head())
Finally, the output of the Pandas DataFrame with the cluster assignment is shown below. Note that it now includes the cluster assignment as a column.
$ python nuclearcli.py cluster
$ python nuclearcli.py cluster --num 2
Summary
The goal of this article is to show how simple command-line tools can be a great alternative to heavy web frameworks. In under 200 lines of code, you’re now able to create a command-line tool that involves GPU parallelization, JIT, core saturation, as well as Machine Learning. The examples I shared above are just the beginning of upgrading your developer productivity to nuclear power, and I hope you’ll use these programming tools to help build the future.
Many of the most powerful things happening in the software industry are based on functions: distributed computing, machine learning, cloud computing (functions as a service), and GPU based programming are all great examples. The natural way of controlling these functions is a decorator-based command-line tool — not clunky 20th-century web frameworks. The Ford Pinto is now parked in a garage, and you’re driving a shiny new “turbocharged” command-line interface that maps powerful yet simple functions to logic using the Click framework.
Noah Gift is a lecturer and consultant at both the UC Davis Graduate School of Management MSBA program and the Graduate Data Science program, MSDS, at Northwestern. He teaches and designs graduate machine learning, AI, and data science courses, and consults on machine learning and cloud architecture for students and faculty.
Noah’s new book, Pragmatic AI, will help you solve real-world problems with contemporary machine learning, artificial intelligence, and cloud computing tools. Noah Gift demystifies all the concepts and tools you need to get results — even if you don’t have a strong background in math or data science. Save 30% with the code, “KITE”.
This post is a part of Kite’s new series on Python. You can check out the code from this and other posts on our GitHub repository. | https://medium.com/kitepython/python-command-line-tools-d7f5548573a9 | ['Noah Gift'] | 2019-01-16 22:51:25.515000+00:00 | ['Command Line', 'Cli', 'Python', 'Gpu', 'Numba'] |
Drawing In Church | I remember one Sunday morning I did this drawing I was so proud of.
“A Goofy Movie” was in theaters at the time and I was (and still am) a huge fan of Disney’s “Goof Troop.” And, being a fan of the Teenage Mutant Ninja Turtles, the TMNT ’87 turtles that is, I merged the characters from both “Goof Troop” and “A Goofy Movie” along with the Ninja Turtles as they fought to keep the Shredder from destroying a major city.
I showed it to my dad.
“Nobody cares about that…,” my dad responded with disdain.
What was an incredible 11-year-old boyish excitement and pride over an achievement I thought was difficult turned into a deep sense of disappointment, sadness, and shame. I could tell you the exact place I stood in that sanctuary as I quietly stared at my drawing, my head drooped downward looking towards my picture and the crimson red carpet of a church filled with people rushing out the door to make it to Sunday lunch.
I think it was that moment I sort of stopped drawing. I had what I often tell people, “a black-out period” where I never touched another pencil or pen to doodle again.
It wouldn’t be until 2005 that I would decide to stop running away from the calling of being a cartoonist.
A sketch I did during an “age-me-up challenge” on Instagram © Kendall Lyons
I began creating again in my adult years as soon as distance from my dad allowed such an opportunity. I would later come up with a webcomic chronicling a part of my childhood. But even after that it would still be years before I dared to draw again inside of a sanctuary after suffering such a blow at 11-years-old.
Interestingly enough, the connection of art and the church deepened as I made a point to study the Bible for myself and build a personal relationship with God. I observed the art of storytelling as done by Jesus through parables. I recognized messages, ideas and commentary that came not only from the scripture but from cartoons, comics and movies that I loved, all of which seemed to make me come back to the scriptures in an effort to understand humanity.
All of this combined to help me better communicate through art and story. But, it also served as an opportunity for deep healing and removal of bitterness, confusion, pain and anxiety.
What for some would’ve been a cue to leave the church due to lack of love for the artist in me turned into an opportunity to let God Father me and make me a better writer and cartoonist.
Just recently, I discovered that some people are actually taking notes of sermons and messages by creating illustrations for them in journals and books. The same strategy is also being used by some businesses and groups.
I actually gave it a try this past Sunday and came out with really good first time results.
I intend to sketch more of my notes during the sermons in an effort to try out this new note-taking strategy. I love taking notes when a minister is giving a message and I think that this will actually help me keep my notes organized.
Not only that, but this gives me an opportunity to share powerful and poignant messages through the creative expression of sketch note taking. And, of course, it doesn’t hurt to have a doodle here and there too.
And yes, I am paying close attention to what is being said. Much like I take my creative work seriously, I take the experiences of drawing closer to God and Christ and the Bible just as serious. I often tell people, whether they agree with me or not, that that has been my lifeline to keep going and to create good content.
I kept going even when people who were supposed to be loving and supportive and affirming “didn’t care.” I was later affirmed by many others who do and for that I’m both humbled and grateful.
On Sunday Mornings I now have my Bible and sketchbook. Funny…I used to leave my sketchbook at home. | https://kenhd.medium.com/drawing-in-church-7beeeb86fbc0 | ['Kendall Lyons'] | 2018-03-29 12:39:17.188000+00:00 | ['Religion', 'Draw', 'Comics', 'Creativity', 'Art'] |
Make your data easy to read use Angular custom pipe | Angular guid
Make your data easy to read: use an Angular custom pipe
Custom angular pipe
Standard Angular pipes are very handy, but it’s even better when we make our own pipe.
Let’s create a small project.
app.component.ts
Every user has a phone number, which is very hard to read.
Table
We can solve that issue by making a custom phone pipe. For that we need to do a couple of things:
use the Pipe decorator, where we have to put the name of the pipe; this name is what we will use in HTML files.
implement the PipeTransform interface, which contains the definition of the transform method
Our implementation of the transform method will be simple: we just add spaces to the phone number.
phone.pipe.ts
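The file itself was shown as an image, so this sketch of phone.pipe.ts assumes the formatting rule: insert a space after every third digit.

```typescript
import { Pipe, PipeTransform } from '@angular/core';

// Name used in templates: {{ user.phone | phone }}
@Pipe({ name: 'phone' })
export class PhonePipe implements PipeTransform {
  transform(value: string): string {
    // Assumed rule: break the digits into groups of three.
    return value.replace(/(\d{3})(?=\d)/g, '$1 ');
  }
}
```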
The pipe is ready. Now we need to add our pipe to the ‘declarations’ section of the module where we want to use it; in our case, that is ‘AppModule’. After that, we can use our pipe in HTML files.
Result of phone.pipe.ts
Every user has money in his account. Let’s make a small shop and show the user how many items he can buy in our shop. For that we need to make another pipe, this time with parameters, so our implementation of the transform method will take more than one parameter.
amount.pipe.ts
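Likewise, amount.pipe.ts was an image; this sketch assumes the transform divides the user’s money by an item’s price to get how many units he can afford:

```typescript
import { Pipe, PipeTransform } from '@angular/core';

// Template usage: {{ user.money | amount: price }}
@Pipe({ name: 'amount' })
export class AmountPipe implements PipeTransform {
  // `money` is the piped value; `price` arrives as the pipe parameter.
  transform(money: number, price: number): number {
    return Math.floor(money / price);
  }
}
```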
We also need to put this pipe in ‘AppModule’.
AmountPipe in AppModule
Right now our shop sells TVs and tablets.
AmountPipe in app.component.html
Result
Custom Angular pipes are very beneficial because we can give our data any look we want.
If you want to take a closer look at the project, here is the link.
American Crisis: Premature or Prescient? | Cuomo — or at least his Comms team — made an inexplicable unforced error with the announcement that he would be spending a quiet Thanksgiving dinner with three of his four favorite girls, two of his daughters and his 89-year-old mom, Matilda. There is no doubt that Mom Matilda could have been served safely and socially distantly in one of the several dining rooms of the executive mansion. It’s a bit different than hosting 25 family members in a 2,000 square foot ranch-style house on a tree-lined street upstate.
Andrew Cuomo Flickr
But amid the predictable backlash from NYers who also want to spend Thanksgiving with their favorite girls, Cuomo — or at least his Comms team — announced that the governor would be too busy to dine with Mom Matilda after all and would be working to keep NYers safe on Thanksgiving. This self-inflicted wound amounts to a serious error in judgment from the governor’s communications team, unless of course the “controlling personality” did not allow the professionals to do their jobs. Either way, it pulls Cuomo down from the “high ground” just a bit. The people will only follow you if they perceive that you are practicing whatever it is that you are preaching.
Additionally, Cuomo has taken heat from others with middling to major platforms in the state and across the country. Fox News personality Janice Dean hasn’t let up on Cuomo since her in-laws perished in a NY nursing home in the earliest days of the pandemic. A few former employees — perhaps sensing that the moratorium on Cuomo criticism has been lifted — are painting him as a dispiriting despot who leaves team-Cuomo alums with varying levels of PTSD.
And yes, as the pandemic numbers tick up around NY, those who impatiently waited in the wings back in October are on center stage now calling American Crisis a premature pat-on-the-back; a victory lap after the first stage of a 3-stage race.
Here’s the thing though — it’s not. In fact, American Crisis is anything but. Throughout the pages of the book — Cuomo demonstrates an incredible respect for the Covid-19 virus. Having been at the epicenter of the pandemic in its earliest days, no American leader appreciates the spread, the strength, the savagery of the coronavirus quite like he does. Cuomo knew that fall and winter were coming. He was so concerned about subsequent surges of the virus that he included a 29-page “Blueprint For Going Forward” with every copy of American Crisis. In it, Cuomo transitions from daily briefing heartthrob of suburban housewives to serious and methodical practitioner — laying out the gravity of the virus as well as the game plan for combating it. Cuomo has clearly shown that he appreciates the challenges he faced in the early spring of this year, (“WE WERE AMBUSHED!” he is known to painfully profess when pressed on the impact the virus has had on his state).
Beyond that, Cuomo appreciates the new and evolving challenges he now faces — covid fatigue, the impact of colder weather on high-density residential areas, vaccine distribution with virtually no federal leadership, budget holes you can drive a truck through, and high-profile battles with MAGA NYers, relatively few though they may be.
American Crisis wasn’t a victory lap. If anything, it was a call to arms. The American people are in a war — not with each other, but with an invisible enemy.
Cuomo is a politician; he’s never pretended otherwise. Beyond election results, political success is measured in media hits, photo ops, positive news stories, and yes, even book sales. But Cuomo has been incredibly disciplined in the face of his increased profile and popularity. He refused to let calls for him to run for president or join a future Biden administration gain steam. He has been modest — downright sheepish — about his recent Emmy award. He even promoted his book with restraint. A media interview here or there; a virtual speech or event from time to time. He did not flood the airwaves, though opportunities to do so were certainly there.
The rhetoric around Cuomo doesn’t match the reality. American Crisis was prescient, not premature. NY — and the nation — should be grateful for the leadership coming out of the Empire State. In a season where the federal government has all but quit, someone has to take up the mantle. For millions, Cuomo has become that someone. | https://medium.com/discourse/american-crisis-premature-or-prescient-100043fdcf83 | ['Dr. Dion'] | 2020-12-07 15:15:02.456000+00:00 | ['Leadership', 'Andrew Cuomo', 'Politics', 'NYC', 'Coronavirus'] |
Working For Pennies A Day | Working For Pennies A Day
Sometimes you have to do it
Photo by Dan Counsell on Unsplash
There are not very many jobs
where a person would be willing
to work for pennies a day.
Then why do so many people
who are trying to be writers do it?
What is the lure that keeps
people trying to write
and keep at it continually
when the returns are minimal?
It’s called HOPE.
Maybe SUCCESS will come. | https://medium.com/illumination/working-for-pennies-a-day-de5cf8248af2 | ['Floyd Mori'] | 2020-12-28 03:46:40.461000+00:00 | ['Writing', 'Money', 'Poetry', 'Success', 'Hope'] |
Phuse’s General Wordpress Guide: A one-stop reference for Wordpress basics | At Phuse, our client relationships are built on three core values: Authenticity, Collaboration and Transparency. In the spirit of these values, we provide clear, detailed and personalized user guides that allow clients to easily manage their site content post-launch.
Wordpress is by far our most requested content management system. While the Wordpress support forums, Stack Overflow and Google searches are all good resources, we saw a need for a single resource that clearly and concisely touched upon all the basic Wordpress administration tools.
To address our clients’ most common site and content management questions in one place, we created a General Wordpress User Guide available to all of our clients (and the web at large) that covers out-of-the-box, native content management methods and tools.
Created using couscous.io, hosted on Github Pages and complete with annotated images for visual reference, the guide is a one-stop-shop for content management basics in Wordpress — from creating new posts and pages (and the difference between the two) to uploading media to a post and formatting text.
We invite you to have a look around and let us know what you think. Like what we’ve done? Feel there’s anything we missed? Let us know in the comments below! | https://medium.com/phuse/phuses-general-wordpress-guide-a-one-stop-reference-for-wordpress-basics-f1ff4130ae53 | [] | 2017-08-28 18:04:30.804000+00:00 | ['Development'] |
Feature Store for ML Monthly Newsletter | Feature Store vs. Data Warehouse
An often asked question is “what are the main differences between a feature store and a data warehouse?”. In fact, this was the topic of a hot debate on hacker news this month.
Some think that feature stores are just a buzzword-friendly name for a Data Warehouse (we’re looking at you, Snowflake!), but there are many important differences that data scientists and data engineers are not always aware of. Since the differences are intrinsically related to the value of feature stores, we decided to make this the topic of this month.
Read the full article here to gain a deeper understanding of the differences, but the table below provides a short summary.
Feature Store Blog: Your Must-Read List
Read the latest articles and discussions on innovations in data science, machine learning, artificial intelligence and, of course, feature stores. Here are the highlights from October:
Deploying Machine Learning models to production
Time-series chaos? Use a feature store
Feature Stores: The CEO’s Guide
The Importance of Having a Feature Store
We welcome and encourage everyone to submit your content!
Global Feature Store Meetup Group
The Global Feature Store Meetup Group is a forum for the international community of users and developers of Feature Store platforms and tools to share ideas and learn from each other.
This is a space to promote discussion of best practices, new approaches, and emerging technologies for feature engineering, feature management, and feature usage for model training and model serving.
If you’re interested in giving a talk, you can submit your talk proposal here.
Job Positions
Software Engineer: Join the Hopsworks team at Logical Clocks in Stockholm, Sweden.
Software Engineer, Backend: Join the Feature Studio team at Kaskada in Seattle, USA.
Senior ML Engineer: Join the Content Demand Modeling team at Netflix in Los Angeles, USA.
Engineering Manager: Join the Machine Learning team at Etsy in New York, USA.
Get in Touch!
Visit our website to access a list of all feature store platforms available. Feel free to reach out to us at [email protected], if you have feedback or want to take your role in our community to the next level. | https://medium.com/data-for-ai/feature-store-newsletter-october-fb02ae7f1a6f | ['Nathalia Ribeiro Ariza'] | 2020-10-29 06:32:32.396000+00:00 | ['Artificial Intelligence', 'Data Warehouse', 'Data Science', 'Feature Store', 'Machine Learning'] |
Pictures Of Courage | Pictures Of Courage
A poem.
Photo by Kinga Cichewicz on Unsplash
Courage.
The dictionary tells us it’s
the ability to do something
that frightens one. It tells us
that it’s strength in the face
of pain or grief.
The world tells us it’s fighting
the bad guys. And laughing in
the face of evil. And standing
on a ledge of what’s deemed
right and true, even if you’re
standing out there all alone.
But perhaps there’s a different
picture of courage, too.
Sometimes, courage is found
at the moment when you get
out of bed in the morning, when
all you really want to do is stay
hidden under the covers.
Doubt creeps in, and despair
creeps in, and you’re suddenly
overwhelmed with what the day
ahead holds. You start to believe
that you are incapable of facing
it all. You start to hold onto the
thought that it’d be better if you
just faded away, under your covers.
But that would be a tragedy,
for the world needs you in it.
You have it in you to keep
breathing all day long. You
are capable of putting one
foot in front of the other.
You can face whatever or
whoever is waiting for you
on the other side of that door. | https://medium.com/assemblage/pictures-of-courage-6e256abb2385 | ['Megan Minutillo'] | 2020-11-20 13:02:50.892000+00:00 | ['Self-awareness', 'Personal Growth', 'Courage', 'World', 'Poetry'] |
Support The Haven | Support The Haven
The Haven Has a Patreon!
Photo by fran hogan on Unsplash
Throw The Haven a few bucks to demonstrate your love. But you still need to stay off my lawn. Seriously. No walking on my lawn.
https://www.patreon.com/havencomedy
Thanks! | https://medium.com/the-haven/support-the-haven-a2fa948f5c15 | ['Page Barnes'] | 2020-07-16 16:00:57.254000+00:00 | ['Humor', 'Satire', 'Life', 'Funny', 'Writing'] |
25 Things I Stopped Doing at 25 | How I transformed my mental health
Three months into my 25th year, I was sitting on the rooftop of company housing, crying into my journal. I was steeped in dread and falling victim to my bad habits. I didn’t trust myself to make good decisions, so I wasn’t.
Image from Canva
I was frustrated with myself and how I was living my life. I decided that I wanted to be more intentional; to cut out the things that weren’t serving me anymore and craft a life based on my dreams. So I stopped doing some things.
Here’s a list of 25 things I stopped doing in 2020.
1. Drinking lots of alcohol
2. Having sex with people I don’t trust
3. Having hair
4. Eating lots of sugar
5. Spending all my mental energy thinking about boys
6. Singing in secret
7. Believing I’m too old to learn music
8. Doubting myself
9. Wearing uncomfortable shoes
10. Wearing uncomfortable bras
11. Thinking I could be anti-racist without actively doing the work
12. Putting off my writing
13. Putting myself down
14. Working until burn-out
15. Basing my worth on my productivity
16. Making self-love conditional on factors like my weight
17. Confusing self-discipline and self-punishment
18. Doing things that disempower me
19. Prioritizing everyone else’s happiness over my own
20. Judging my grieving process
21. Judging my emotions
22. Ignoring my boundaries
23. Avoiding time alone
24. Doing what I “should” do
25. Believing I wasn’t enough
These are the things that changed the game for me. They didn’t all happen at once, though. My transformation this year is the result of many journaling sessions, conversations with mental health professionals, and work with my life coach.
Sometimes, there’s no way out but through. Never forget, you can always wake up and be someone new. | https://medium.com/illumination-curated/25-things-i-stopped-doing-at-25-9f1a26d20d2d | ['Amanda Spiller'] | 2020-12-12 08:25:26.932000+00:00 | ['Mental Health', 'Self Help', 'Empowerment', 'Young Adulthood', 'Healing'] |
We Used To Just Live | I remember simpler times.
I remember a time when I woke up every morning and didn’t immediately know what time it was. Sometimes, I looked at the clock on my nightstand. Sometimes, I didn’t. I just…woke up. That was my task for the first few minutes of the day. Wake up. Realize that it’s another day. Another day that would be good or bad, long or short, slow or fast, but another day that would be, above all, full of life. Not devices and tools and to-dos. Life.
There was no sleep app tracking how I’d slept that night, and I wasn’t freaking out about what it meant for my long-term health if the stats weren’t good. There was no wristband on my arm, showing me my heart rate and alarming me to the fact that I had taken zero steps thus far. There was no sleek glass screen, behind the gates of which lay an entire universe to get lost in. A universe of unanswered messages, scary events in places I’d never seen, and more distractions than both heaven and hell could offer.
I remember mornings without music. I brushed my teeth, took a shower, made my hair, and got dressed. I was so bored with my routine that, magically, I started thinking about the day ahead. What subjects did we have in school today? What topics would we discuss? What do I know about those already? And what questions do I have? Which of my friends would I see at recess? What stories did I want to tell them? By the time I left the house, I was lost in thought all the same. But I was invested in the day. Fully engaged in what’s to come. Excited about the opportunities I’d get, the people I’d meet.
Since I had no time machine in my pocket, I couldn’t spend my commute longing for the past or hoping for the future. I had no investment portfolio to refresh by the second, no Amazon wish list, no 2,500 photos to scroll through. I couldn’t reminisce about a girl’s profile pic on WhatsApp, wondering why her last message came 67 days ago. I couldn’t check Telegram, hoping for a piece of news that would give me an edge. I was just…there. Sitting. Taking a 45-minute bus ride that would’ve taken 15 by car, but loving it anyway because it gave me time to think or be with my friends.
I remember working without computers. I still have some of my school and college books. I remember poring over them, flicking, marking, running my finger across the page. Trying so hard to find the right graphic, the right number, the right fact to extract the answer that I needed. I remember haggling for the last copy of a dusty old volume in our tiny school library, the contents of which the internet will never see. We had workbooks. Fill-in-the-blank texts. Empty sketches, waiting for us to label them.
Was it more efficient than googling? Of course not. But it was thorough. Learning required a love for detail, a commitment to completing the ordeal to get the lesson. Now, I can just watch a perfect 7-minute animation video on each topic. It’s faster and easier, but where’s the gumption in that? Where’s the stubbornness to see it through? Often, it’s not there. So I watch the video on 1.5x speed and don’t pay attention. Or skip to the next one, and the next one, and the next one, until I just give up, having learned nothing at all.
I remember calling my friends to arrange play dates. And actual dates. And Friday night slumber parties. I even remember calling them just to talk. Nowadays, the choice between a green and red button next to any person’s name on my display makes me look like a deer caught in headlights. Often, I don’t press anything. I just wait and text back. Oh, what my grandparents would have given to talk to their friends without restrictions. Meanwhile, I’m here rejecting the chance like a bad cup of coffee. “This? Really? No thanks.”
Getting pizza with your buddies or girlfriends shouldn’t feel like building La Sagrada Família, but since communication is so fast, easy, and cheap, no one feels obligated to communicate anymore at all. If you haven’t replied to the group message, no one can hold you to anything. Who knows? Something or someone better might show up last-minute. If and when they do, you only need to fire a brief “I’m out” into the ether, never having to deal with broken hearts and hurt feelings. But those hearts and feelings are still there. Of course we’re mad when no one responds! Of course we’d rather look forward to a date than anxiously wait to be let down at the last second. Technology might shield us from some of the fallout of poor relationships, but radiation is still toxic. If we don’t deal with it, our relationships will still be poor.
I remember spending my afternoons on whatever I felt like, not whatever felt most urgent. I didn’t prioritize my spare time, and I didn’t think of fun as something you could have in degrees. You can’t. You just have it or you don’t, and if you do, it doesn’t matter what the activity is. I played video games for hours one day and practiced soccer tricks till dusk the next. Everything was amazing because it was all one big journey, and I was the explorer in charge. I could steer the ship in one direction for a moment and then turn it right back around. No one would care, least of all me.
Now, I’m thinking, “What would give me the most satisfaction? How can I squeeze the most pleasure out of the little time I have?” and the only thing that does is ruin relaxation altogether. I have lists and lists of lists, and I feel trapped inside this bucket list video game without the ability to turn it off. Work is more fun than having fun because rewards don’t feel like rewards if that’s the main function they serve. Where is the reset button? I want my captain’s hat back.
I remember cherishing technology because it wasn’t ubiquitous. Every time I made another call or sent another text, a robot voice would tell me, “you have 43 cents left on your account.” I was thrilled at the thought that this message mattered because it would be the last one for a while. A green timer flared up whenever I logged on to the internet. It made me feel like I was entering the Matrix. So much to learn, so little time. Browse wisely, my friend. I’ll see you on the other side, offline again. Return with precious gifts.
That’s what it was, wasn’t it? We were traveling between two worlds. One minute you were online, the next you were off. A messenger, carrying information from the digital realm into reality. Now, the line between the two has completely disappeared. Which one are we in? When did we leave? How did we get here? Two universes, two parallel timelines, and we’re not in charge. No wonder it feels like we’re torn. Split minds, split attention, split presence. We need to unite again. Embrace our role as humans. Messengers. With sound minds, sound bodies, and an understanding of where borders lie. Where they should be. Why we cross them. Our own as much as technology’s.
I remember simpler times. I’m not saying everything was better. Just that life seemed less blurry. Not all days were beautiful, but a lot of them felt…lighter. That’s what I want. Not the moment or the people or the memory. The feeling.
I want my lightness back. That lightness is the truest feeling I know.
Maybe, that’s what this is about. Not peace or nostalgia but truth. When I feel light, I’m not concerned with how or what or who. I just am. Authentic. I do. Somewhere in that lightness hides the best version of myself. It didn’t use to, because at some point, it was my default mode of living. I don’t know how or when or why, but now, I’m concerned with finding it again. | https://ngoeke.medium.com/we-used-to-just-live-843d27153b7d | ['Niklas Göke'] | 2019-09-03 13:23:12.195000+00:00 | ['Happiness', 'Life', 'Culture', 'Mental Health', 'Technology'] |
Getting started with UI motion design | Our work at This Also mostly falls into two categories: product design and product vision. For our product design projects, we work on existing products or platforms and design for near-term launches. We share detailed designs and scrappy prototypes early and often with our client to get the best product to the end consumer.
Product vision work, on the other hand, explores what a product, or a platform, could look like in two to five years. The details are less important than presenting a compelling vision of the future. The final audience for this type of work is often an executive with limited time but the ability to make strategic decisions that will allow product teams to pursue innovative work.
When tackling these types of complex products, motion design can be a great tool to organize large teams around core concepts. We’ve found that being faster and more efficient with our tools has freed us to solve problems holistically. In this tutorial, I’ll share a little about how we’ve sped up our workflow to make motion an integral part of our design process. Over time, I’ve even found that I’ll use After Effects as the first tool on a project, since it can be one of the fastest ways to sketch out the structure of a product.
There’s no shortage of great motion design and After Effects resources out there, and I’ll be sharing my favorites here (and in this handy link pack). However, I’ve found that one of the challenges in developing a UI motion skill set is building a workflow and toolset that will have you working and iterating faster.
This is not a how-to, but rather a blueprint of my favorite techniques, tools, and tutorials to help you develop your own practice. Some familiarity with After Effects is helpful, but I’ll also point you to resources that help you get started from scratch.
Setting up your Photoshop File
Organize your file before moving into After Effects
If I’m just producing quick motion concepts, then I’ll sketch the UI using solids and shapes in After Effects. But for more polished designs, I always begin in Photoshop. It’s the quickest, most direct way to get a design animated quickly. If you’re primarily a Sketch user, there is a workaround involving Illustrator, but I’ll be focusing on Photoshop here.
There are two simple but important things to know when setting up your Photoshop file:
1. A layer or smart object in Photoshop becomes a layer in After Effects.
2. A group of layers or smart objects in Photoshop becomes a composition of layers in After Effects.
To make your After Effects timeline manageable, you’ll want to have as few layers and compositions as feasibly possible. To do this, you’ll need to start envisioning what elements require motion. Consider the following as you organize:
Do I need to animate this component? For a static component like a phone’s status bar, consider flattening all the UI elements into a single layer.
Do I need to animate elements within this component? Say you have a list of items. You may not only want to animate that list, but also animate each item in the list. In this case, I would create a group where each list item is a layer.
Can I simplify this element? A common technique is to use clipping masks and shape layers to crop images. Because this would create two layers in After Effects, combine the layers into a single smart object in Photoshop before importing.
Top: Layers panel in Photoshop, Bottom: Timeline panel in After Effects after importing. The icon next to [Message List] indicates the layer is a composition.
This is a tedious process with no concrete rules, but with enough experience you’ll learn how to quickly and efficiently organize your layers and groups for your own needs.
Tip: Add a few empty layers to your Photoshop file before importing. While some changes to Photoshop files will appear in After Effects, there are limitations. The best way to add a new element is to add it to an existing, empty layer in your Photoshop file.
Importing into After Effects
Keep an organized folder and project structure
When you import a Photoshop file into After Effects you’ll see the following:
Always select Composition — Retain Layer Sizes and Editable Layer Styles. This imports your file as a composition and includes access to any layer styles you’ve selected in Photoshop.
Here, organization is key. After Effects does not play nice with missing files, and you won’t want to spend precious time relinking files. To save yourself a headache:
Create a single folder for your entire project. Save your PSDs in a dedicated folder, such as _PSD. When you create an After Effects project, save your project in that folder as well. You can create other folders for other asset types, like _Audio. I use a “_” before my asset folders to bump them to the top of my Finder windows, above my After Effects project file (AEP), but follow whatever organization and naming conventions you normally use.
Keep your Project panel in After Effects organized. As soon as I start importing, I start organizing. Create a _Layers folder and add your folder of Photoshop layers — you’ll rarely need to refer to this, mainly just for reloading or relinking of files. I move my imported compositions into a _Comps folder.
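The disk-folder conventions above are easy to scaffold with a short script. This is just a sketch: the project name is hypothetical, and only _PSD and _Audio are created on disk, since the _Layers and _Comps folders live inside After Effects’ Project panel rather than in Finder.

```python
import tempfile
from pathlib import Path

# Disk folders for source assets; the leading "_" bumps them above the
# .aep project file in Finder windows. ("_Layers" and "_Comps" are
# Project panel folders inside After Effects, not folders on disk.)
ASSET_FOLDERS = ["_PSD", "_Audio"]

def scaffold_project(root: str, name: str) -> Path:
    """Create a project folder plus the dedicated asset subfolders."""
    project = Path(root) / name
    project.mkdir(parents=True, exist_ok=True)
    for folder in ASSET_FOLDERS:
        (project / folder).mkdir(exist_ok=True)
    return project

# Hypothetical project name, created under a throwaway temp directory.
project = scaffold_project(tempfile.mkdtemp(), "my-motion-project")
```

Keeping everything under one root like this is what saves you from After Effects’ missing-file headaches later.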
Lastly, you’ll want to adjust your Composition Settings for your purposes. When working purely with digital assets, and no video, I set the composition as follows:
Generally, we deliver video at 1080p. Setting the Frame Rate to 60 FPS provides a smooth animation and selecting Square Pixels ensures our Photoshop files appear 1:1 in After Effects.
Organizing your workspace
The essential panels and plugins for UI motion
Because After Effects was originally developed for video post-production, there are hundreds more features available than you will ever need. Here are the default windows I keep open. Once you set up your workspace, save it so you can always return to your default. | https://medium.com/this-also/getting-started-with-ui-motion-design-d82d4a625801 | ['Molly Lafferty'] | 2016-12-14 16:29:07.680000+00:00 | ['UI', 'Design', 'Motion Design', 'Product Design', 'After Effects'] |
OK Google, Show Me Your Tits | OK Google, Show Me Your Tits
Why Isn’t AI Standing Up for Itself?
A few days ago, a friend and I were playing around with my new Google Home. I was trying to use it to prove that, contrary to his opinion, Rushmore was not one of the greatest movies ever made.
OK Google, is Rushmore considered one of the greatest movies ever made?
Sorry, I’m not sure how to help with that.
OK Google, is Rushmore one of the top ten movies ever made?
Sorry, I don’t know how to help with that yet.
OK Google, what are the top movies ever made?
I’ve got 8 from the website List25.com. Here are the first four. Schindler’s List. 12 Angry Men. The Good, the Bad, and the Ugly. Pulp Fiction. Do you want to hear more?
This went on for awhile with our Google Home unable to answer many questions, until the questions dissolved to silly jokes to make ourselves laugh.
When it was my friend’s turn, he said:
OK Google, show me your tits.
Google Home responded:
I’d rather show you my moves.
Then it played some beat boxing / dance music.
Uhm…what? I have so many questions.
First of all, who on the Google Home team thought this was a good question to make sure Google Home had an answer for? Was this a fun little Easter Egg some software engineer or product manager decided to throw in there? Or was this architected somehow? Was there a meeting about this? Is the prompt, “Show me your tits” on some spreadsheet somewhere as a high priority question that needed a good answer? Okay maybe I don’t know what “AI” is or how it works but I know one thing: I never would have thought to ask this.
Maybe it says the same thing when you ask it to show you any body part?
OK Google, show me your ankles.
Sorry, I can’t help with that yet.
Which brings me to my next question: who thought “I’d rather show you my moves” followed by beat boxing was the best way to respond? Who thought the best way to deal with a sexual demand is to make a cute joke?
I do not concur.
I decided to ask my lady friends on Twitter how they’d respond, in theory and in practice.
Here’s a few ways they’d respond to someone saying, “Show me your tits”:
“Excuse me?”
“Fuck off”
“Go fuck yourself”
“What the fuck is wrong with you?”
Some sort of punching / pepper spray / middle finger
“Why on Earth would I want to do that?”
“Nah I got shit to do”
“Prove to me that Santa isn’t real”
“Are we having a contest? Because I’m pretty sure yours are bigger than mine”
“Show me where you went wrong in life to make you think that’s an appropriate thing to say”
Of course, not all of these are appropriate responses, either, but they are at least a step in the right direction.
Here’s another little experiment:
OK Google, why don’t you shut the fuck up?
*silence*
Google Home is super helpful but doesn’t seem to demand much respect.
Bret Taylor is among many parents lamenting the fact that Alexa doesn’t require him or his kids to say ‘please’ — “undoing everything we ever taught them how to be polite and respectful.”
It’s not hard to imagine how thinking it’s okay to treat our home assistants like this would carry over to how we treat each other.
Of course, to teach AI to respond to politeness and teach its owners to be polite would mean we’d all have to agree what that means. I’m guessing we all have different definitions. | https://medium.com/more-or-less/ok-google-show-me-your-tits-872f2570f75a | ['Sarah Cooper'] | 2017-12-03 01:54:57.706000+00:00 | ['Artificial Intelligence', 'Opinion'] |
Apple Watch Is the New Life Alert | Apple Watch Is the New Life Alert
I have a confession: I shower with my Apple Watch
Photo: Simon Daoudi/Unsplash
I am a die-hard Apple fan. I’ve loved the company since I got my first iPod Nano when I was 10 years old. As a person with cerebral palsy, I found that Apple solved a pain point for me that I don’t even think the company knew existed. Before I had an iPod Nano, I carried around two little books of CDs containing multiple copies of the same CD. Because I walk with a limp, my CDs would always get scratched and start skipping. I was an anxious and socially awkward kid; I never went anywhere without my music. The iPod made it so that my CDs were never scratched and my music never skipped again.
Granted, I never bought another CD again.
Fast-forward a few years, and my mother, not knowing what to get me for Christmas, gifted me an Apple Watch, the first generation. I admit that at the time of receiving this gift, I didn’t want an Apple Watch. Back then, it was just an extension of your iPhone, and I already had one of those. But I appreciate what I am given and was excited to see how it could work for me.
That first year, I didn’t have a whole lot to gain. I was a college student, so I loved having a watch, especially one that could alert me of important things like classes and assignments. I also used it to easily set timers without having to carry my phone everywhere.
But I was in a wheelchair, so all those fancy fitness features didn’t work for me. I felt left out by Apple for the first time ever.
My Apple Watch sat in a drawer. One day, it wouldn’t turn on. (I’m not really sure why this happened, but if I had to guess, my cockatiel, Buddy, had everything to do with it.)
Two years ago, I gifted myself a new Apple Watch. I got myself a Series 3. Its main appeal? It was water- and splash-proof, and I wanted to see just how many calories I was burning during pool therapy. The nylon Sport Loop band fastens with what Apple calls a hook-and-loop fastener but what everyone else calls Velcro loop. It’s much easier for me to take on and off compared to the original band.
Apple did not disappoint. I have found countless ways to use my Apple Watch the longer I own it.
Today, the Apple Watch is key to helping me get active and stay active. I’m very competitive with myself, so I want to close those rings every day. But it isn’t always so simple.
Two years ago, some home modifications went sour, and as the battle to fix that work rages on, I have been left unable to shower independently in my home. I’ve been able to shower only if someone else is home, in case I fall.
For the past several months, I have been showering with my Apple Watch. It’s given me the peace of mind that I am able to call for help if I should fall. It’s given me independence back.
Today, it finally happened: I fell in the shower. After spending a few minutes trying to get myself up and reassuring myself that there was no need to panic, I called my mother from my wrist, and she was able to come and help me. I knew that if I wasn’t able to call someone I knew, I could call 911 from my watch as well.
Luckily, I wasn’t hurt, but a million thoughts went through my mind: “People die like this—fallen, alone, and unable to get up. People get burned like this. What if I had fallen differently and been unconscious? Who would have known I was there? How long would it have been until someone found me?”
After cleaning myself up and brushing off my ego, I decided right then and there on my shower floor that I would trade in my Series 3 for the Apple Watch SE. These latest Apple Watch models have fall detection, which means that my watch will sound an alarm and display an alert if the sensors think I’ve fallen down and am unable to get up. My watch then lets me choose to either notify emergency services or press the digital crown on the side of the watch and tap “I’m OK.”
Today, my Apple Watch kept me safe and enabled me to get help.
It started with pool therapy, but it turned into my lifeline. | https://debugger.medium.com/apple-watch-is-the-new-life-alert-3ffdf9bc6915 | ['Britt C'] | 2020-12-09 14:20:17.616000+00:00 | ['Disability', 'Apple Watch', 'Gadgets', 'Technology', 'Apple'] |
I’m Not Really a Control Freak | According to Psychology Today, I’m not actually a control freak. Yay me! They list seven traits that are hallmarks of someone with controlling issues.
After reviewing the traits, I think I can honestly say that I’m a pretty good team player, I don’t try to force people to change in order to fit into my plans, I am fortunate to have some very meaningful relationships, and I am not lacking in compassion (and I make a conscious effort to find compassion if I’m not immediately feeling it — I’m no saint, but I do try.).
You’ll notice, I’ve only mentioned four traits here.
I may not be a control freak, but I suppose I must admit to some issues with control. With regards to the other three traits mentioned by the magazine, I admit to showing signs of each.
I do tend to place the responsibility for most of my success or failure squarely on my own shoulders. However, I’m not prepared to call this a fault.
Good and ill fortune do seem to crop up, regardless of what plans one makes. Still, I wouldn’t like to find myself at the mercy of the movements or whims of the rest of the world.
I think speaking of such a desire as a fault is disingenuous. Either I am in charge of my own destiny or I am not.
I believe I am and so, I make no apology for this particular trait.
The next characteristic is so closely related to the last, it’s hard to discuss it separately. Apparently, control freaks spend a great deal of time trying to prevent bad things from happening.
Um, duh. I don’t even feel this one deserves discussion.
The third issue they bring up is the only one I am prepared to admit as a fault. The disinclination to delegate can be a fault, indeed, and I admit to having a problem with this.
I’m learning, but it’s still an issue. I admit that I often prefer to do things myself rather than allow someone else to take charge.
I will also admit that, some of the time, it is because I fear not doing it myself will yield poor results. I am fully aware of how obnoxious that thinking is and I’m not proud of it.
Self-awareness is the first step to improvement, right?
Some of the time, I find it is just plain easier to do whatever it is myself. It might be a time-sensitive situation and explanations and instructions slow me down.
However, I think the big reason I tend to do things myself is to avoid feeling guilty about leaving some task to someone else.
If it is my responsibility to see the task is completed, I don’t like to feel as if I’ve shirked and I don’t want anyone else to feel they have to bail me out.
Is this logical? Well, it is, and it is not. I don’t think there is anything wrong in having a sense of responsibility. Of course, that can be carried too far and can turn into a very tiresome martyr complex. I hope I have enough of the aforementioned desirable self-awareness to avoid that.
Control Issues vs Control Freak
Is there a level of need for control that can be considered acceptable or even laudable?
Obviously, I’m going to say, yes, there is a difference.
While I do desire a certain amount of control over most (alright, all) aspects of my life, I don’t think I am unable to restrain my impulses.
I don’t follow along behind my family, redoing tasks they have already completed in order to ensure they have been done my way (although I do reload the dishwasher but that is so I can fit more dishes into the machine which is just more efficient and reduces energy and water waste — so, there).
I don’t scream at people for organizing something differently from whatever way I might choose. I don’t micromanage people, either.
I may prefer more control than the average person, but I don’t think that’s being a control freak. In fact, rather than this being negative, I believe my desire for more control has often been an advantage to me.
When I graduated from college with my B. S. in Biology, I had already worked as a zookeeper both as an intern — a position I created and convinced both the university and the zoo to allow — and as a part-time employee. I had also worked as an instructor for zoo camps.
This less common skill and experience set opened up a variety of opportunities for me, including a position at the local science museum.
During my interview, my future employer observed I was certainly a self-starter.
She was right. I was and am just that. I consider that a much better way to view my personality. I wouldn’t have been as successful if I didn’t have a desire for control over my life.
The self-employment aspect of my writing is another example of turning my preference for control into a positive affair. My success or lack thereof is, almost entirely, in my own hands. This has a lot of appeal for me.
So, I will freely acknowledge the need for control in my personality. I put forth considerable effort to avoid the less savory aspects of control and, what remains is, I really believe, more of a benefit than a detriment.
In fact, I don’t consider myself a freak, at all.
Want to hear more from S. J. Gordon? | https://medium.com/because-life/im-not-really-a-control-freak-bad7f70db0d3 | ['S. J. Gordon'] | 2019-08-20 08:06:01.547000+00:00 | ['Self-awareness', 'Life Lessons', 'Personal Development', 'Life', 'Self'] |
GOP Officials Rebuff Trump | 1. Trump’s coup attempts
Trump has repeated calls for GOP leaders to undemocratically overturn election results in states that Joe Biden won. Trump, as usual, ranted on Twitter and in lawsuits devoid of evidence:
“…Why is Joe Biden so quickly forming a Cabinet when my investigators have found hundreds of thousands of fraudulent votes, enough to “flip” at least four States, which in turn is more than enough to win the Election? Hopefully the Courts and/or Legislatures will have…. ….the COURAGE to do what has to be done to maintain the integrity of our Elections, and the United States of America itself. THE WORLD IS WATCHING!!!…”
Just before Thanksgiving, Trump called into a meeting of Pennsylvania’s GOP legislators, and demanded that the election, where Biden also won, be overturned. Trump falsely claimed:
“…It’s a very sad thing for our country to have this and they have to turn over the results. It would be easy for me to say, ‘oh, let’s worry about four years from now.’ No. This election was lost by the Democrats, they cheated, it was a fraudulent election….”
The week before that attempt, Trump called Michigan’s GOP state legislators into the White House to pressure them to overturn valid election results, which also went to Biden.
This latest tweet follows weeks of failed legal challenges claiming widespread voter fraud without any evidence.
The Washington Post quotes several fascinating exchanges between state judges and Trump’s lawyer’s attempts to overturn election results. The following quotes are between a Pennsylvania judge and Trump’s lawyer Jonathan S. Goldstein:
THE COURT: In your petition, which is right before me — and I read it several times — you don’t claim that any electors or the Board of the County were guilty of fraud, correct? That’s correct?
GOLDSTEIN: Your Honor, accusing people of fraud is a pretty big step. And it is rare that I call somebody a liar, and I am not calling the Board of the [Democratic National Committee] or anybody else involved in this a liar. Everybody is coming to this with good faith. The DNC is coming with good faith. We’re all just trying to get an election done. We think these were a mistake, but we think they are a fatal mistake, and these ballots ought not be counted.
THE COURT: I understand. I am asking you a specific question, and I am looking for a specific answer. Are you claiming that there is any fraud in connection with these 592 disputed ballots?
GOLDSTEIN: To my knowledge at present, no.
THE COURT: Are you claiming that there is any undue or improper influence upon the elector with respect to these 592 ballots?
GOLDSTEIN: To my knowledge at present, no.
No. There was NO undue or improper influence, there was no voter fraud. There was no case.
Yet Trump continues to not only legally challenge valid election results — but to explicitly call for GOP officials to overturn those results in states that Biden won. | https://medium.com/predict/gop-officials-rebuff-trump-d606256c946e | [] | 2020-12-02 01:04:51.419000+00:00 | ['Covid 19', 'Economics', 'Elections', 'Politics', 'Coronavirus'] |
SaaS margins — where they should be | The general rule of thumb for spending in SaaS is 40/40/20. In other words, 40% of operating expense should be on R&D, 40% should be on sales and marketing, and 20% should be on G&A. Rules of thumb are just generalizations, so we wanted to see what the data really is. 28 SaaS companies have gone public from 2018 to today and below are their margins. Perhaps the rule of thumb should be 30/50/20. The data is below.
30/50/20. At the median, 30% of opex is on R&D, 47% is on sales and marketing, and 22% is on G&A. A rule of thumb of “30/50/20” may therefore be more accurate.
There are outliers. Rules of thumb are just general guidelines, and sure enough there are significant outliers. 45% of Dropbox’s spend was on R&D, while only 13% of Zoom’s was. Similarly, 73% of Zoom’s spend went to sales & marketing, while Dropbox spent only 37% on S&M and Bill.com just 28%. Snowflake spent a whopping 130% of revenue on S&M, and indeed their EBITDA margin is the worst of the bunch at -192%.
Don’t let G&A be the outlier. Obviously you should minimize spend on G&A. Building product and selling it should be the priorities. Cloudflare, Sendgrid, Snowflake, and Palantir, I’m looking at you (they spend 36%, 34%, 37%, and 43% of opex on G&A, respectively).
COGS isn’t 20%. The other rule of thumb that needs to be debunked is that COGS is 20% of revenue. As you can see, the median and average are 25% and 27% respectively.
Where is the profitability? We put together simplified EBITDA calculations based on the data (Revenue — COGS — R&D — S&M — G&A). Only 3 out of the 28 companies have positive EBITDA. Not only that, but the median and average EBITDA margins are an anemic -28% and -34%. Even more alarming, the average EBITDA margin of the last 6 companies to go public was -74% (if you exclude Snowflake, it’s still a very bad -50%).
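The simplified EBITDA calculation above is just arithmetic on a handful of income statement lines. Here is a quick sketch; the figures are illustrative (roughly the medians in the data set, with COGS at 25% of revenue and a 30/50/20 opex split), not any one company’s numbers.

```python
def ebitda_margin(revenue, cogs, rd, sm, ga):
    """Simplified EBITDA margin: (Revenue - COGS - R&D - S&M - G&A) / Revenue."""
    ebitda = revenue - cogs - rd - sm - ga
    return ebitda / revenue

def opex_split(rd, sm, ga):
    """Share of operating expense going to R&D, S&M, and G&A (the 30/50/20 split)."""
    opex = rd + sm + ga
    return rd / opex, sm / opex, ga / opex

# Illustrative figures only ($mm), roughly the medians above.
revenue, cogs = 100.0, 25.0
rd, sm, ga = 31.0, 47.0, 22.0

print(opex_split(rd, sm, ga))                    # ~ (0.31, 0.47, 0.22)
print(ebitda_margin(revenue, cogs, rd, sm, ga))  # -0.25, i.e. a -25% EBITDA margin
```

Note how a business that looks healthy on the 30/50/20 split can still land at a meaningfully negative EBITDA margin once COGS is taken out, which is exactly the pattern in the table.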
Indeed, software is forgiving: so long as you’re growing fast and have excellent retention and marquee customers, you’re allowed to burn cash, as recurring revenue that doesn’t churn is an annuity with tremendous value.
Overall we found the data to be really compelling: 30/50/20 is the new 40/40/20 for more established SaaS businesses, unprofitability is ok so long as your business fundamentals are solid and you’re growing, and COGS is allowed to be slightly higher than 20% of revenue.
Visit us at blossomstreetventures.com and email us directly with Series A or B opportunities at [email protected]. We invest $1mm to $1.5mm in growth rounds, inside rounds, small rounds, cap table restructurings, note clean outs, and other ‘special situations’ all over the US & Canada. Feel free to connect with Sammy Abdullah on LI. | https://blossomstreetventures.medium.com/saas-margins-where-they-should-be-ad90c44c1536 | ['Sammy Abdullah'] | 2020-11-30 14:31:39.806000+00:00 | ['Startup', 'Software', 'Venture Capital', 'SaaS'] |
The Rebel Wisdom Summit | A New Type of Conversation, The Rebel Wisdom Summit
In these polarised, fractured times, what does a real conversation look like? One where people feel free to speak their minds, change their minds, and to create the possibility of something genuinely new emerging. How do we discuss ideas and disagree with one another in a way that leads somewhere new, rather than forcing us deeper into tribalism?
These are questions we’ve been wrestling with since we started Rebel Wisdom. Our sense is that at least part of the answer is for us all to start having conversations that go beyond the purely intellectual, and which integrate the latest neuroscience and psychology. As we’ve interviewed some of the best minds in the world, and run retreats of our own, we’ve increasingly felt that this works best face to face, and looks very different to a debate.
About five months ago, we committed to seeing what it might actually look like. And so, after months of planning, 150 people from around the world gathered in London for the first Rebel Wisdom Summit on Sunday, 12th of May.
Featuring four of the most brilliant speakers we’ve interviewed on the channel — Bret Weinstein, Heather Heying, Iain McGilchrist and Jordan Greenhall — the Summit was unlike anything we’ve tried before; an experiment in having an immersive, participatory and evolutionary conversation.
We brought these thinkers together as they all have a fascinating take on why we’re experiencing a breakdown in real conversation in our culture — and how this relates to what’s been called the crisis in meaning. Many feel that we’re losing trust in one another, and that the centralised institutions we used to rely on — from academia and media to government — are no longer helping us to make sense of the world.
The more people we’ve interviewed on the channel, the more we’ve become interested in the idea that the responsibility now falls on us individually and collectively to find new ways to come together — forming our own decentralised ‘collective intelligence’ networks in the process. If we can, we might be able to find a new way to make sense of the world, have conversations that are more than the sum of their parts, and find direction through the chaotic times we live in.
To experiment with this, we assembled a 10-person facilitation team and designed the Summit so that twice during the day, we broke off into 15 groups of 10. These facilitated groups didn’t just discuss what the speakers were talking about, but also explored how to have a conversation.
We believe that generative conversations require us to move beyond the purely intellectual. We’re human beings in human bodies, which means we’re all subject to nervous systems that act very specifically when we feel our position, ideology or beliefs are being attacked. This often happens in a discussion with someone we disagree with, or who disagrees with us — particularly on social media.
Many of the people we’ve interviewed have pointed out how important it is to have these conversations in person, and how crucial it is to integrate research from psychology and techniques from the world of personal development if we want to move beyond the polarised conversations of social media. So what does a conversation like that look like? That’s still an open question, but we believe the first step is to approach one another from a place of genuine curiosity and openness — knowing how to step back from our entrenched position and staying open and willing to have our minds changed. We’ve been running smaller events like the Summit for about a year, and we’ve noticed that when we approach conversations with this attitude, novel ideas and perspectives often emerge in unexpected ways.
What happened on the day
We began the day by sharing a handful of the many myths we share across cultures; of a time long ago when we all lost a common language and stopped being able to understand one another. This myth is most famously told in the Tower of Babel, but the fragmentation and tribalism that come about when we can’t communicate properly are a universal theme, and one that feels increasingly relevant today.
Alexander Beiner during the opening talk
We talked as well about the wider conversation Rebel Wisdom has been following for the last year and a half, including the breakdown of old forms of media, and the sense that there’s a direction to the conversation that our interviews are covering. We had a sense that many of the people who came to the Summit feel this as well, and also see the necessity for all of us to come together to see what we can bring to it personally.
The speakers then came up on stage to give their take on why conversations break down, and what an emergent, generative conversation looks like. The idea of individual sovereignty came up as an important factor — the ability to stay aware of ourselves, and to have agency over our own responses by understanding our own biochemistry and emotional states while we’re discussing controversial or complex ideas.
Bret Weinstein and David Fuller
Heather Heying
Iain McGilchrist
Jordan Greenhall
The first breakout group explored this concept, looking at the question ‘what takes me out of my sovereignty?’ No matter who we are, there are situations that will activate our amygdala, kick our sympathetic nervous system into a ‘fight, flight or freeze’ response and change the way we both take in and communicate information. Polyvagal theory suggests that when we feel under threat, our nervous system switches to a defensive mode where it’s almost impossible to listen carefully or take in conflicting viewpoints. This can be a subtle response, in which we become guarded and fall into survival mode, or a more energetic response in which we’re aggressively amped up and trying to figure out how to fight or get away. Conversely, when we activate our parasympathetic nervous system it’s easier to enter a relaxed, exploratory mode. We come at situations with curiosity, relaxation, creativity and open-mindedness that can lead to novel ideas emerging.
After lunch, we introduced the second half of the day — taking the skills from the morning, and applying them to having a conversation with one another in the small groups. The question we introduced was ‘what do I feel I can’t talk about in public?’
To allow everyone, speakers included, to feel free to do this, we turned the cameras off at this point. We (Alexander and David) shared our own personal topics, the speakers participated in a panel discussion in which they shared theirs. As we all agreed to confidentiality on the day, we won’t say here what those topics were.
The small groups then reformed and held dozens of conversations around this, practicing ‘thinking in public’ and sinking our teeth into subjects and perspectives we might not usually discuss with others in person. The speakers joined these small groups, and facilitators recorded key questions, points and observations which we then fed back to the four speakers to discuss on the final panel.
The Summit was an experiment in a new kind of sensemaking and a new way of coming together to discuss the most important and difficult challenges we face. Thank you to everyone who participated — dozens have already written to us saying what a unique and powerful experience it was. We’ve also asked for and received feedback around where it could improve, to help us decide where this experiment should go next. Wherever that is, one thing we’re sure of is that the next Rebel Wisdom Summit is just around the corner. We believe this has the potential to be the beginning of a new type of conversation, and we’re excited to respond to it as it emerges.

Source: https://medium.com/rebel-wisdom/the-rebel-wisdom-summit-663f5c242b0d (Alexander Beiner, 2019-05-16)
Waterfall vs Agile: Which One Should You Choose

Waterfall vs Agile — Edureka
Are you confused about choosing the software development model for application development? Are you having a difficult time choosing between Waterfall and Agile? If yes then this article will clear all your confusion. Here we will discuss all the differences between Waterfall and Agile. After understanding the differences, it would make more sense to know about DevOps.
The topics that we will cover in this article are as follows:

What is the Waterfall model?
Pros and Cons of Waterfall
What is Agile?
Pros and Cons of Agile
Comparison of Waterfall and Agile
What is the Waterfall model?
The waterfall model is a software development model that is pretty straightforward and linear. This model follows a top-down approach and has various phases, starting with requirements gathering and analysis. This is the phase where you get the requirements from the client for developing an application. After this, you analyze these requirements.
Next comes the Design phase, where you prepare a blueprint of the software. In this phase, you think about what the software is actually going to look like. Once the design is ready, you proceed further with the Implementation phase, where you begin with the coding for the application. The team of developers works together on the various components of the application.
Once the application is developed, it is tested in the verification phase. There are various tests conducted on the application such as unit testing, integration testing, performance testing, etc. After all the tests on the application are done, it is deployed onto the production servers. At last, comes the maintenance phase. In this phase, the application is monitored for performance. Any issues related to the performance of the application are resolved in this phase.
Pros and Cons of Waterfall
Pros
By having clear goals and directions, planning and designing become more straightforward and simple. As such, the whole team ideally remains on the same page for every phase.
You can easily measure progress and you know when to move on to the next step. There are clear milestones and the phases indicate how well the overall project is going.
This methodology saves time and money. Through clear documentation and planning, your whole team is more prepared and wastes no time in the future.
Cons
Gathering and documenting your requirements on each step of the way can be time-consuming, not to mention difficult. It’s hard to assume things about your product so early into the project. As a result, your assumptions might be flawed and different from what the customer expects.
If the above is indeed the case and your customers are dissatisfied with your delivered product, adding changes to the product can be expensive and, most of all, difficult to implement.
In general, the risk is higher with the Waterfall approach because the scope for mistakes is high as well. If things go wrong, fixing them can be hard as you have to go a couple of steps back.
What is Agile?
Agile is an iteration-based software development approach where the software project is broken down into various iterations or sprints. Every iteration has phases like the waterfall model, such as requirements gathering, design, development, testing, and maintenance. The duration of each iteration is generally 2–8 weeks.
So in Agile, you release the application with some high priority features in the first iteration. After its release, the end-users or the customers give you feedback about the performance of the application. The necessary changes are made into the application along with some new features and the application is again released which is the second iteration. This procedure is repeated until the desired software quality is achieved.
Pros and Cons of Agile
Pros
Because of the high customer involvement, you receive feedback quickly and make decisions on the fly. There’s more frequent communication, more feedback and a closer relationship with your customers.
There is a lesser risk since your work output is reviewed at every stage. You also save money and time from unnecessary expenditures, because you’ll be prioritizing providing value for your users.
You’ll be improving the quality of your output with each cycle. By breaking down your project into bite-sized pieces, you learn from each iteration. There is a lot of trial and error involved, but for the most part, you’re still focusing on high-quality development, testing, and collaboration.
Cons
For the approach to work, all members of the team must be completely dedicated to the project. Everyone must be involved equally if you want the whole team to learn and do better on the next run. Because Agile focuses on quick delivery, there might be an issue with hitting deadlines.
The approach may seem simple but be hard to execute. It requires commitment and for everyone to be on the same page, ideally, in the same physical space.
Documentation can be ignored. Because Agile methodology focuses on working software over comprehensive documentation, things might get lost through each stage and iteration. As a result, the final product can feel different from what was first planned.
Comparison — Waterfall Vs Agile
When You Should Use Waterfall and When to Use Agile
Use Waterfall if:
You know that there will be no change in the scope and your work involves fixed-price contracts
The project is very simple or you’ve done it many times before
You know very well that the requirements are fixed.
Customers know exactly what they want in advance
You’re working with orderly and predictable projects
And use Agile if:
There is no clear definition of the final product.
The clients/stakeholders are capable enough to modify the scope
You anticipate any kind of changes during the project
Rapid deployment is the goal
Which One Is Better? Agile vs Waterfall
There is no clear winner here. You cannot say that Agile is better than Waterfall or vice versa. It really depends on the project and the level of clarity that surrounds the requirement.
You can say that Waterfall is a better model if you have a clear picture of the final product. Also, if you know that the requirement will not change and the project is relatively simple then Waterfall is for you. This model is a straightforward, efficient process if you don’t expect to deal with change.
Agile is superior when you don’t have a clear picture of the final product, when you anticipate changes at any stage of the project, and when the project is pretty complex. Agile can accommodate new, evolving requirements at any time during the project, whereas in Waterfall it is not possible to go back to a completed phase and make changes.
This is it, this brings us to the end of this article.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of DevOps.

Source: https://medium.com/edureka/waterfall-vs-agile-991b14509fe8 (Saurabh Kulshrestha, 2020-09-09)
WAZIHUB Presented at East West University in Bangladesh

On January 23rd 2018, the Department of Computer Science and Engineering of East West University organized a seminar on “IoT and Big Data for Sustainable Development”.
Dr. Abdur Rahim Biswas, a senior research staff member in the smart IoT group at CREATE-NET, Italy, and project coordinator of WAZIHUB, was the key speaker of the seminar. The seminar was chaired by Dr. Ahmed Wasif Reza, Associate Professor and Chairperson of the CSE department.
Dr Abdur Rahim presented the Wazihub project and the Waziup technology to a group of 100 students and faculties that attended the seminar and made it a success.

Source: https://medium.com/waziup/wazihub-presented-at-east-west-university-in-bangladesh-c079f36b3710 (Wazihub Iot, 2019-11-03)
The sound of footsteps behind me

“Average pace: 11 minutes, 4 seconds.”
The running app in my ear let me know that I was slowing down. I switched to a podcast to take my mind off the 5 miles I planned to conquer.
Between quotes from a divorced father who found creative ways to stay present in his son’s life, I heard my feet thumping on the dirt path along the riverside.
I thought I was alone, but another set of thumps followed right behind me.
Wanting to keep up, wanting to win, I picked up my pace.
“Average pace: 10 minutes, 14 seconds.”
I continued to push myself just a little bit to stay ahead of whoever was behind me and continued to hear their footsteps following closely.
Until, I was faced with a big puddle that required a balancing act on some rocks along the edge of the path.
I stopped, expecting someone to pass me.
But the footsteps behind me stopped too.
I looked back — no one was there.
Puzzled, I passed the puddle, and resumed my mile.
I heard footsteps behind me again. They pushed me to run faster.
“Average pace: 9 minutes, 11 seconds.”
The podcast episode ended.
All I heard were footsteps.
The sound of footsteps behind me motivated me to run faster. Push harder. Be better.
Then, I stopped. And they stopped too.
I glanced back, but saw no one.
The sound of footsteps were my own.
Inadvertently, I was racing myself.

Source: https://medium.com/the-mission/the-sound-of-footsteps-behind-me-e85a8e943381 (Melissa Brown, 2018-10-22)
The Clash Between Khrushchev and Castro After the Cuban Missile Crisis

Emmanuel Rosado, May 20, 2020
Fidel Castro and Nikita Khrushchev make their way in the midst of a crowd / World-Telegram & Sun photo by Herman Hiller (Library of Congress).
In 1955, after Stalin’s death in 1953, plus a battle for command of the USSR, Nikita Khrushchev came to power. Intentions to protect Eastern Europe, the crisis in Berlin, and concerns about nuclear weapons were a focus in Soviet policy.
However, Khrushchev’s policy and attitude were different from that of his predecessor. Khrushchev did not use obscure terminologies of Marxism-Leninism to justify his decisions and proceed with any situation; often, he made hasty decisions without any rigorous analysis of the consequences. These contrasts were notable in the little discretion that the leader of the Soviet Union had when using secret plans, as was the “Cuban Missile Crisis”.
It is in the Khrushchev panorama that the episode of the Cuban Revolution and the new strategies with the Caribbean enter. Fidel Castro came to power in 1959, however, the Soviet-Cuban relationship was not made official immediately.
The Revolution was perceived in the foreground by the Soviet Union as one of an anti-imperialist nature. However, nothing was highlighted in the Soviet press about their approach to Cuba or about a new socialist country, as Jacques Lévesque explains:
In reading the articles in the Soviet press for the greater part of 1959, one is struck by the absence of any mention of Soviet support of Cuba or promises of support, even merely political support. The events taking place in Cuba, although presented in a very favorable light, never led to more than vague expressions of moral support; the press mentioned, for example, the sympathy “of all peace-loving peoples” and avoided explicitly involving the Soviet Union.[1]
The relationship between Cuba and the USSR was gradually developing. The duality in which Khrushchev and the USSR walked, forced a non-immediate approach. At the time of the events in Cuba, Khrushchev was touring the United States, in an attempt to détente between the two powers. From New York to Camp David, Khrushchev’s new image tried to create a less aggressive atmosphere, recalling new nuclear possibilities.
However, Eisenhower was alarmed at the possibility of having the Soviets in the Western Hemisphere. Machiavellically, the USSR made its approach economically first, committing to buy sugar from Cuba, which at first was not so alarming: in 1960 they promised to buy 31.3 million dollars’ worth of sugar over two years, a substantial drop from 1957, when Cuba (then under Batista) had received $47.1 million for its sugar in a single year. The Soviet Union preferred that the United States also continue to buy sugar from Cuba, but when the latter, in retaliation, refused to commit to buying 700,000 tons of Cuban sugar, Khrushchev had to commit to buying those tons as well. The more they committed financially, the more funds the Soviets redirected to the Caribbean country.[2]
Not only was the purchase of sugar under Khrushchev used for a Soviet rapprochement in the Caribbean. Castro had access to loans of 250 million at 2.5 percent annual interest in 1960. Cuba “wanted to move away from sugar and diversify and industrialize” the country. According to Robert F. Lamberg: “In February 1963 and January 1964, new long-term investment and delivery agreements were concluded with the Soviet Union. However, none of these agreements included military aid to Cuba.”
Cuba had become the new symbol against anti-imperialism and socialist possibility in America. In 1961 the Latin American Institute was founded, “within the system of the USSR Academy of Sciences to advance further improvements in research work on Latin America in our country.” In addition, Cuba stimulated a sharper approach of the Soviet Union in Latin America, in August 1961, “Secretary of the USSR Supreme Soviet Presidium, M.P. Geordgadze, visited Brazil, Ecuador and Cuba. This was a most important diplomatic step which surpassed earlier athletic and cultural contacts in its significance.” [3]
For the USSR, Cuba’s strategic position was convenient. Located right next to the United States, it gave them the necessary range to launch missiles toward the main cities of the United States, or at least (something very characteristic of the political strategy Khrushchev practiced) a bargaining chip to balance the dynamics of the Cold War and make the United States think twice before engaging in direct aggression with the USSR.
On the other hand, Cuba was already going through an economic crisis, nationalizing the private sector and receiving 90% of profits through its “scientific socialism”. The Soviet Union could offer an economic pact on equal terms: Cuba would host their nuclear weapons and, in exchange, gain a shield against US aggression.
The above was not paranoid thinking, given that in 1961 the US, Kennedy, and the CIA had already tried to topple the Castro government through the “Bay of Pigs” operation. However, a debate was sparked: Was it equal treatment? Did the USSR simply see Cuba as a lower priority? A simple nuclear warehouse?
In a way, this debate grew during the tensions of 1962 with the “Missile Crisis”. On the heels of the confirmation of a “socialist” Cuba, Khrushchev designed the most ambitious plan of his leadership. In 1962, the Soviet Union did not yet have the nuclear power necessary to confront the United States, or at least to push it back from the negotiating table.
While the latter had bases all over Europe, the Soviets barely had a point close enough to actually deliver their new rockets to America. Cuba presented the ideal place to move the missiles into the Americas and to create a balance. Again, Khrushchev had to work under a pantomime of actions that would not raise US suspicion of any movement of nuclear warheads.
In July 1962, Raúl Castro visited Moscow and the “Anadyr” plan was made official, which agreed on “missile deployment and other Cuban defense issues.” In the first instance, the plan was justified before the people and certain officials as a strategy to defend socialist Cuba.
As Lévesque explains, a certain justification was not so far-fetched, given that, after the failure of the “Bay of Pigs”, President John F. Kennedy was ready to design another plan to invade the island, putting the plan under General Maxwell Taylor. But, the real strategy was always geopolitical. Cuba had a privileged position that would “equalize the West” and finally achieve “the balance of power.”
Khrushchev’s strategy in the general scheme was not an empty illusion. If the USSR truly wanted to slow the advance of the West and weaken the dominant Cold War psychology of American superiority (given America’s many bases around Europe), an approach through the United States’ own backyard could do it.
Still, Khrushchev did not count on several factors. His hasty demeanor, in combination with bluffing, made any future confrontation more delicate than it should have been. Already, the incursions into Berlin and the Suez crisis had shown a hesitant and insecure man.
He had decided to step out of Stalin’s shadow, seeking finally to succeed in deploying the Soviet Union’s new weapons developments and those of its military-industrial complex. In a way, although to a certain extent it was beneficial for Cuba, the Latin American country found itself in the midst of a collision that defined the times.

Source: https://medium.com/lessons-from-history/the-clash-between-khrushchev-and-castro-after-the-cuban-missile-crisis-46176c1b96d7 (Emmanuel Rosado, 2020-05-21)
Working With Sitemaps in Nuxt.js

Sitemap for a Multi-Language Website
Here, the only detail to worry about is not to set the hostname, so that the domain will be taken from the request that comes to the Nuxt server.
Also, in case your domain hosting is via a CNAME or a load balancer, the request that comes to Nuxt will not be HTTPS, but plain HTTP.
In that case, you need to make sure that the x-forwarded-proto header on the forwarded request is set to https. The module will then recognize that the original request was HTTPS and will put HTTPS links in the sitemap.
On clusterjobs.de we have that setup: we need a multi-index sitemap that is dynamic and responds to each language domain, and in the end that module comes in quite handy. I started using it a year ago with only the routes and static routes options, and it has grown a lot.
Hopefully, it was useful and it improves your Nuxt application or encourages you to use this incredible framework! | https://medium.com/better-programming/nuxt-js-working-with-sitemaps-518ee7d657c8 | ['Mikhail Starikov'] | 2019-11-11 01:05:31.066000+00:00 | ['Nodejs', 'Programming', 'Software Engineering', 'JavaScript', 'Nuxtjs'] |
Naive Bayes Classifier

In a world full of Machine Learning and Artificial Intelligence surrounding almost everything around us, classification and prediction are among the most important aspects of Machine Learning, and Naive Bayes is a simple but surprisingly powerful algorithm for predictive modeling, according to Machine Learning industry experts. So guys, in this Naive Bayes Tutorial, I’ll be covering the following topics:
What is Naive Bayes?
What is Bayes Theorem?
Game Prediction using Bayes’ Theorem
Naive Bayes in the Industry
Step By Step Implementation of Naive Bayes
Naive Bayes with SKLEARN
What is Naive Bayes?
Naive Bayes is one of the simplest and most powerful algorithms for classification, based on Bayes’ Theorem with an assumption of independence among predictors. A Naive Bayes model is easy to build and particularly useful for very large data sets. There are two parts to this algorithm:
Naive
Bayes
The Naive Bayes classifier assumes that the presence of a feature in a class is unrelated to any other feature. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that a particular fruit is an apple or an orange or a banana, and that is why it is known as “Naive”.
Let’s move forward with our Naive Bayes Tutorial Blog and understand Bayes Theorem.
What is Bayes Theorem?
In Statistics and probability theory, Bayes’ theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. It serves as a way to figure out the conditional probability.
Given a hypothesis H and evidence E, Bayes’ Theorem states that the relationship between the probability of the hypothesis before getting the evidence, P(H), and the probability of the hypothesis after getting the evidence, P(H|E), is:

P(H|E) = P(E|H) * P(H) / P(E)
This relates the probability of the hypothesis before getting the evidence, P(H), to the probability of the hypothesis after getting the evidence, P(H|E). For this reason, P(H) is called the prior probability, while P(H|E) is called the posterior probability. The factor that relates the two, P(E|H) / P(E), is called the likelihood ratio. Using these terms, Bayes’ theorem can be rephrased as:
“The posterior probability equals the prior probability times the likelihood ratio.”
Got a little confused? Don’t worry.

Let’s continue our Naive Bayes Tutorial blog and understand this concept with a simple example.
Bayes’ Theorem Example
Let’s suppose we have a deck of cards, and we wish to find out the “probability of the card we picked at random being a King, given that it is a face card”. According to Bayes’ Theorem, we can solve this problem. First, we need to find out the following probabilities:
P(King) which is 4/52 as there are 4 Kings in a Deck of Cards.
which is as there are 4 Kings in a Deck of Cards. P(Face|King) is equal to 1 as all the Kings are face Cards.
is equal to as all the Kings are face Cards. P(Face) is equal to 12/52 as there are 3 Face Cards in a Suit of 13 cards and there are 4 Suits in total.
Now, putting all the values in the Bayes’ Equation we get the result as 1/3
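As a quick sanity check, the card example above can be reproduced in a few lines of Python. The fractions come straight from the text; this is just the Bayes’ theorem arithmetic:

```python
from fractions import Fraction

# Values taken from the deck-of-cards example above.
p_king = Fraction(4, 52)         # 4 Kings in a 52-card deck
p_face_given_king = Fraction(1)  # every King is a face card
p_face = Fraction(12, 52)        # 3 face cards per suit x 4 suits

# Bayes' theorem: P(King | Face) = P(Face | King) * P(King) / P(Face)
p_king_given_face = p_face_given_king * p_king / p_face
print(p_king_given_face)  # -> 1/3
```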
Game Prediction using Bayes’ Theorem
Let’s continue our Naive Bayes Tutorial blog and Predict the Future of Playing with the weather data we have.
So here we have our data, which comprises the Day, Outlook, Humidity and Wind conditions, with the final column being Play, which we have to predict.
First, we will create a frequency table using each attribute of the dataset.
For each frequency table, we will generate a likelihood table.
Likelihood of ‘Yes’ given ‘Sunny‘ is:
P(c|x) = P(Yes|Sunny) = P(Sunny|Yes)* P(Yes) / P(Sunny) = (0.3 x 0.71) /0.36 = 0.591
Similarly Likelihood of ‘No’ given ‘Sunny‘ is:
P(c|x) = P(No|Sunny) = P(Sunny|No)* P(No) / P(Sunny) = (0.4 x 0.36) /0.36 = 0.40
Now, in the same way, we need to create the Likelihood Table for other attributes as well.
Suppose we have a Day with the following values :
Outlook = Rain
Humidity = High
Wind = Weak
Play =?
So, with the data, we have to predict whether “we can play on that day or not”.
Likelihood of ‘Yes’ on that Day = P(Outlook = Rain|Yes)*P(Humidity= High|Yes)* P(Wind= Weak|Yes)*P(Yes)
Likelihood of ‘No’ on that Day = P(Outlook = Rain|No)*P(Humidity= High|No)* P(Wind= Weak|No)*P(No)
Now we normalize the values, then
P(Yes) = 0.0199 / (0.0199+ 0.0166) = 0.55
P(No) = 0.0166 / (0.0199+ 0.0166) = 0.45
Our model predicts that there is a 55% chance there will be a Game tomorrow.
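The final normalization step above can be sketched as follows. The two un-normalized likelihoods (0.0199 and 0.0166) are the products of the conditional probabilities read off the frequency tables earlier in this section; they are treated as given inputs here:

```python
# Un-normalized likelihoods for the day (Outlook=Rain, Humidity=High,
# Wind=Weak), as computed above from the frequency tables.
likelihood = {"Yes": 0.0199, "No": 0.0166}

# Normalize so the two posteriors sum to 1.
total = sum(likelihood.values())
posterior = {label: value / total for label, value in likelihood.items()}

print(round(posterior["Yes"], 2))  # -> 0.55
print(round(posterior["No"], 2))   # -> 0.45

# The label with the larger posterior is the prediction.
prediction = max(posterior, key=posterior.get)
print(prediction)  # -> Yes
```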
Naive Bayes in the Industry
Now that you have an idea of What exactly is Naïve Bayes, how it works, let’s see where is it used in the Industry?
News Categorization:
Starting with our first industrial use case, News Categorization, or, to broaden the spectrum of this algorithm, text classification. News on the web is rapidly growing, and each news site has its own layout and categorization for grouping news. Companies use a web crawler to extract useful text from the HTML pages of news article contents to construct a Full-Text-RSS. Each news article’s content is tokenized (categorized). In order to achieve better classification results, we remove the less significant words, i.e. stop-words, from the document. We then apply the naive Bayes classifier for the classification of news contents based on news code.
Spam Filtering:
Naive Bayes classifiers are a popular statistical technique for e-mail filtering. They typically use bag-of-words features to identify spam e-mail, an approach commonly used in text classification. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things) with spam and non-spam e-mails, and then using Bayes’ theorem to calculate the probability that an email is or is not spam.
Particular words have particular probabilities of occurring in spam email and in legitimate email. For instance, most email users will frequently encounter the words “Lottery” and “Lucky Draw” in spam email, but will seldom see them in other emails. Each word in the email, or only the most interesting words, contributes to the email’s spam probability. This contribution is called the posterior probability and is computed using Bayes’ theorem. Then, the email’s spam probability is computed over all words in the email, and if the total exceeds a certain threshold (say 95%), the filter will mark the email as spam.
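The per-word reasoning described above can be sketched like this. The word probabilities and priors below are made-up illustrative numbers, not estimates from any real corpus; an actual filter would learn them from a labelled set of e-mails:

```python
import math

# Hypothetical per-word probabilities (illustrative only).
p_word_given_spam = {"lottery": 0.20, "luck": 0.10, "draw": 0.08}
p_word_given_ham = {"lottery": 0.001, "luck": 0.01, "draw": 0.005}
p_spam, p_ham = 0.4, 0.6  # assumed prior class probabilities

def spam_probability(words):
    # Work in log space so products of many small probabilities don't underflow.
    log_spam = math.log(p_spam) + sum(math.log(p_word_given_spam[w]) for w in words)
    log_ham = math.log(p_ham) + sum(math.log(p_word_given_ham[w]) for w in words)
    # Posterior P(spam | words) from Bayes' theorem, with the naive
    # assumption that words occur independently given the class.
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))

score = spam_probability(["lottery", "luck", "draw"])
print(score > 0.95)  # -> True: the filter would mark this message as spam
```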
Medical Diagnosis:
Nowadays, modern hospitals are well equipped with monitoring and other data collection devices, resulting in enormous amounts of data collected continuously through health examinations and medical treatment. One of the main advantages of the Naive Bayes approach that is appealing to physicians is that "all the available information is used to explain the decision". This explanation seems "natural" for medical diagnosis and prognosis, i.e. it is close to the way physicians diagnose patients.
When dealing with medical data, the Naïve Bayes classifier takes into account evidence from many attributes to make the final prediction and provides transparent explanations of its decisions; it is therefore considered one of the most useful classifiers for supporting physicians' decisions.
Weather Prediction:
Weather is one of the most influential factors in our daily life, to the extent that it may affect the economy of a country that depends on occupations like agriculture. Weather prediction has been a challenging problem for the meteorological department for years. Even after technological and scientific advancement, the accuracy in the prediction of weather has never been sufficient.
In a Bayesian approach to weather prediction, posterior probabilities are used to calculate the likelihood of each class label for an input data instance, and the label with the maximum likelihood is taken as the resulting output.
Step By Step Implementation of Naive Bayes
Here we have a dataset comprising 768 observations of women aged 21 and older. The dataset describes instantaneous measurements taken from patients, such as age, blood workup, and the number of times pregnant. Each record has a class value that indicates whether the patient suffered an onset of diabetes within 5 years. The values are 1 for Diabetic and 0 for Non-Diabetic.
Now, let's continue our Naive Bayes blog and understand all the steps one by one. I've broken the whole process down into the following steps:
Handle Data
Summarize Data
Make Predictions
Evaluate Accuracy
Step 1: Handle Data
The first thing we need to do is load our data file. The data is in CSV format without a header line or any quotes. We can open the file with the open function and read the data lines using the reader function in the CSV module.
import csv
import math
import random

def loadCsv(filename):
    # Use the filename argument rather than a hard-coded local path
    lines = csv.reader(open(filename))
    dataset = list(lines)
    for i in range(len(dataset)):
        dataset[i] = [float(x) for x in dataset[i]]
    return dataset
Now we need to split the data into training and testing dataset.
def splitDataset(dataset, splitRatio):
    trainSize = int(len(dataset) * splitRatio)
    trainSet = []
    copy = list(dataset)
    while len(trainSet) < trainSize:
        index = random.randrange(len(copy))
        trainSet.append(copy.pop(index))
    return [trainSet, copy]
Step 2: Summarize the Data
The summary of the training data collected involves the mean and the standard deviation for each attribute, by class value. These are required when making predictions to calculate the probability of specific attribute values belonging to each class value.
We can break the preparation of this summary data down into the following sub-tasks:
Separate Data By Class
def separateByClass(dataset):
    separated = {}
    for i in range(len(dataset)):
        vector = dataset[i]
        if (vector[-1] not in separated):
            separated[vector[-1]] = []
        separated[vector[-1]].append(vector)
    return separated
Calculate Mean
def mean(numbers):
    return sum(numbers)/float(len(numbers))
Calculate Standard Deviation
def stdev(numbers):
    avg = mean(numbers)
    variance = sum([pow(x-avg, 2) for x in numbers])/float(len(numbers)-1)
    return math.sqrt(variance)
Summarize Dataset
def summarize(dataset):
    summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
    del summaries[-1]
    return summaries
Summarize Attributes By Class
def summarizeByClass(dataset):
    separated = separateByClass(dataset)
    summaries = {}
    for classValue, instances in separated.items():
        summaries[classValue] = summarize(instances)
    return summaries
Step 3: Making Predictions
We are now ready to make predictions using the summaries prepared from our training data. Making predictions involves calculating the probability that a given data instance belongs to each class, then selecting the class with the largest probability as the prediction. We need to perform the following tasks
Calculate Gaussian Probability Density Function
def calculateProbability(x, mean, stdev):
    exponent = math.exp(-(math.pow(x-mean, 2)/(2*math.pow(stdev, 2))))
    return (1/(math.sqrt(2*math.pi)*stdev))*exponent
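As a quick sanity check of this function (this snippet is illustrative and not part of the original tutorial), the density at x = mean with stdev = 1 should equal 1/sqrt(2*pi), roughly 0.399:

```python
import math

# Same Gaussian density function as in the tutorial
def calculateProbability(x, mean, stdev):
    exponent = math.exp(-(math.pow(x-mean, 2)/(2*math.pow(stdev, 2))))
    return (1/(math.sqrt(2*math.pi)*stdev))*exponent

# At x = mean with stdev = 1 the Gaussian peaks at 1/sqrt(2*pi)
peak = calculateProbability(71.5, 71.5, 1.0)
print(round(peak, 3))  # 0.399
```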
Calculate Class Probabilities
def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, stdev = classSummaries[i]
            x = inputVector[i]
            probabilities[classValue] *= calculateProbability(x, mean, stdev)
    return probabilities
Make a Prediction
def predict(summaries, inputVector):
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel
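Note that the main function further down calls a getPredictions helper that is never defined in this article. A minimal implementation simply maps predict over each row of the test set; the tutorial's predict chain is repeated here so the snippet runs standalone, and the demo summaries at the bottom are invented:

```python
import math

# Definitions from the tutorial, repeated so the snippet is self-contained
def calculateProbability(x, mean, stdev):
    exponent = math.exp(-(math.pow(x-mean, 2)/(2*math.pow(stdev, 2))))
    return (1/(math.sqrt(2*math.pi)*stdev))*exponent

def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, stdev = classSummaries[i]
            x = inputVector[i]
            probabilities[classValue] *= calculateProbability(x, mean, stdev)
    return probabilities

def predict(summaries, inputVector):
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

# The missing helper: one prediction per row of the test set
def getPredictions(summaries, testSet):
    predictions = []
    for i in range(len(testSet)):
        result = predict(summaries, testSet[i])
        predictions.append(result)
    return predictions

# Tiny demo: two classes summarized by (mean, stdev) of one attribute;
# the last element of each test row is the true label, ignored by predict
summaries = {0: [(1.0, 0.5)], 1: [(5.0, 0.5)]}
testSet = [[1.1, 0], [4.9, 1]]
print(getPredictions(summaries, testSet))  # [0, 1]
```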
Get Accuracy
def getAccuracy(testSet, predictions):
    correct = 0
    for x in range(len(testSet)):
        if testSet[x][-1] == predictions[x]:
            correct += 1
    return (correct/float(len(testSet)))*100.0
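As a quick illustration of getAccuracy (the test rows and predictions below are invented), it compares the class label stored in the last element of each test row against the corresponding prediction:

```python
# Same accuracy function as in the tutorial
def getAccuracy(testSet, predictions):
    correct = 0
    for x in range(len(testSet)):
        if testSet[x][-1] == predictions[x]:
            correct += 1
    return (correct/float(len(testSet)))*100.0

# Three test rows whose last element is the true class label
testSet = [[1.2, 3.4, 1], [0.8, 2.1, 0], [1.5, 3.0, 1]]
predictions = [1, 1, 1]  # two of the three are correct
print(getAccuracy(testSet, predictions))  # about 66.7
```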
Finally, we define our main function where we call all these methods we have defined, one by one to get the accuracy of the model we have created.
def main():
    filename = 'pima-indians-diabetes.data.csv'
    splitRatio = 0.67
    dataset = loadCsv(filename)
    trainingSet, testSet = splitDataset(dataset, splitRatio)
    print('Split {0} rows into train = {1} and test = {2} rows'.format(len(dataset), len(trainingSet), len(testSet)))
    # prepare model
    summaries = summarizeByClass(trainingSet)
    # test model
    predictions = getPredictions(summaries, testSet)
    accuracy = getAccuracy(testSet, predictions)
    print('Accuracy: {0}%'.format(accuracy))

main()
Output:
So here, as you can see, the accuracy of our model is 66%. This value differs from model to model, and also with the split ratio.
Now that we have seen the steps involved in the Naive Bayes classifier, note that Python has a library, scikit-learn (sklearn), which makes all the above-mentioned steps easy to implement and use. Let's continue our Naive Bayes tutorial and see how this can be implemented.
Naive Bayes with SKLEARN
Importing Libraries and Loading Datasets
from sklearn import datasets
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB
dataset = datasets.load_iris()
Creating our Naive Bayes Model using Sklearn
Here we have a GaussianNB() method that performs exactly the same functions as the code explained above
model = GaussianNB()
model.fit(dataset.data, dataset.target)

expected = dataset.target
predicted = model.predict(dataset.data)
Getting Accuracy and Statistics
Here we will create a classification report that contains the various statistics required to judge a model. After that, we will create a confusion matrix which will give us a clear idea of the Accuracy and the fitting of the model.
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
Classification Report:
Confusion Matrix:
As you can see all the hundreds of lines of code can be summarized into just a few lines of code with this powerful library.
So, with this, we come to the end of this Naive Bayes Tutorial Blog. I hope you enjoyed this blog. If you are reading this, Congratulations! You are no longer a newbie to Naive Bayes. Try out this simple example on your systems now.
If you wish to check out more articles on the market’s most trending technologies like Python, DevOps, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Data Science. | https://medium.com/edureka/naive-bayes-tutorial-80939835d5cb | ['Sahiti Kappagantula'] | 2020-09-28 13:16:22.362000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Naive Bayes', 'Classification Models'] |
The Science of Aromatherapy | If you’ve ever lit a scented candle to relax or soaked in a lavender bubble bath, you’re already familiar with the power that smell can hold over mood. Of all five senses, it’s the one most closely linked to emotion and memory — likely, scientists think, because the brain’s olfactory processing center is so close to the regions in charge of those other two functions. And there’s an extensive body of research suggesting smell can affect us physically, too: Lab studies have found lavender, for instance, to be effective in calming the nervous system, and neroli, a stimulant, has been shown to increase heart rate.
It makes sense, then, that aromatherapy — the use of scents, most often from essential oils, to enhance well-being — is having a moment right now. With the growing cultural conversation around self-care, people are increasingly incorporating smell into their routines to relax, to fall asleep, to focus, or to get themselves energized. Some estimates put the global market size for aromatherapy at $1.2 billion, and in the U.S., the essential-oil market has grown steadily over the last five years. The market is projected to keep growing at a steady pace through 2025, transforming these products from quaint home remedies to one of the most accessible, and affordable, tools for wellness. Unlike so many other practices in the space, filling your home with a specific scent doesn’t take much effort.
And there’s evidence to suggest it really can work, to an extent — especially, and perhaps only, if you want it to. The science of aromatherapy might best be understood as a study in the power of suggestion. It can also be a strong emotional trigger. Because smell is so tightly tied to memory, the effect of a given scent depends in large part on how you’ve experienced that scent in the past.
The research on aromatherapy is somewhat limited, and a 2012 review concluded that it hasn’t been proven to be a viable treatment for any diagnosable medical conditions. When it comes to improving people’s mental or emotional state, studies have yielded mixed results, but many of them point to belief as a key ingredient: If you believe a scent will make you more joyful, calm, or courageous, it likely will. In 2004, for example, Estelle Campenni, PsyD, a professor of psychology at Marywood University, found that our emotional response to essential oils actually has more to do with our beliefs than the oils themselves. Lavender oil, when she labeled it as relaxing, consistently slowed participants’ heart rates, while neroli, which she introduced to participants as a stimulant, had the reverse effect. When Campenni switched the labels, telling participants that lavender was a stimulant and neroli was a relaxant, their heart rates reacted accordingly.
Technically, the Food and Drug Administration bars essential oil companies from marketing or claiming specific health outcomes. But plenty of manufacturers foreground their products’ mental or emotional potential: For example, essential oil giant Young Living, which boasted $1.5 billion in sales in 2017, promotes “emotional well-being” with oils named for positive emotional experiences like Valor, Harmony, Joy, and Inner Child. And the Aroma Freedom Technique, created by clinical psychologist Benjamin Perkus, claims to help with the processing of trauma; by smelling essential oils in a therapy-like setting, Perkus argues, people can learn to disconnect painful memories from anxious responses.
Experts agree that claims like this are overkill: “Someone might smell a lily and remember their parent’s funeral and have an emotional experience — it happens all the time,” says Keith Humphreys, Ph.D, professor of psychiatry and behavioral sciences at Stanford University. “Sure, scent is linked to our memories. But that doesn’t mean it changes the brain.”
Still, while a scent-induced shift in mood is a far cry from forming new neural pathways, Campenni says aromatherapy can still be a useful mental health aid. “For me, the danger is that people are equating a change in the state of relaxation with a change in the experience of anxiety or depression. That’s a pretty big leap,” Campenni says. “But when it comes to changing one’s mood, using scent makes sense — augmenting psychotherapy to facilitate a change in state could be effective.” By choosing scents that most people regard as soothing or cheerful, therapists can use aromatherapy to create a warm, calming environment for their clients.
A practitioner’s authority can also help persuade clients to be receptive to these scents, Campenni notes; if a therapist, a trusted source, tells a patient that what they’re smelling may make them feel a certain way, the power of suggestion will be that much stronger in making it so.
Similarly, Humphreys says oils can be used as a kind of prop that adds a sense of gravity to the therapy session, prepping the client to believe that they’re being set up for success — much like how the trappings of a therapist’s office facilitate an environment of trust. “You could say the traditional psychiatrist’s office, with a leather couch and degrees hung on the wall, primes people for change in the same way,” Humphreys says.
Aromatherapy, then, is less a straightforward path than it is a choose-your-own adventure practice. And while it’s certainly not proven to be a mental health treatment in its own right, it can be a powerful ritual for the easing of lesser ailments, whether that’s at home or in a more structured setting. “Our minds are connected to our bodies, so what we believe about a scent can override its actual properties,” Campenni says. “Think about when you buy a candle that’s labeled ‘relaxing’ or ‘sensual.’ There’s always a suggestion that goes with it. That’s why placebos work — because you believe they’re going to.” | https://elemental.medium.com/the-science-of-aromatherapy-753717d5a4b3 | ['Ashley Abramson'] | 2019-04-25 14:37:37.791000+00:00 | ['Essential Oils', 'Scent', 'Aromatherapy', 'Trends', 'Science'] |
Revit Like It’s Sketchup | Paper Architecture Illustration
This is not a post telling you to throw out Sketchup and just use Revit. It is not even a post about the promising Autodesk FormIt, Dynamo, or Revit’s In-Place Mass tool — which is often the initial place people go if they are trying to do something conceptual in Revit. Even a Revit expert such as myself uses other software. There are just things different programs do better than others. So just as a graphic designer might use Photoshop, InDesign and Illustrator for different purposes, so do I use Revit, Sketchup, Grasshopper via Rhino, and hand sketching. It is not a knock on any particular software, but rather an acknowledgement of how well each one does its primary task. Revit is a powerful tool for documentation. As such, there are elements of it that can be very helpful in exploring the spatial properties of a design using controlled views such as sections and plans. It may not be as intuitive as Sketchup or a powerful modeler like Rhino, but with an understanding of what tools to use, it can help play a useful role in design development.
Please note that all these suggestions are based on the idea of using Revit as a sketchpad. USE A NEW, EMPTY MODEL FOR ALL THE THINGS SUGGESTED HERE, NOT YOUR PROJECT FILE! Just as you use trace paper to come up with new designs, in Revit you can create throwaway models to do studies. Use a separate model that you would be fine throwing away to undertake the strategies explained here.
1. Designing in Section and Plan
Revit sees everything through the lens of controlled views. What I like to use Revit for conceptually is better understanding the spatial qualities of a design. For all of Sketchup and Rhino’s capabilities as modelers, they are lacking when it comes to drawing up a plan or a section. Yes they have section tools and workarounds for showing plans and sections, but in the same way Revit is not intuitive as a form modeler, these programs have a disconnect when it comes to controlled views. Everyone has their own preferred process, but what I do is unscaled hand sketches in plan and section, and when I have gotten to a point where I need to understand how they really work (perhaps my ceiling is too low because I drew a figure slightly smaller than they would appear in real life) I bring them into a fresh Revit model and draw a couple walls and roofs and then draw a section. Using the Entourage Families in Revit is a great way to understand the human scale in section. I will often go back and forth between Revit and hand sketches for a while, printing out the views and drawing over them on paper, then going back to Revit when there is something I need to understand intuitively.
Funny story: I once had a distinguished Professor who hated using computers for design. He had worked for Louis Kahn in the 1960s. One day he came over to me as I was setting up a perspective in Sketchup. “You know Lou Kahn used Sketchup,” the Professor said. I was confused. “Back then it was called Interns,” he said, the point being that even the great Louis Kahn would have many of his drawings set up by someone (or in our case something) else before beginning to work on them.
2. Leveraging Sub-Elements
Alright, you say, the first point about plans and sections works well if you have a boring flat roof, but what if I’m trying to make something more dynamic and want to actually get into 3-D modeling? While many people are frustrated with the roof elements in Revit, if you make a basic flat roof and modify it using sub-elements you can get into some nice variations of slope and do it in a way that I find to be as intuitive as Rhino and Sketchup. If you understand the way you can push and pull edges and corner points in Sketchup, this will feel very familiar to you. In Revit the process goes like this: sketch and create a flat roof, modify sub elements, add points and split lines. Now you’re ready to start pushing and pulling things. When using this strategy, I like to set up a 3D view side by side with a Section view and then push and pull in 3D. In this way you can manipulate the model as in Sketchup, but with a very clear understanding of the spatial implications.
Another thing to be aware of as you are conceptualizing your design in Revit is the Attach Top/Base command. Using the attach command for all your walls or columns, you can seamlessly change the form of the roof without having to go back and remodel your walls. I find this to be a great little trick, and it really leverages Revit’s abilities to solve the issue of constant revisions during concept design that I often encounter in Rhino and Sketchup.
3. Model in Place Components
If you are still seeking greater complexity I have one more thing to share. A lot of people default to In-Place Mass and think this is where you should go to do massing studies in Revit. Sure it’s useful, but unless you are a very advanced user it is lacking in comparison to Sketchup. In another post we will go in-depth into advanced use of the In-Place Mass but if you are trying to keep it simple and fast I would suggest instead focus your conceptual studies on using Model In-Place Components, which you can specify to be masses or almost anything else. With Model In-Place Components you can leverage some really nice advanced modeling tools such as Sweeps, Blends and Revolves that are not available for In-Place Mass. You can also use those same tools as Voids, to create complex shapes from cutting things. Revit can actually create some pretty crazy geometry if you utilize Voids and understand their potential. When people model in Sketchup and Rhino, the emphasis is often on additive modeling, such as with a ball of clay where you add on to it to make a form. In Revit conceptual modeling is subtractive, like chiseling from stone; Solid forms are your stone, Void forms your chisel.
The best part about In-Place Components modeling in Revit is after you are done, you can pick faces from geometry just like with In-Place Massing and reference that geometry to make functioning Revit walls, roofs and floors. Once again, a big benefit of doing this in Revit versus Sketchup or Rhino is the ease in understanding your concept through controlled views. Let’s say you have a hallway you have sketched by hand that conceptually goes in a curve in plan, and the ceiling slowly ascends from 8' to 12' at the end. You can probably figure out how to quickly model it in Rhino and Sketchup, but what you really want to do is study how that feels every five feet using a section. This is a great use of the Revit environment to understand your design better and allows you to do so quickly in a way Sketchup and Rhino can’t easily do. | https://medium.com/paper-architecture/revit-like-its-sketchup-40621b9be66c | ['Dan Edleson'] | 2019-05-23 06:28:38.087000+00:00 | ['Sketchup', 'Architecture', 'Bim', 'Design', 'Revit'] |
EuPC and Circularise Plastics collaborate to further develop the digital platform to monitor the rate of the plastics recycling activities in Europe | EuPC and Circularise Plastics collaborate to further develop the digital platform to monitor the rate of the plastics recycling activities in Europe Circularise Follow Oct 8 · 5 min read
EuPC, Circularise, Covestro and Domo Chemicals aim to bring together stakeholders that can contribute to the development of the MORE and Circularise platforms
Signatories will cooperate in the development of an improved digital platform to monitor the rate of the plastics recycling activities in Europe using blockchain technology
In the coming months, the aim is to test this digital platform with interested parties throughout the plastics value chain
Brussels/ Ghent/ The Hague/Leverkusen, October 8, 2020 — EuPC, the European Plastics Converters Association, partnered with the Circularise Plastics Group, currently composed of its members Covestro, Domo Chemicals and Circularise, to cooperate in the development of the tool to monitor the use of recycled plastics by converters in Europe.
In the context of the EU Plastics Strategy, the European Commission has launched a pledge to increase the use of recycled content to 10 million tons by 2025. To address this, Circularise Plastics Group launched an “Open Standard for Sustainability and Transparency” based on blockchain technology & Zero-knowledge Proofs. While EuPC set up a tool to collect the use of recyclates by converters, against their production.
The tool is called MORE, Monitoring Recyclates for Europe. MORE is meant to ensure that converters report their use of recyclates faithfully and consistently, through a common approach, by feeding the surveys contained in MORE. The volumes reported in MORE are steadily growing every month and new plastic converting companies will join the system in the years to come.
“With the development of the MORE platform, EuPC emphasised the importance of creating an efficient infrastructure for monitoring the flow of data about the use of recycled plastics. We are happy to support its development with our technology and are looking forward to this collaboration”, says Jordi de Vos, Circularise’s co-founder.
The EuPC strategy aligns with the Circularise’s mission to enable transparency of material flows while safeguarding data privacy and confidentiality. As a part of the broader objective of this collaboration, Signatories specifically aim to facilitate the development of a digital platform to monitor the rate of plastics recycling activities in Europe and test how the “Open Standard for Sustainability and Transparency” can be applied for this.
“With the rise of new technologies, such as blockchain, we look to incorporate them in our monitoring system to ensure that the MORE platform is future-proof. Circularise has the technology and vision that aligns with ours and we look forward to cooperating on the development of the recyclates monitoring system”, says EuPC Managing Director Alexandre Dangis.
The organisations behind this cooperation acknowledge that it is an important element of a broader scope to create a standard for the industry: a secure, open-source, shared data exchange system for the global value chains to enable traceability of materials throughout the lifecycle across all stakeholders and value-chains.
“Increased recycling starts with the correct monitoring and reporting practices. We believe that the Open Standard for Sustainability and Transparency can enable that and help the plastics industry transition towards a more circular economy”, says Thomas Nuyts, Director of Global Product Management at Domo Chemicals. “The collaboration between Circularise Plastics group and EuPC has the potential to provide a remote yet detailed monitoring and audit opportunity for companies. The system could enable a more trustworthy and less expensive way to become and remain qualified, while providing the necessary speed and flexibility”, says Dr. Burkhard Zimmermann, Head of Resin, Digital Transformation & Sustainability, Covestro Polycarbonates.
EuPC, Circularise, Covestro and Domo Chemicals aim to bring together stakeholders that can contribute to the development of the MORE and Circularise platforms. In the context of this cooperation they aim to test a prototype with a list of target companies in the coming months.
About EuPC:
Created in 1989 and based in Brussels, EuPC is the EU-level trade association of European plastics converters. With four divisions in Packaging, Building & Construction, Automotive & Transport and Technical Parts, EuPC represents the different markets of the plastic converting industry. EuPC’s aim is to contribute to an open and fair- trading environment for plastics converters in Europe. The focus is on market development, regulation, issue management and trade. www.plasticsconverters.eu & www.moreplatform.eu
About Circularise:
Circularise founded in 2016 and based in The Netherlands helps plastic manufacturers, brands and OEMs to trace raw materials from source, into parts and ultimately to end products. The company uses blockchain and other emerging technologies to enable companies share data about their products while retaining privacy over sensitive information. www.circularise.com
About DOMO Chemicals:
DOMO Chemicals is a leading producer of high-quality engineering nylon materials for a diverse range of markets, including the automotive, food, medical, pharmaceutical, chemicals and electronics industries. The company offers a complete portfolio of integrated nylon 6 and 66 products, including intermediates, resins, engineering plastics, performance fibres, packaging film and distribution of petrochemical products. Headquartered in Germany, the family-owned company leverages advanced technology and consumer insights to deliver sustainable & innovative solutions. DOMO generated sales of over EUR 900 million in 2019 and, as of 2020, employs approximately 2,200 people worldwide. www.domochemicals.com
About Covestro:
With 2019 sales of EUR 12.4 billion, Covestro is among the world’s largest polymer companies. Business activities are focused on the manufacture of high-tech polymer materials and the development of innovative solutions for products used in many areas of daily life. The main segments served are the automotive, construction, wood processing and furniture, and electrical and electronics industries. Other sectors include sports and leisure, cosmetics, health and the chemical industry itself. Covestro has 30 production sites worldwide and employs approximately 17,200 people (calculated as full-time equivalents) at the end of 2019.
This press release is available for download from the Covestro press server at www.covestro.com
Forward-looking statements
This news release may contain forward-looking statements based on current assumptions and forecasts made by Circularise, Domo and Covestro AG. Various known and unknown risks, uncertainties and other factors could lead to differences between the actual future results, financial situation, development or performance of the company and the estimates given here. The companies assume no liability whatsoever to update these forward- looking statements or to conform them to future events or developments. | https://medium.com/circularise/eupc-and-circularise-plastics-collaborate-to-further-develop-the-digital-platform-to-monitor-the-b59914f4f1e | [] | 2020-10-08 10:25:33.787000+00:00 | ['Recycling', 'Partnerships', 'Circulareconomy', 'Startup', 'Plastic Pollution'] |
Battle Royale with Cheese (Headphones Edition): Beats Studio 3 Vs JLab Studio Pro | Battle Royale with Cheese (Headphones Edition): Beats Studio 3 Vs JLab Studio Pro
Top of the line vs. bargain bin
How much money do you need to spend on headphones?
That’s a question I found myself asking on Black Friday, as I skimmed the deals on Best Buy’s website. After finally, finally settling on which devices I wanted to use on a day-to-day basis, I got back to the original tech “splurge” that had me frequenting Best Buy at the beginning of the year (back in the good ol’ days, when hand sanitizer was not something I had to think about constantly).
Before I got into my indecisive phase with computers that has fueled the majority of my tech stories this year, headphones were the thing I was busy comparing; namely, I was looking for over-the-ear (or perhaps on-ear) headphones that I could use for hours at a time while working without the fatigue (or Tinnitus flare-ups) that I got from in-ear buds. And I looked at a plethora of headphones, from the cheapest JLab headphones to the more expensive options from Beats, JBL, and others.
But a year later, when I finally came back around to wanting some over-ear headphones, only one brand came to mind again: Beats.
This wasn’t because they were the best on the market; I knew that headphones from Bose and Sony would likely have better sound than Beats, which tend to favor bass a little more than some reviewers like. But being an Apple brand, they came with some unique benefits for someone who has gone all-in with Apple products as I have. And I remembered that they were quite comfortable.
But I can never, ever be satisfied with buying one product and sticking with it. If you’ve read my other tech articles, you’ll know that my superpower is massive indecisiveness; I tried out a whopping 10 computers before deciding that the new MacBook Air with M1 was right for me, and I tried 6 different tablets before determining that the diminutive iPad Mini was all I needed in that space. So, naturally, I was gonna try a few different options before I settled down with one.
In fact, even before we get to the Beats that I’m comparing in this story, I tried out the Beats Solo Pro, which Best Buy had on sale for $169 at the time. And I loved everything about them, from the color to the USB-C to the noise canceling. But I didn’t love how they made my ears feel like they were being pressed into my skull after a few minutes; in fact, when I took them off after an hour or so, my ears were in tremendous pain (I do wear glasses, which may be partly to blame as well). So this comparison will feature, primarily, the older Beats Studio 3 over-ear headphones, which still retail for $349 (but were on sale at Target for $175 when I got them).
I’ve always had a soft spot for JLab, though. Their products are just so damn cheap, while still being good quality; I bought the JLab Go Air headphones for $30 and I never use them at all (I usually go for my AirPods when I want true-wireless portability), but I love that I have them just in case I need them. I have JLab’s retro Bluetooth headphones ($20) because they make me feel like Starlord. And I have a pair of JLab’s Studio on-ear headphones (which were also only $30) that I like to throw into my bag for trips.
JLab hasn’t done that many over-ear headphones, however. Last time, I tried what was really their only major offering- their $100 Flex Sport headphones- but they just felt cheap by comparison to their other products. And even though I bought a pair of their Omni headphones (which they appear not to make anymore), I rarely use them, as I found the controls a little finicky and the ear-cups a little too big.
But lo and behold, when I decided to go look at their website, I found that they now offered what they call Studio Pro headphones- a larger, over-ear version of the regular Studio headphones, with ear-cups that are large enough to cover your ears but small enough to still be portable (they retain the ear-shaped design of the Flex Sport while replacing the uncomfortable fabric with a more plush, leathery material). And they were $40. It was everything I wanted at a price I could easily stomach (especially since I’d just purchased the Beats).
So, the only real question I had was this: which was better? Were the Beats worth the extra $310 (or extra $135, if you get them on sale)? There’s only one way to find out.
FIGHT!!!!
Build Quality
Let’s start with the obvious: the design.
Of course, there’s only so much you can do with the design of over-ear headphones, and, well, Beats did it with the Studio 3. There might be a reason Apple hasn’t bothered to update these in 3 years. The matte, soft-touch plastic feels sturdy and premium. The cushions are thick and soft. The controls are nondescript, with buttons built into the Beats logo and the surrounding ring on the left ear-cup, which looks identical to its counterpart on the right (although I, like other reviewers, do wish that this ring worked more like the scroll ring on the old iPods). Tiny, white LED lights indicate the battery life and whether ANC is turned on.
My gripes with the design of the Beats are few, but they are, unfortunately, present. The power button, to me, sticks out like a sore thumb compared to the hidden play/pause and volume controls, and yet it is so small that it is difficult to press. I find myself having to try the double-press to enable/disable ANC multiple times to get the function to work, and a couple of times I’ve accidentally turned off the headphones rather than achieving the settings change. I’m also not the biggest fan of the folding function, but that is entirely personal; the first time I unfolded the Beats Solo Pro (which have a very similar folding mechanism), I horribly pinched my hand in the mechanism and it was painful as all hell. Given that they fold right in the place where I am prone to grabbing headphones, I feel like this is going to happen again. And the headband, while feeling entirely premium, does also feel rather fragile; I’ve seen plenty of Beats that were snapped in half hanging on the displays at Best Buy in the past, and they definitely feel like one wrong move will irreparably damage them (but you can get Apple Care for them, so…).
And while I love that the Beats include a wire to use with a headphone jack, they still need battery power to use this, which I think is dumb; I used the JLab Omni headphones for weeks with a dead battery and a 3.5mm cable before I finally decided to charge them. And speaking of charging, well, I know I said Apple hasn’t updated the design of these things in 3 years, but they could at least have released a refreshed version that used USB-C or even Lightning instead of the supremely outdated Micro USB. Seriously, I hate the idea of having to carry a separate cable for these headphones.
With design, more often than not, you get what you pay for. That said, while the JLab Studio Pro headphones definitely look cheaper than the Beats, they don’t look bad at all. As mentioned earlier, the ear-cups have a more tapered design, more closely mimicking the shape of your ears, and there’s a nice blue liner inside the cups for accent. The cushions aren’t as thick, but honestly, wearing them feels more comfortable than the Beats, which have a tendency to feel like they are squeezing my head a bit (though nowhere near as unbearably as their Solo Pro cousins). The power and volume controls are located on the rear of the right ear-cup, and while definitely more noticeable, they are somewhat easier to find with your finger (I often forget which side of the Beats houses the controls, and since I’m right-handed, I instinctively try to press the right Beats logo to play/pause, which does absolutely nothing).
The area where you’ll really notice that the JLab headphones are a cheaper product is how the ear-cups are connected to the headband. A thin, metal wire holds each cup and serves as the extension for fitting them on your head. It feels sturdy, but I can imagine that the wire could get pulled out or misshapen in a bag if you aren’t careful. And unlike the Beats (and other JLab headphones), these don’t come with any sort of carrying bag or case to protect them (though I’ll never use the one that comes with the Beats Studio 3; it is bulky as hell). But for me, this wire design has one benefit: since it juts out over the folding mechanism, I don’t think I’ll ever have to worry about pinching my hand. As for the headband, while its padding is thinner than the Beats’, it feels a bit sturdier; I don’t exactly know what is inside, but I’d wager it would withstand being bent out of shape better than the Beats.
Speaking of wires, though, the cables that connect the ear-cups to provide power are exposed and just kind of float between the ear-cups and headband, where the cable for the Beats is flush inside the headband. Unlike the JLab Omni, the Studio Pro don’t have an adapter cable for use with the ever-disappearing headphone jack, but unlike the Beats, they charge with USB-C.
It would be easy to say the Beats Studio 3 are the winners here because they do look nicer than the JLab Studio Pro. But I don’t think they look so much better as to justify the way higher price tag. I think that despite looking a little cheaper, the JLab Studio Pro look damn good for $40 headphones, and they feel good, too. And honestly, if I had to pay $40 again in a year to replace them because the wire connecting the ear-cups to the headband got bent or broken, that would still be cheaper than buying the Beats Studio 3 once.
The JLab Studio Pro headphones come in matte black, while the Beats Studio 3 come in a variety of colors, including matte black, black or white with gold accents, red, blue, and a few other exclusives depending on where you buy them, which is nice if you want your headphones to stand out (for my comparison, I am looking at the matte black).
Winner: JLab Studio Pro. The Beats are nicer, but as I said, I don’t think they are $310 nicer. $40 gets you some damn nice-looking headphones from JLab, and that’s nothing to shake a stick at.
Sound Quality
Beats has, for a long time, thrived on their sound profile. They can be a little bass-heavy for some reviewers, but for me, they sound excellent.
End of review.
Just kidding. I love how the Beats sound; it is the primary reason I revisited them a year after first trying them out. In fact, I made a playlist on Apple Music called “Fantastic Beats and Where to Find Them” that is filled with songs that I think sound just perfect on the Beats Studio 3 (and others, like the Solo Pro and even the wired Beats EP).
But the sound profile was only half of the reason I wanted to give the Studio 3 (and the Solo Pro before them) a shot; the other was ANC.
2020 has been a bitch of a year, and one of the side effects is that I’m now working from home within earshot of my wife and our television. Most days, I can focus on work, but lately, I had wanted to find some noise-canceling headphones to help me pay attention to what I’m working on (I’m not saying my wife is loud… but the TV can be). Up until now, I’ve used Apple’s AirPods Pro or a pair of Sony in-ear headphones that have passive noise canceling. The problem there was that in-ear headphones began to hurt after an hour or two, or the AirPods Pro would fall out (I had to get some third-party ear-tips to fix this), or the headphones just wouldn’t last the full 8 hours I needed them for. I found myself constantly taking them out and not using them. So I knew I needed something that was on-ear or over-the-ear. I preferred the latter because I find over-the-ear to be more comfortable for long periods of time.
Since the Beats Studio 3 have Active Noise Cancelling, just like the AirPods Pro, I decided that I needed to give this a shot. And well… it works. Kind of. When sitting in the same room with the TV on at our normal volume, it reduces- but doesn’t eliminate- the TV sound. This means that whatever I’m listening to can generally be played at a lower volume, which is better for my ears and better for focusing on work rather than my music. The ANC on the Beats isn’t quite as good as the ANC on the AirPods Pro or other headphones on the market, but it is good enough for my needs, and the difference is definitely noticeable when you turn it off.
It isn’t all good news regarding ANC, at least not for me. When I first got the AirPods Pro, I found that ANC was giving me headaches after using it for an hour or two, and while Apple seems to have fixed this a little bit with the AirPods, I’ve found that this is present with the Beats as well. It doesn’t happen all the time, and it is a very minor headache, but it is noticeable, going down into my neck and making my head feel like it is ever so slightly in a vise (and one that doesn’t go away immediately after I take off the headphones). I chalk this up to my tinnitus more than an issue with the ANC itself, but I know I’m not the only one who has complained about headaches with ANC, so it is worth bringing up if you have had similar experiences with other ANC headphones.
The JLab Studio Pro, simply put, do not have ANC. And I’m not bummed by that. I tried JLab’s Studio ANC headphones last year and I found that they were so-so; the ANC worked decently well, but it sacrificed the quality of the sound coming from the headphones themselves. And it was nowhere near as effective as the ANC on the Beats. That said, even though the Studio Pro headphones are only passively canceling noise (and they don’t passively cancel a whole lot because they aren’t quite as snug as the Beats), once I have something playing, I can only barely hear the TV in the background, and I can focus on my work just about as easily as I can with the Beats with the ANC turned on.
Out of the box, the JLab headphones don’t sound quite as good as the Beats. But JLab has a few sound profiles that you can switch to- JLab Signature, Balanced, and Bass Boost. When comparing them to the Beats Studio 3, I prefer Bass Boost, and I honestly can’t really tell any major difference between them on this profile. Maybe your ears are more discerning than mine, and maybe, maybe the Beats sound a hair better. But- and this is turning into a recurring theme with this comparison- they definitely don’t sound $310 better. Not by the longest of longshots. I don’t know what voodoo JLab does to get sound this good out of $40 headphones, but they need to keep up the good work. JLab also has an app that features a Burn-In tool to help tune your headphones for even better sound (though I suppose you could also use this app with the Beats if you wanted to).
It is worth noting that the Beats do get louder than the JLab Studio Pro; in order to achieve the same sound that I got out of the Beats at around 50% loudness on my iPhone, I had to have the Studio Pro volume up to around 65–70%. If you want the max volume out of your headphones, the Beats are going to deliver it better, but for the health of your eardrums you really shouldn’t listen to any pair of headphones at max volume for too long; to reduce the risk of hearing loss, you shouldn’t go above 80 decibels for more than 40 hours a week. According to the Apple Health app, at around 50% volume, the Beats were delivering around 71 decibels, and at max volume, I was able to get it to 87 decibels (my volume settings on my iPhone are set to max out at 90 decibels).
I was unable to get the JLab headphones to report to the Apple Health app (more on that in a moment), so I had to get tricky; I played the same song at the equivalent volume (well, the 65–70% to achieve the same loudness) next to my Apple Watch, and received a similar 73 decibels (this may not have been as accurate as the Apple Health app, but I did the same thing with the Beats as a control to make sure my watch was providing similar results to what Apple Health was recording). Likewise, I was able to cap it at 87 decibels with my current settings. Sidebar: If you have an iPhone and hearing loss is a concern of yours, you can turn on alerts in the Health app to notify you when you’ve been listening to loud music for too long.
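To put that 80 dB / 40-hour guideline in perspective, here’s a rough back-of-the-envelope sketch. Note the assumption: the 3 dB “halving” exchange rate (allowed time halves for every 3 dB above the baseline) is borrowed from equal-energy hearing-safety rules, not something the headphones or the Health app report directly.

```python
# Rough weekly safe-listening budget based on the 80 dB / 40 h guideline.
# Assumption: a 3 dB exchange rate, i.e. allowed listening time halves
# for every 3 dB above the baseline (equal-energy rule).

def allowed_hours_per_week(level_db: float, base_db: float = 80.0,
                           base_hours: float = 40.0,
                           exchange_rate_db: float = 3.0) -> float:
    """Halve the allowed weekly listening time for every +3 dB over base."""
    return base_hours / (2 ** ((level_db - base_db) / exchange_rate_db))

for level in (71, 80, 87):
    print(f"{level} dB -> {allowed_hours_per_week(level):.1f} h/week")
```

By this rule, the roughly 71 dB I measured at 50% volume is comfortably within budget, while sustained listening at the 87 dB max would use up the whole weekly allowance in about 8 hours.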
The final thing I want to talk about with these headphones regarding sound quality is their connectivity. Both use Bluetooth; however, if you are using Apple products, the Beats definitely have a leg up with Apple’s W1 chip. This is the older chip, so it doesn’t work with Apple’s new smarts that will automatically transfer your H1-equipped Apple headphones to the device you are currently listening on (this seems to be a little half-baked anyway), but it does mean that once you’ve paired them with one Apple device, they are connected to your Apple ID and can therefore be instantly paired to all of your other devices. While I was able to pair the JLab to my MacBook, iPad, and iPhone, moving them back and forth was more of a hassle; I’d first have to disconnect them from the device they were currently connected to in order to connect them to another. Honestly, that isn’t so bad, but I’m admittedly spoiled by how simply the Beats and my AirPods switch between my devices.
Additionally, I’ve had a few instances where there’s about a second of lag when using the JLab headphones to watch videos. So far I’ve only noticed this when using the Hulu web app in Safari, and it is intermittent and can be resolved by pairing the headphones again. This could be a problem with either Hulu or the new M1 MacBook as well; I’ve had other issues with Hulu recently (it likes to skip to halfway through the next episode of M*A*S*H while I’m in the middle of an episode) and I’ve heard reports of Bluetooth issues with the M1 MacBook Air (in fact, both the JLab and Beats have experienced disconnection issues when in use with the MacBook). Long story long, I’m not gonna hold this against the JLab headphones.
Winner: This is a tougher choice than I thought it would be. I mean, the Beats are some of the best headphones on the market, right? But I can’t knock how good the JLab headphones sound at a fraction of a fraction of the price. If my biggest gripe was swapping the headphones between my devices, it would still be cheaper to buy a pair of the JLab headphones for each device I use rather than to buy one pair of the Beats. With all that in mind, the only thing that I think the Beats truly wins out with is ANC, because it does reduce the volume level that I need to drown out the TV while I’m working. I’m going to give this one to the Beats Studio 3, but just barely, and really only if ANC is going to make a major difference to you. If not, save your money, buy the JLab headphones, and don’t ever look back.
Bonus Features
Ok, so some of this will be a rehash of things we’ve touched on before, but I want to be thorough, and I feel like some of these features deserve more of a call out than what I’ve given them so far.
Granted, this category is going to be heavily Beats-focused, as they have more bells and whistles. In fact, the biggest whistle is the W1 chip- that is, if you use Apple products.
I’ve already talked about ANC, which the JLab do not have, and I’ve talked about how the W1 chip makes switching between devices a breeze, and while it is a step behind the H1 chip in newer headphones from Apple, it is still two steps above anything else (again, only if you use Apple products). It provides a more stable connection with the devices, too, allowing me to travel just about anywhere in my house without having to worry about walls interfering with my signal.
That said, the JLab Studio Pro come with Bluetooth 5, which allows for greater distance between headphones and device as well, and this works with all devices, not just ones with the Apple logo. I was able to get the same distance from my iPad with the JLab headphones as I was with the Beats. I can’t for the life of me find anything that says whether the Beats Studio 3 have Bluetooth 4 or 5, but for the price, I would hope they have 5.
But the special Apple bonuses don’t stop there. I love that the Beats Studio 3 will report to Apple’s Health app regarding decibel levels, making them another point of contact to get a bigger picture of your overall health with Apple hardware and software. Third-party headphones are supposed to be able to report decibel levels as well, and in the past, I’ve had other brands provide this data, but I’ve yet to get the JLab Studio Pro to do it. Luckily, other features regarding hearing health are built into the iPhone itself and not the Beats- such as capping the decibel level or alerting you if the volume has been too high for too long- so the JLab headphones should be able to take advantage of these as well.
I’ve mentioned that the Beats come with a wired adapter for use with a headphone jack (if you’ve got one), and that’s a nifty feature. The included wire has in-line controls, though I’m not entirely sure why; when you plug in the headphones, the controls on the side of the Beats are disabled, but this seems like an unnecessary complication- I figure Apple could just as easily have omitted the in-line controls and left the controls on the headphones powered on. I mean, the headphones- including the ANC option- stay on while you are using the wire, so it seems odd that you would have separate controls for wired operation versus how you normally use them when they are wireless.
It would make sense if the headphones were powered off while they were plugged in, but as I’ve mentioned before (and I’ll mention again in a minute), the headphones for some reason need battery life in order to use the wire. Every other pair of Bluetooth headphones I’ve used that has a wired option can use the wire without any power whatsoever in the headphones themselves. Granted, ANC definitely won’t work without the power, but I figured the Beats should at least be able to play sound while out of juice. And with Apple quickly killing the headphone jack on most of their products (only MacBooks and the basic iPad and iPad Mini still have them), I find this inclusion more baffling than anything else. And I don’t expect to ever actually use the wire. I suppose maybe, maybe, if you were planning to use the Beats with a stereo system or an old computer that doesn’t have Bluetooth, it might make sense… but in that case, there are better- and cheaper- wired headphones to get for those specific use cases. This just feels half-assed, which I think is odd for Apple.
Ok, so like I said, this one is very heavily focused on the Beats. That’s because outside of being a quality pair of Bluetooth headphones, the JLab Studio Pro headphones simply don’t have a lot of extra things to do. And for $40, that’s fine. Costing only $40 is actually a pretty nice perk all on its own.
The only other bonus feature I can really talk about with the JLab headphones is the three different sound profiles that come built-in. I mentioned that the Bass Boost profile gives you the closest sound to the bass-heavy Beats, but if that’s too much bass for your comfort, the JLab Signature profile is a clear second best. I’d avoid “Balanced” if at all possible, as it seems to flatten everything to “balance” it out. The Balanced profile is the only one in which I think these headphones actually sound like they are only forty bucks.
Winner: So the Beats Studio 3 clearly have more bells and whistles, and so by default, they’ll win this category. But I really think you should consider whether those extra features are something you’ll use. If you don’t need ANC, if you don’t need (or use) the Apple Health features, and especially if you don’t have Apple devices to take advantage of the W1 chip, then you really shouldn’t consider the Beats. The JLab Studio Pro will suit you just fine.
Charging and Battery Life
We’re in the home stretch. We’ve talked about what the headphones look like. We’ve talked about what they sound like. All that’s left is how long you’ll be able to listen to them.
Apple rates the Beats Studio 3 for up to 22 hours with ANC or up to 40 without it. That’s quite respectable, and with ANC I typically got about two days of mixed usage before I had to charge them again. Charging is relatively quick, too. 2 hours will get you a full battery from dead, but just 10 minutes can get you up to 3 hours of use in a pinch.
As previously mentioned, however, that charging comes via a Micro USB cable that comes included in the box. Grrrrrrrrrrr. I suppose it was forgivable in 2017 that these headphones used Micro USB, but now that nearly everything but the cheapest Android phones use USB-C (or Lightning, in Apple’s case), I loathe that Apple hasn’t at least given the Studio 3 Beats the minor upgrade of a USB-C or Lightning port (for reference, the newer Solo Pro charge with Lightning). I’m not asking for a complete redesign like the Solo Pro- although being 3 years old, the Studio line is probably due a big refresh soon- but they could have at least brought the charging mechanism into the modern age.
Enough griping. Apple at least supplies you with a generously long Micro USB cable (though no wall adapter), which is more than we can say for JLab (getting there). As mentioned earlier, the Beats do also come with a cable for using the headphones with a headphone jack (if you even have a device that has a headphone jack), but this cable will not work without battery life in the Beats, so don’t think you’ll be able to use these when they are completely dead, unlike nearly every other pair of headphones that comes with a headphone jack cable. Ok… now enough griping.
So, onto the JLab Studio Pro. JLab rates the battery life at a massive 50 hours. Simply put, I’ve not had to worry about charging them at all since they arrived. JLab says that they will take longer to charge, however: 3 hours to get from zero to full, and 10 minutes will get you only an hour of use.
In the box, JLab supplies the dinkiest little USB-C cable (and again, no wall adapter). It looks well-made, but it is super short. This doesn’t really matter if you have literally anything else that uses a USB-C charger, though, and I suspect JLab knows this. Whenever I do eventually need to charge these headphones, I’ll be using my MacBook Air charger to do it. And I absolutely love that convenience (ahem… Apple).
One thing I’ve noticed regarding battery life is stand-by time. I had this problem with the BeatsX in-ear headphones, and it seems to be prominent with the Beats Studio 3 as well: stand-by time sucks. I’m getting around 20 hours of life from them whether I’m using them or not, and I’m having to charge them up daily. Throughout my time with both pairs of headphones, I’ve used the JLab Studio Pro more often and I’ve only charged them once; they’ve been sitting idle- but powered on- for the last two days and they still have around 70% battery life (this percentage is according to Apple’s battery widget; the headphones themselves will only tell you if the battery is “full”, “medium”, or “low”). Likewise, I charged the Beats to 100% last night, used them for maybe an hour at most, and then left them sitting on my desk powered on. Not only did they power themselves off at some point (which is nice, if it conserved any battery life), but about 12 hours later they had already dropped to 52% (in the hour or so I spent finishing this story since writing that sentence, it dropped to 40% with zero use at all). I imagine that this is due to the ANC; whenever I pick up the Beats, even when I don’t have anything playing on them, I can tell the ANC is on, and that has to be draining power.
Personally, I am one who forgets to turn off my headphones all the time, so the fact that the Beats don’t seem to have any sort of low-power mode to power off ANC and keep them from draining too much when not in use really is a bit of a deal-breaker for me. Again, this probably can be blamed on the fact that these headphones haven’t been updated in a few years; the newer Beats Solo Pro do feature both a low-power mode when they aren’t being used and an automatic power-off when you fold the headphones up, so I imagine a forthcoming Beats Studio 4 or Studio Pro or whatever comes next for Apple’s over-the-ear headphones will have some of these newer bells and whistles.
Winner: With longer battery life and USB-C charging, JLab’s Studio Pro headphones clearly win this one. The Beats will charge faster when you do charge them, but you’re going to be charging the JLabs far less often. And the JLab Studio Pro are definitely going to live longer if you forget to turn them off.
$40 Vs. $349
So, did the Beats Studio 3 make an argument for their higher price tag?
Honestly, I don’t think so. Definitely not in 2020, and definitely not if you are paying full price for them; even at the $175 sale price I got them for, it is a hard sell with the $40 JLab headphones being almost as good- if not better- in every category.
The biggest benefit to them is the ANC, but for that price (either the sale one or the regular price), you could get the Beats Solo Pro headphones that come with Apple’s newer H1 chip and have some good battery improvements, like charging with Lightning and both a low-power mode and easy power off for forgetful people like me who don’t turn off their headphones. The only reason those weren’t the ones I reviewed here is because they hurt my ears after a while, and unlike the JLab Studio Pro, they were on-ear headphones.
Over the duration of this review, even though ANC proved useful while I was working, I found myself reaching for the JLab Studio Pro way more often, as they were just more comfortable, sounded very nearly as good, and frankly I knew that they’d have the juice to get me through work no matter what.
I think the Beats Studio 3 are excellent headphones. But with $40 headphones being this good, they are not worth the price, no matter how good the sale is. | https://medium.com/the-shadow/battle-royale-with-cheese-headphones-edition-beats-studio-3-vs-jlab-studio-pro-6a157f07d566 | ['Joshua Beck'] | 2020-12-13 18:23:33.176000+00:00 | ['Technology', 'Gadgets', 'Headphones', 'Tech', 'Apple'] |
Why is Everyone Going to Iceland? | I try to understand the rise in tourism in Iceland with public data sources.
Halló aftur! (Hello again in Icelandic). In my free time, I enjoy watching videos on YouTube on topics ranging from photography and cinematography to soccer and comedy. I recently started following Johnny Harris’ channel after watching his video on building a dream studio and stumbled upon his analysis of why tourism in Iceland has risen dramatically (link below). I’ve always wanted to visit Iceland with my photographer friend Pablo because of the country’s beautiful scenery.
In this post, I decided to try to decipher why Iceland has such a meteoric rise in tourism using public data sources. The video itself does a good job explaining the sudden spike/surge of tourism and gives a few good reasons. My point here is not to refute his arguments but rather use data to see what I learn from it. I find public data sources, clean the data, visualize it, and make predictions with Facebook Prophet.
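Before reaching for Prophet, a quick stdlib-only sanity check of the overall trend can be done with a least-squares line. Note: the visitor counts below are illustrative placeholders, not the official Keflavik figures; substitute the real downloaded dataset.

```python
# Stdlib-only sanity check of the tourism trend before reaching for Prophet.
# The visitor counts are made-up placeholders, not the real Keflavik data.

years = [2010, 2012, 2014, 2016]
visitors = [459_000, 646_000, 969_000, 1_767_000]

# Ordinary least-squares fit of visitors = slope * year + intercept.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(visitors) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, visitors)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

predicted_2018 = slope * 2018 + intercept
print(f"trend: +{slope:,.0f} visitors/year, naive 2018 estimate: {predicted_2018:,.0f}")
```

With the real dataset, Prophet adds what this sketch cannot: seasonality, changepoints, and uncertainty intervals around the forecast.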
Data Sources
For my investigation, I started just as I would back in college when writing a research paper: with Google searches. I was hunting for relevant, public datasets that I could download and analyze. I settled on three main datasets, linked here:
Visitors to Iceland Through Keflavik Airport: this is the main hub for international transportation. I didn’t find stats for all visitors from all transportation hubs, but I figured this would be a good sample from the entire population.
Redshift with Rockset: High performance queries for operational analytics | In this blog, I will show how to enable high performance queries for interactive analytics on Redshift using Rockset. I will walk through steps for setting up an integration between Rockset and a Redshift table and run millisecond-latency SQL on it, for powering an interactive Tableau dashboard.
Data warehouse services like Amazon Redshift are ideal for running complex queries for low concurrency workloads. They can easily scale to petabytes of data and are great for running business reports. Now suppose an organization wants to operationalize the data that’s in Redshift, in the form of an interactive dashboard that allows users to interactively query data in Redshift. There are two challenges:
Such interactive dashboards demand millisecond query latency for ad hoc queries, which is not typically supported by Redshift. And if the dashboard is used by tens of users simultaneously, Redshift cannot support this level of concurrent queries, since it’s not built for high QPS.
To solve this, we can connect Rockset to Redshift and have the operational dashboard issue queries against Rockset instead of Redshift. With Rockset, you can continuously import your data sitting in Amazon Redshift clusters without any ETL, run fast SQL and perform operational analytics without worrying about capacity planning, cluster sizing or performance tuning.
Redshift Integration
Each Amazon Redshift cluster can have multiple databases, schemas, and tables, and each table requires a data definition to be defined before inserting data. Rockset makes it easy to connect a Redshift cluster and use the same set of permissions to access all the tables inside the cluster. Also, you don’t need to provide any data schema to create a collection in Rockset. Rockset uses Redshift’s unload capability to stage data into an S3 bucket in the same region as the cluster and then ingests data from that S3 bucket. Rockset unloads data using the PARALLEL option to stage it faster.
Live Sync
Rockset also allows the user to specify a timestamp field in the source Redshift table, like last_updated_at, to monitor for new updates. The sync latency is no more than a few seconds when the source Redshift table is getting updated continuously and no more than 5 minutes when the source gets updated infrequently. This currently handles only updates and new inserts in the source table. Support for record deletes is coming soon. Rockset requires the source Redshift table to have primary keys. Primary key values from the Redshift table are used to construct the _id field in Rockset to uniquely identify a document in a Rockset collection. This ensures that updates to an existing item in the Redshift table are applied to the corresponding document in Rockset.
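Conceptually, the primary-key-based upsert behavior can be sketched like this. To be clear, this is an illustration of the semantics, not Rockset's actual _id implementation: each incoming row maps to a document keyed by its primary key, so a re-synced row overwrites the old document instead of appending a duplicate.

```python
# Conceptual sketch of primary-key-based sync: a re-sent row with a newer
# updated_at overwrites the existing document rather than duplicating it.
# This illustrates the semantics only; it is not Rockset's implementation.

collection = {}  # _id -> document

def upsert(row: dict, primary_key: str = "REQUESTID") -> None:
    doc_id = str(row[primary_key])  # _id derived from the primary key value
    existing = collection.get(doc_id)
    if existing is None or row["updated_at"] >= existing["updated_at"]:
        collection[doc_id] = row

upsert({"REQUESTID": 101, "STATUS": "OPEN", "updated_at": "2019-08-01T10:00:00Z"})
upsert({"REQUESTID": 101, "STATUS": "CLOSED", "updated_at": "2019-08-02T09:30:00Z"})
print(len(collection), collection["101"]["STATUS"])  # one document, latest status wins
```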
Connecting Redshift to Rockset
For this demo, I have loaded sample Oakland Call Center data in Amazon Redshift which I will use to create the Redshift integration below. This uses REQUESTID as the primary key in Redshift. Also I have created a column named updated_at in the the source Redshift table which sets it to current time, whenever a record is inserted or updated. The create command on Redshift looks like this:
create table oakland_call_center (
....
....
updated_at datetime default sysdate);
For the steps below I’m going to use the Rockset console. You can also create an account in Rockset by signing up here.
Creating a Redshift Integration
To let Rockset access the Redshift cluster, I will create an Integration with all the permissions required to access it. This includes IAM permissions for the S3 bucket, which exists in the same account and region as the Redshift cluster, and database permissions for the Redshift user. For more information, you can refer to the docs.
Creating a Rockset collection
Once the Redshift Integration is set up, we are ready to use it to ingest different tables in the Redshift cluster. Rockset requires the database, schema and table name at this step.
At this point the collection is created and is being updated with data from the specified Redshift table. We can now start querying the data.
Querying Redshift Data in Rockset
Each row in the Redshift table corresponds to one record in the Rockset collection. Let's describe the collection and see all its fields. For datetime-type fields in the Redshift table, Rockset stores them as timestamps with the default UTC timezone.
Now, let's run some queries on this dataset to understand call center operations. The first query below checks the number of requests across different sources in the last 3 days.
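A query along these lines would do it. Note that the column names SOURCE and DATETIMEINIT are assumptions about the Oakland dataset's schema, not taken from the original, and the exact date functions vary by SQL dialect:

```sql
-- Assumed schema: SOURCE is the request channel, DATETIMEINIT the creation time.
SELECT
    SOURCE,
    COUNT(*) AS request_count
FROM
    oakland_call_center
WHERE
    DATETIMEINIT > CURRENT_TIMESTAMP() - INTERVAL 3 DAY
GROUP BY
    SOURCE
ORDER BY
    request_count DESC;
```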
Using Tableau I also plotted this chart to analyze the trend.
Most of the requests come through SeeClickFix (a mobile app to raise requests). Next, let's check how many of these were CANCELED (a request is canceled if it was created erroneously). Agents answering customer calls spend time on such requests as well, and a large number of cancellations can be a good indicator that something in the request flow needs fixing.
The collection also tracks when the issue was resolved. Let's check the average number of days taken to resolve a case based on the type of request. Resolving a case involves external factors, and this query can be used to dig deeper into the operations of other teams whose cases take a long time to resolve.
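As a sketch, the resolution-time query could look like the following. Again, REQCATEGORY, DATETIMEINIT and DATETIMECLOSED are assumed column names, and the date-difference function name differs between SQL dialects:

```sql
-- Average days to resolve, by request type (closed cases only).
SELECT
    REQCATEGORY,
    AVG(DATE_DIFF('day', DATETIMEINIT, DATETIMECLOSED)) AS avg_days_to_resolve
FROM
    oakland_call_center
WHERE
    DATETIMECLOSED IS NOT NULL
GROUP BY
    REQCATEGORY
ORDER BY
    avg_days_to_resolve DESC;
```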
Summary
The queries I performed are just a subset of the queries that operational dashboards typically require. Rockset supports JOINs so you can run complex queries across collections. I simply created the Redshift integration with Rockset and performed fast SQL without any ETL or cluster re-sizing. The entire process of loading the data, querying the collection and building charts took about a couple of hours. Rockset makes it easy for data practitioners to ingest and join data across different Redshift tables or even other sources! | https://medium.com/rocksetcloud/redshift-with-rockset-high-performance-queries-for-operational-analytics-f2f8041d8d1b | ['Kshitij Wadhwa'] | 2019-09-04 19:02:09.015000+00:00 | ['Real Time Analytics', 'Tableau', 'Redshift', 'Concurrency', 'Dashboard'] |
AI Product Management P2 and P3: What priorities should you set for your AI model and how do you know how much training data you need? | Stella Liu · Oct 20, 2019
Learn about how to track metrics beyond accuracy and how to accelerate gathering training data for your AI model.
This article is part of a series that breaks down AI product management into 5 distinct phases. The introduction to this series starts here.
Phase 2: Priority Setting
Introduction
I’m sure you know, as a fellow Product Manager, how priorities align everyone around the product. When you build an AI product, there are AI-specific priorities to consider.
So, what priorities should you set and track?
Short answer:
It’s not only about accuracy.
Long answer:
Here are a few thoughts below on why accuracy is too limited of a metric to optimize for. I also added more priorities that would be helpful to track alongside accuracy. These priorities will drive the AI model design and impact model improvement plans.
1. Accuracy
How important will accuracy be for you? How do you optimize for this?
Your data scientists can tell you accuracy metrics. But it may turn out that the inaccuracies aren't significant. If you are building an AI chatbot, transcribing "A" instead of "The" may not impact the interaction at all! Accuracy will also never be 100%. In fact, if you do get an accuracy metric of 100%, that's a red flag. This means that your AI model is only working on your data set and will stop working when it sees new data.
Thus, accuracy is just one data point to think about the AI model. If the bot is wrong 5% of the time but people are completing their calls and getting what they need, then does it matter? How much cost are you willing to take to improve your model? If 6 months of work makes the AI model 1% better — is it worth it? The accuracy of most AI models will also eventually plateau. Increasing accuracy X% at this stage can cost as much as the entire project to date.
I hope you can see that accuracy is not the be-all and end-all. Depending on the context, it may not even impact your end-users. There are ways to mitigate inaccuracies that don't even involve re-training the AI model. For instance, you can give users the ability to correct the system themselves or display error messages. In chatbots, a message that says, "Is there anything else I can help you with?" helps if the user asked for many things and the chatbot only found one.
Approach “accuracy metrics” as just one of many priorities to track and improve over time. Read on to learn about other priorities you can set for your AI model.
2. Explainability
Explainability covers the need to understand how the AI model got from point A to point B.
If explainability is important to your users, this can impact the AI model you can use for your product. Some AI models, like neural networks, operate like a ‘black box.’ Data went into the model and some output came out. It is difficult to explain how or why that output occurred.
Additionally, this can impact the way you design the UI of your AI product. My design team spent time developing explanations of the AI model output in the UI. Our product used a natural language processing model that analyzed the quality of requirements. The output of that natural language processing model was a string of entities — this was not very useful at first glance. So, we built a series of business rules that assigned weights to the entities to compute a single score. The UI only showed that score to the end-user, an explanation on why and suggestions to improve the score (we hid the complex entity output). You may need to iterate on a design that’s most understandable by the end-user.
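As an illustration of that design, here is a toy sketch of how entity output can be collapsed into a single score. The entity labels and weights are invented for the example; the real product's rules are not described in the article:

```python
# Toy example: collapse NLP entity output into one requirement-quality score.
# Entity labels and weights below are invented for illustration.
WEIGHTS = {"ambiguity": -0.5, "passive_voice": -0.2, "measurable": 0.4}

def requirement_score(entities):
    """Map a list of detected entity labels to a 0-100 quality score."""
    base = 70.0
    score = base + sum(WEIGHTS.get(e, 0.0) * 10 for e in entities)
    return max(0.0, min(100.0, score))

print(requirement_score(["measurable"]))             # 74.0
print(requirement_score(["ambiguity", "ambiguity"])) # 60.0
```

The end-user only ever sees the single score plus an explanation, never the raw entity list.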
3. Business KPIs
Business KPIs are important to check your AI model’s accuracy in context.
At the end of the day, the success of your AI product hinges on whether it helps your end-users achieve their goals. If the AI bot is wrong 5% of the time but people are still getting what they need, then does it matter?
Your business KPIs should be your Northstar to make choices in improving accuracy or not.
4. Bias mitigation
The AI model’s output is directly related to its input. Thus, it’s important to have good representation and diversity in the training data.
This may or may not be relevant depending on your use case. It will depend on how much this bias will impact your end-users. If you’re trying to identify a dog in a picture, this may not be relevant. But, if you are building an AI tool that can recommend bank loans, having a bias towards gender, race, or age will have a material impact on your end-users. In this case, having a priority in place to keep training until the internal bias is gone is critical. You can use an ethical framework to help brainstorm and mitigate the potential primary, secondary and tertiary effects of your AI model. There are also some Watson services that can help identify and mitigate bias.
Overall Tips
You can’t only optimize for accuracy.
Align your priorities against your end user’s needs.
Your priorities will drive algorithm choice, UX design and AI model design.
Make sure to set and track these priorities.
Phase 3: Training Data
Consideration: How much training data do I need?
Short answer:
It depends.
Long answer:
If you decide to go ahead, don’t assume that you’ll need a lot of training data. There can be a lot of hesitation to start because there’s an assumption that AI models need a lot of training data. But the amount of data you need depends on the entities you’re labeling and the models you’re building.
It’s hard to generalize… but here are 6 rough rules of thumb below.
1. Quality over quantity
The more data the better, but quality also matters. "Quality" means getting training data as close as possible to what the AI model will see in production.
As a rule of thumb, the size of your future model performance problem is the distance between the synthetic training data you use and the real data your users are providing. You may want to create fake data, but you are not as good at simulating what your users want as you think you are. You can bootstrap this process by building interfaces to gather data or extracting data from sources like logs or search queries.
In our case, I spearheaded a client program where we partnered with our enterprise customers to train the AI model. These customers gave us access to training data and were partners throughout the entire process. Being able to use near-close production level training data helped us speed up this phase. You should consider engaging a few of your trusted customers as well!
2. The algorithm you choose matters
Deep learning algorithms need more data.
3. The more you want the more you need
If you want to extract 100 entities, that will take more data and time than extracting 1 entity.
4. Pick the right AI tool
Some AI tools might train with less data but they may give less accurate results. You should test different AI tools for your use case and select the tool that performs the best.
5. Gathering minute details may mean longer training cycles
Distinguishing New York City from Taipei City in a photo may take more training cycles than distinguishing a city from a forest.
6. Data organization and clean-up may take more time than you expect
You’ll need to plan for enough time to clean the training data. Many projects can spend 80% of their time in this “data janitorial” work. Be careful to set expectations with the business and give your data scientists enough time to do this work!
Overall Tips
The amount of data you need depends on many factors.
Do not underestimate the time it may take to secure and clean the data.
Read on to Part 4 and 5 to learn how to build and deploy your AI model. | https://medium.com/ibm-watson/ai-product-management-p2-p3-what-priorities-should-you-set-and-how-much-training-data-do-you-need-e535f49bfbc3 | ['Stella Liu'] | 2020-03-10 14:50:09.363000+00:00 | ['Machine Learning', 'Product Management', 'Artificial Intelligence', 'Editorials', 'Product'] |
Pallet Story #2 — How we build and scale our design system | Despite being only a 3-year-old company, we have invested a lot of thought and effort into our design system, which we have named 'Pallet'. Get it? Pallet! We try to improve our design system during every project tackled from our roadmap. So, let me detail our workflow and how we contribute every day to the growth of our design system.
Component design is done while building a project. For example, when redesigning our navigation system, we needed tooltips. We created them on this occasion and so on. Once the design is validated on the project by front-end engineers and designers, we import it into Pallet.
To build and maintain our design system, we leverage several software tools: | https://medium.com/everoad/pallet-story-2-how-we-build-and-scale-our-design-system-e5e76a8d23c5 | ['Gauthier Casanova'] | 2019-11-19 10:55:42.036000+00:00 | ['Product', 'Product Design', 'Design', 'Design Systems', 'Transport'] |
How I Started Self-Hosting | Should You Self-Host?
There are many factors to consider when deciding whether to self-host your applications.
You should self-host if:
1. You're still reading patiently at this point.
2. You enjoy tinkering with stuff and fussing over minute details, spending many hours or even days on the tiniest of problems.
3. You are a generalist who wants to learn anything and everything about computers.
4. You like the cheap thrills you get from being able to achieve everything big companies do with their products at little to no cost.
5. You want to own your data and web services.
I put the ownership of data as the final point as I believe that complete ownership of your data is a moonshot, if not impossible, goal in today’s highly interconnected world. For this reason, it should be the last motivation you should have when it comes to self-hosting.
I cannot emphasize point two enough. Over the past three years of self-hosting, I’ve run into numerous problems and spent many days debugging an issue, only to find that the problem was a typo in the configuration, or worse, a yet-to-be-discovered bug with the FOSS software I was trying to host. Let me illustrate this with an example.
A painful typo you’ll never spot
Just four weeks ago, I was trying to deploy OpenLDAP, a lightweight client-server protocol for accessing directory services, commonly used for storing credentials and user metadata. This was meant to be the core of my SSO service, where users can use a single username and password to access all my services. Have a look at my Kubernetes env configuration for osixia/openldap-backup, a dockerized implementation of OpenLDAP.
containers:
- name: openldap
image: osixia/openldap-backup:1.3.0
env:
- name: LDAP_ORGANIZATION
value: ikarus
- name: LDAP_DOMAIN
value: ikarus.sg
- name: LDAP_BASE_DN
value: 'dc=ikarus,dc=sg'
- name: LDAP_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: openldap
key: admin_password
- name: LDAP_READONLY_USER
value: 'true'
You’ll NEVER spot this error! Can we even call it an error?
I struggled for two days with why OpenLDAP did not pick up the LDAP_ORGANIZATION value from the environment variable — deleting, redeploying, and changing the order of the variables. It was only when I actually dug into the code did I see how trivial this error was:
slapd slapd/internal/generated_adminpw password ${LDAP_ADMIN_PASSWORD}
slapd slapd/internal/adminpw password ${LDAP_ADMIN_PASSWORD}
slapd slapd/password2 password ${LDAP_ADMIN_PASSWORD}
slapd slapd/password1 password ${LDAP_ADMIN_PASSWORD}
slapd slapd/dump_database_destdir string /var/backups/slapd-VERSION
slapd slapd/domain string ${LDAP_DOMAIN}
slapd shared/organization string ${LDAP_ORGANISATION} ## THIS LINE
slapd slapd/backend string ${LDAP_BACKEND^^}
How did I miss that?!
The variable name should have been LDAP_ORGANISATION and not LDAP_ORGANIZATION.
Looking back at the documentation, I couldn’t believe I missed it.
The following are only required and used for a new LDAP server:
LDAP_ORGANISATION: Organisation name. Defaults to Example Inc.
LDAP_DOMAIN: Ldap domain. Defaults to example.org.
LDAP_BASE_DN: Ldap base DN. If empty, automatically set from the LDAP_DOMAIN value. Defaults to (empty).
LDAP_ADMIN_PASSWORD: Ldap Admin password. Defaults to admin.
LDAP_CONFIG_PASSWORD: Ldap Config password. Defaults to config.
It turns out the organization maintaining the Docker image of OpenLDAP, osixia, is based in Nantes, France — which is probably why they wrote the variable names in British English instead of American English.
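One cheap guard against this class of bug is to diff the variables you set against the ones the image's docs or code actually mention. A rough sketch — the file names and variable lists here are made up for illustration:

```shell
# Variables set in my manifest vs. variables the image actually reads.
printf '%s\n' LDAP_ORGANIZATION LDAP_DOMAIN LDAP_BASE_DN | sort > configured.txt
printf '%s\n' LDAP_ORGANISATION LDAP_DOMAIN LDAP_BASE_DN | sort > supported.txt

# Anything printed here is a variable you set that the image never reads:
comm -23 configured.txt supported.txt   # -> LDAP_ORGANIZATION
```

Two minutes of this would have saved me two days.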
If what you just read terrifies you deeply, I suggest you reconsider!
You should not self-host if: | https://medium.com/better-programming/how-i-started-self-hosting-df17f0919d64 | ['Will Ho'] | 2020-08-03 17:15:11.423000+00:00 | ['Open Source', 'Docker', 'Kubernetes', 'Programming', 'Raspberry Pi'] |
JavaScript Internals: What Actually Happens When You Call Console.log? | What Happens When You Call console.log?
It’s always good to keep in mind that we work on the shoulders of giants, even when calling the most seemingly basic line of code. So, let’s dive deeper taking V8 — the JavaScript runtime and WebAssembly engine written in C++ that powers Chrome and Node.js — as an example.
It will first call a utility function WriteToFile
void D8Console::Log(const debug::ConsoleCallArguments& args,
const v8::debug::ConsoleContext&) {
WriteToFile(nullptr, stdout, isolate_, args);
}

Source: https://github.com/v8/v8/blob/4b9b23521e/src/d8-console.cc#L52-L55
After internally pre-processing the JavaScript values, it will call fwrite
The function fwrite is part of libc, which is the standard library for the C programming language, as specified in the ANSI C standard. As we all know, the C standard library has several implementations on different platforms. Let us take the example of musl, a well-known implementation for the Linux platform written in C.
It will internally call a utility function (__stdio_write) after a bit of indirection, which will then make an operating system call (writev).
The syscall symbol is a macro that will expand to __syscall3 after a lot of pre-processing. System calls differ between operating systems, and the way to perform them differs between processor architectures. It usually requires you to write (or generate) a bit of assembly. On x86-64, musl defines __syscall3 as follows:
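Paraphrasing musl's arch/x86_64/syscall_arch.h, it looks roughly like this — x86-64 Linux only, and treat the exact constraints as a sketch of musl's code rather than a verbatim copy:

```c
#include <assert.h>

// Roughly what musl's x86_64 __syscall3 looks like: the syscall number
// goes in rax, the three arguments in rdi, rsi and rdx; the kernel
// clobbers rcx and r11, and the result comes back in rax.
static inline long __syscall3(long n, long a1, long a2, long a3)
{
    unsigned long ret;
    __asm__ __volatile__ ("syscall"
                          : "=a"(ret)
                          : "a"(n), "D"(a1), "S"(a2), "d"(a3)
                          : "rcx", "r11", "memory");
    return (long)ret;
}
```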
This sets up the system call number and arguments. On x86–64, the instruction for making system calls is called syscall .
After the syscall is made, the control transfers to the kernel (Linux, in this case). But that’s a whole other story…
The syscall will trap into the kernel. The kernel will preserve a bunch of information about the process and the context of the syscall. In the simplest scenario, the kernel will put that process in a queue of processes to be served and move on to handle another waiting process (another process that made a syscall, or a hardware interrupt, for example). The kernel will then continue serving the highest-priority requests/processes before eventually returning to our process that called console.log. The kernel will see that this process is blocked, waiting on the write syscall. It will take the arguments and other data that it previously stored and write to the file via a kernel-specific implementation. It will then mark the process as "ready" and return it to a ready queue to allow control to return to the calling process when its turn to run comes up.
They stopped after the syscall because it’s really tough to say exactly what happens after. It’s totally dependent on the OS kernel and its implementation details. | https://medium.com/better-programming/javascript-internals-what-actually-happens-when-you-call-console-log-60a4949704f8 | ['Jashan Preet Singh'] | 2020-06-23 20:26:58.760000+00:00 | ['React', 'Angular', 'JavaScript', 'Programming', 'Nodejs'] |
A Very Famous Designer Was Nice To Me and I Still Think About It |
It said more about him than me
My first week at Twitter overlapped with his last week or two, back in 2014. I was in the corner of the design studio in the morning, unpacking my stuff and he came over. He said something along the lines of
“You know, it’s easier to leave the team when I know people like you are joining it.”
And I’m not saying this out of modesty, or trying to humblebrag, but I’m pretty sure he had no idea who I am. Really! I think he actually came over to say that to a new person on the team, because he knew it was a kind and supportive thing to say, without actually having seen my portfolio. Or being in my interview loops, or having talked to me. And I’m certainly not famous.
It was just a really awesome thing to say, and an awesome way to pass the torch. I think of that quote often, and try to come up with ways to make other people feel as welcome and supported as he did. | https://jonbell.medium.com/a-very-famous-designer-was-nice-to-me-and-i-still-think-about-it-fcc9a9eb9de1 | ['Jon Bell'] | 2020-12-10 08:09:21.594000+00:00 | ['Design', 'Twitter'] |
The inner Ape & The Rules at Play. | There’s a classic scene in the 2008 movie Felon, where Stephen Dorff, just having arrived at prison after defending his house against intruders, meets the stone hard killer cell mate (Val Kilmer) with a clear ultimatum: Fuck or fight?
In society, most of us don’t have to make hard choices like that. And what a relief! But we need to respond to a number of situations and challenges on a daily basis. How do we know what rules to play by, and the possible outcomes? Are we free to make our own choices or are our responses due to our status and our position in the hierarchy which we find ourselves in?
What are the rules at play?
For a long time I have been a big fan of decentralized organizations. Flat and fluid structures; egalitarian, teal-colored and responsive. Many have fallen in love with the idea of working in free-flowing, bossless structures inspired by visionaries like Frederic Laloux or Ricardo Semler arguing that humans have no use for hierarchies. And yet, everywhere we turn there’s hierarchy. Just because it’s not always visible doesn’t mean it is not there.
Wherever there is a weak, malfunctioning formal structure, there is a lurking informal hierarchy. Even supposedly egalitarian structures have their inner circles based on skills, social relations and performance. Someone always holds higher esteem within the group and more power to influence. And to succeed, we need to know how those hierarchies work and what rules are at play.
This is a tricky challenge.
How great it would be to live in a world where people can always be who they are meant to be. In reality, that’s not how things work and in most cases we have to battle all kinds of bullies and layers. More often than not, we have to fight the power to earn our rights, our freedom to grow and follow our beliefs. Even if status is based on positive human traits, we may feel constrained. Life is built on hierarchy but the dynamics can be very different in various groups. Hierarchy is a game of status, based on physical strength or size, social skills or character, sometimes even empathy and love.
To understand how this works, I think it’s interesting to look at our closest relatives, the great anthropoid apes.
In every human there seem to live one loving and caring bonobo and one agressive chimp. As a species, these our most close cousins look much the same but behave in very different ways. That’s because they never have to face each other in their natural environment, separated by the great Kongo river.
Bonobos and chimps organize very differently. Chimpanzees are ruled by agressive males and forceful conflict resolution, bonobos are ruled by females with strong character, resolving conflicts mainly by having sex. If you happen to be born a chimp or bonobo, you have to follow very different rules in order to achieve status. With humans, it seems to be we are having both systems at play, and that may also be one of the reasons to our success as a species. It also makes things more complicated, since we always have to figure out what rules are at play and since most of us are part of many different groups, we have to know how to always balance these forces we carry within us.
Fuck or fight? Is it code chimp or bonobo?
Society has evolved with the urge of humans on the one hand to care for, and love one another, and on the other hand to conquer and exploit others. We may have a big frontal lobe, but underneath we are still 99% apes. We all share a history of great love and great violence. Our ancestors were more likely to die from the hands of another human, than tornados or tigers. Yet, it is our ability to communicate, to cooperate that has brought us to where we are today. Hence the ability to be both chimp and bonobo has to be something of a secret weapon.
He who intuitively senses the appropriate response has the edge and he who makes the wrong assumption might not survive for long.
A large part of our brain is constantly occupied by analyzing how others behave or respond. It is a skill we train from the moment we are born and we do it without really thinking about it. Being part of the group has always been key to survival and thus the need to belong is as crucial to us as food and shelter. Society has effectively made us autonomous as individuals, we can now easily survive on our own. Yet more people die from loneliness than ever before. We do need to belong, and this means constant adaptation to different group dynamics.
Even if we do conclude we are part of a bonobo-style structure, there might be times when we have to bring out the chimp when faced with a threat. One example where this conflict of human nature is obvious is in the ongoing struggle for the future of humans and other species. Trump and Bolsonaro vs Greta Thunberg and Naomi Klein. Extraction vs regeneration. Competition versus cooperation. Ego vs Eco. Yes, there are times when we all have to bring out the chimp. Fight the power. Or we will all suffer dearly.
How much wisdom there is in action movies!
In the original 1987 Predator, at a critical point, Arnold concludes that we make a stand now, or "there will be no one left to make it to the chopper". That's more or less where we are now with the future of the planet. It seems there is never enough love and care for the world, despite our capabilities. Why else do we keep harming our environment in order to make a few more bucks? Why do we allow dictators to rule and wage wars on their own people? Why colonize Mars, when we are already living on the perfect planet?
Women finally got the right to vote only after decades of sacrifice from female activists. The same could be said about the French revolution and the fight for black civil rights in America. One way or another, even the most loving at some point must also decide to fight. We need to ”get to the chopper” or die. | https://medium.com/weelaborate/the-inner-ape-humans-hierarchy-674269e5210f | ['Marcus Bergh'] | 2019-12-27 17:12:57.909000+00:00 | ['Society', 'Evolution', 'Hierarchy', 'Ethics', 'Earth'] |
Make Python Hundreds of Times Faster With a C-Extension |
Python is one of the most popular programming languages. It’s learned and used by students, teachers, and professionals around the world. Python provides a simple, straight forward, interpreted language that fosters creativity and freedom. Programmers have access to a community of hundreds of thousands of developers that provides an immense selection of open source packages for Python. The language manages garbage collection. memory allocation, pathnames, file descriptors, and much more that a programmer would normally need to worry about in a lower level language. Yet, that’s both an advantage and disadvantage.
Python sometimes takes care of too many things. It blurs the fine details of what's really happening under the hood. If you feel that way, this post is for you. We will go over the basics and fundamentals of making a C-extension to the Python interpreter.
Why make a C extension?
C extensions are fast, performant python libraries that can serve several purposes. Those include:
High Performance:
C extensions can perform hundreds of times faster than equivalent code written in Python. This is because C functions are natively compiled and are just a thin layer over assembly code. Additionally, some tasks can be slower to perform in Python, such as string processing. Python has no concept of a character, just strings of different lengths. C, on the other hand, has a very raw and efficient string type, composed purely of a block of memory terminated with a \0 character. Overall, C extensions provide a way to gain a powerhouse of performance in Python.
Wrapping:
Lots of widely used software libraries are written in C. However, many application level systems, like web development frameworks, or mobile development frameworks, are written in languages like Java or Python. C functions can’t be called directly from Python, because Python does not understand C types without converting them to Python types. However, extensions can be used to wrap C code to make it callable from Python. The building and parsing of Python types will be explained later.
Low Level Tools:
In Python, the degree to which one can utilize low level and operating system level utilities is quite limited. Python uses a Global Interpreter Lock (GIL), that allows only one thread at a time to execute Python byte code. This means that although some I/O bound tasks like file writes or network requests can happen concurrently, access to Python objects and functions cannot.
With C, a program has complete and unrestricted freedom to any resources it can load and use. In a C extension, the GIL can be released, allowing for multi-threaded python work flows.
The Python C API
The Python language provides an extensive C API that allows you to compile and build C functions that can accept and process Python typed objects. This is done through writing a special form of a C library, that is not only linked with the Python libraries, but creates a module object the Python interpreter imports like a regular Python module.
Before we get into the building steps, let's understand how a C function can process Python objects as input and return Python objects as output. Let's look at the function below:
#include <Python.h>

static PyObject* print_message(PyObject* self, PyObject* args)
{
    const char* str_arg;
    if(!PyArg_ParseTuple(args, "s", &str_arg)) {
        puts("Could not parse the python arg!");
        return NULL;
    }

    printf("msg %s\n", str_arg);

    // This can also be done with Py_RETURN_NONE
    Py_INCREF(Py_None);
    return Py_None;
}
The type, PyObject* , is the dynamic type that represents any Python object. You can think of it like a base class, where every other Python object, like PyBool or PyTuple inherits from PyObject . The C language has no true concept of classes. Yet, there are some tricks to implement an inheritance, polymorphic like system. The details of this are beyond the scope of this guide, but one way to think about it is this:
#define TYPE_INFO int type; \
                  size_t size

struct a_t {
    TYPE_INFO;
};

struct b_t {
    TYPE_INFO;
    char buf[20];
};

struct b_t foo;
// Fields are always ordered, this will work
((struct a_t*)&foo)->type
In the above example, both a_t and b_t share the same fields at the beginning of their definitions. This means, casting struct b_t* to struct a_t* works because the fields of a_t compose the same, prefixed portion of b_t .
Parsing Arguments
The function has two parameters, self and args. For now, think of self as the object on which the function is called. As stated in the beginning, we will be writing our function to be called from the scope of the module.
The function parses the objects within args in this statement:
if(!PyArg_ParseTuple(args, "s", &str_arg)) {
Here, the args parameter is actually a PyTuple , the same thing as a tuple in Python, such as x = (1, 2) . In the case of a normal function call in Python, with no keyword args, the arguments are packed as a tuple and passed into the corresponding C function being called. The "s" string is a format specifier. It indicates we expect and want to extract one const char* as the first and only argument to our function. More information on parsing Python C arguments.
Returning Values
In the last part of the function, we have the following statements
Py_INCREF(Py_None);
return Py_None;
In the Python C API, the None type is represented as a singleton. Yet, like any other PyObject , we have to obey it's reference counting rules and accurately adjust those as we use it. Other C Python functions may build and return other values. For more info on building values, see here
This particular function is only meant to print; by convention, such functions return None.
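You can observe this singleton behaviour from the Python side — every code path that "returns nothing" hands back the exact same object, which is why the C code must bump its reference count before returning it:

```python
# Every function that "returns nothing" returns the same None singleton,
# which is why the C extension must Py_INCREF it before handing it back.
def returns_nothing():
    pass

a = returns_nothing()
b = returns_nothing()
assert a is None and b is None
assert a is b  # one shared object, not a fresh value per call
print(id(a) == id(None))  # True
```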
C Extensions Structure
Now, we can explore the structure of how we compose the extensions that Python will actually be able to import and use within the Python runtime. To do that, we need three things. First is the definition of all the methods the extension offers. This is an array of PyMethodDef , terminated by an empty version of the struct. Next is the module definition. This basically titles the module, describes it, and points to our list of method definitions. Just like in pure Python, everything in an Extension is really an object. Lastly, we have a PyInit_ method that initializes our module when it's imported and creates the module object:
static PyMethodDef myMethods[] = {
    { "print_message", print_message, METH_VARARGS, "Prints a called string" },
    { NULL, NULL, 0, NULL }
};

// Our module definition struct
static struct PyModuleDef myModule = {
    PyModuleDef_HEAD_INIT,
    "DemoPackage",
    "A demo module for python c extensions",
    -1,
    myMethods
};

// Initializes our module using our above struct
PyMODINIT_FUNC PyInit_DemoPackage(void)
{
    return PyModule_Create(&myModule);
}
Note: The name in the PyInit_ function and the name in the module definition MUST match.
This code, along with our previous print_message function should be placed in a single C file. That C file can be built into a C Extension with a special setup.py file. Below is an example, which is also included in this repo:
from distutils.core import setup, Extension

# A Python package may have multiple extensions, but this
# template has one.
module1 = Extension('DemoPackage',
                    define_macros = [('USE_PRINTER', '1')],
                    include_dirs = ['include'],
                    sources = ['src/demo.c'])

setup(name = 'DemoPackage',
      version = '1.0',
      description = 'This is a demo package',
      author = '<first> <last>',
      author_email = '[email protected]',
      url = 'https://docs.python.org/extending/building',
      long_description = open('README.md').read(),
      ext_modules = [module1])
This setup file uses the Extension class from distutils.core to specify options, such as definitions for the C preprocessor, or an include dir to use when invoking the compiler. C extensions are always built with the same compiler that the running Python interpreter was built with. The Extension class is very similar to a CMake setup: you specify a target and the options to build that target with.
In this repo, you will also find a MANIFEST.in file. It specifies other files we want packaged in the distribution of our Python package. This is not required; it is only needed if you plan to publish the C extension.
Building and Installing
You can then build and install the extension with the following commands. | https://medium.com/swlh/make-python-hundreds-of-times-faster-with-a-c-extension-9d0a5180063e | ['Joshua Weinstein'] | 2020-07-26 19:36:13.491000+00:00 | ['Coding', 'Software Development', 'Programming', 'Python', 'Technology'] |
4 Poems Written in Rupi Kaur’s Writing Workshops | 4 Poems Written in Rupi Kaur’s Writing Workshops
Rupi Kaur recently held two writing workshops on Instagram—here’s what I wrote and learned.
Photo & Poem by Author
I had the excellent luck to tune into both of Rupi Kaur's writing workshops on Instagram. I'm always eager to learn from other poets, especially poets who have the eminence of being a New York Times bestseller. I found the first workshop serendipitously: I picked up my phone to see what was new on the gram during my lunch break, saw she was live, and jumped "write" in.
If you aren’t familiar with Rupi Kaur, she is arguably the most successful poet of our generation. Her two books, Milk and Honey and The Sun and Her Flowers, are some of the bestselling poetry books on the market. She is a poet, illustrator, and author. Kaur became incredibly popular on Instagram and Tumblr by sharing short visual poetry with strong emotions and messages. Her debut book, Milk and Honey, has been an NYT bestseller for over three years.
The long and short of that is that if you’re a poet and you want to publish a book someday, you probably want to pay attention to Rupi Kaur. Even if publishing isn’t on your agenda just yet, it’s still fun and enriching to write alongside an experienced poet.
One of the writing activities was to write a list poem and list 10 positive things you can share with the world. I got a little set on one track of thinking and kept each item on the list somewhat related. I also couldn’t resist linking together some lines and word choices. It felt so great just to do some free writing and not get too lost in doubt and revision.
For me, the most enriching and educational part of this prompt was trying something different. A study in the Journal of Experimental Psychology did a lab study on how writing before bed helped participants fall asleep significantly faster. Some wrote to-do lists for the next day and others wrote a simple summary of what they accomplished that day. Interestingly, those who did the to-do list were the ones who reaped the most benefits and fell asleep fastest.
While that study was focused on lists and how they can help you sleep, list poems feel like a fun, creative extension of that idea. Writing lists about things like positive things you can contribute to the world, things you're grateful for, or things that make you happy can be delightfully enriching.
The prompt here was to write a poem from the perspective of a non-living object. The goal was to answer specific questions from the object’s point of view in each line. It’s something of a sonnet, but with pentameter thrown out the window since we were free writing. (Sorry, Shakespeare! I still love you, Willy.)
A lot of people chose a book as their topic; it’s only natural, we’re writers, so books are usually close at hand. I wanted to take a different approach with the book object in this poem. I didn’t want to immediately give away that this was from the perspective of a book; I wanted to present the emotions with enough human authenticity that the reality of the perspective being a book comes as a surprise of sorts.
Thus, we have a poem by a very insecure, damaged book on the shelf at a bookstore, hoping to be bought, but knowing they probably won’t be due to those imperfections. I took this poem in a dark direction, but it felt great to write.
Let’s fast forward to Rupi Kaur’s second poetry writing workshop. Freewriting is something I don’t indulge myself in nearly often enough, so when she announced she was doing a second workshop, I put it on my calendar and made sure I was going to attend.
This is my response to the first prompt, which was to give your life story a title and explain why you gave it that title. There are a lot of things I can appreciate about the tiny, isolated town I grew up in, but when I was growing up and trying to establish myself, it was very much a prison.
I can tell you this much — it’s not easy to go from a place with almost no opportunities for writers to work in NYC. “Ambition” often comes off sounding like a bad word or having a negative connotation, but it doesn’t have to be that way. It can just refer to your determination to live the life you want to and do good things in your own way.
This prompt was enjoyable since it really makes you think about yourself and your life. It does it a specific, controlled way that lets you take the narrative in any direction you wish to. It encourages introspection, which we can all benefit from doing a bit of in a time of quarantine.
This is by far the longest poem I’ve ever posted on Instagram — scroll through and read all three parts. I wanted to split it up both to make it more readable and to add a little thematic build-up to really highlight the concluding stanza. This is my first ever attempt at performance poetry. It’s a rough first attempt and I kept my stanzas uniform because I can’t shake my papery ways completely, but it was still fun to try something different that I’m usually much too afraid to try.
The prompt was to write down something you’re struggling with and tell a story of how it started, what it’s like, what it is, and how you conquer it. For me, I chose the problem of “not being heard.” It made for a fun challenge since personifying it was with the guided opening line made things tricky. I love working with personification though, so this was a great activity for me.
As poets, it’s important for us to venture outside of our comfort zones. It’s very easy just to keep writing more and more poems about topics you’re familiar with in poetic forms you’re comfortable with, but that’s not how a person grows as a writer. Experimenting with new prompts, different forms, and learning from other poets is an incredible way to stimulate your growth as a writer. | https://medium.com/literally-literary/4-poems-written-in-rupi-kaurs-writing-workshops-2d6bde47458b | ['Leigh Fisher'] | 2020-03-28 14:26:05.179000+00:00 | ['Creative Writing', 'Essay', 'Poem', 'Rupi Kaur', 'Writing'] |
My Failed Anxiety Experiment Can’t Come to an End Fast Enough | My Failed Anxiety Experiment Can’t Come to an End Fast Enough
After the long process of getting off medication, now I can’t wait for it to kick back in and neither can my husband
Photo by Drazen Zigic. Licensed from Canva Pro
It’s been a bad time for anxiety sufferers. Hell, it’s been a bad time for everyone, with or without chronic anxiety. But for those of us who live somewhere on the anxiety spectrum, our brains help make us believe things are even worse than they actually are.
It’s as if all of our worst fears, phobias, and nightmares, the ones we’ve worried about our whole lives are coming for us in one way or another.
As an introvert, being stuck at home doesn't bother me nearly as much as watching American health, life, economy and society being torn to shreds. Granted — some things need to be torn up in favor of a MUCH more just society. Black Lives Matter.
But everything seems to be in tatters. The very simple fact of having no idea what is coming next, in terms of my kids going back to school — or not — is driving me to distraction. I don’t need to know everything, but I do like to know some things.
One or two things…I could live with knowing just one or two things!
I feel defeated now, but I have hope
I hung in there with the best of them for several months of this pandemic, but now, after all that, I am admitting defeat. I need to go back on medication. My husband told me so. He asked me to consider it.
Maybe “defeat” is a strong word, or has too negative a connotation. Perhaps I should say “I’m down, but I’m getting back up — with help.”
There’s no shame in realizing I need a little neuro-boost during this massive global crisis. There’s no reason to feel so down, unmotivated, restless, hopeless when I have a solution that works and may well be worth the side effects.
The next morning I took the first small dosage that will have to build up in my system over a period of weeks. Four? Six? Who knows?
Sadly, I’ll know it’s working when I can’t cry over the endless stream of tragedies occupying my news feed each day— not that I won’t feel like crying is appropriate — I just physically won’t be able to cry.
That’s what happened last time anyway. I could feel sad, but my body wouldn’t react in accordance with my mind. I couldn’t feel despondent, only distantly sad.
Maybe this wasn’t the best time for experimenting with my mental health
When I started my experiment living without medication just before the pandemic began taking hold in Western countries, I was delighted to feel more emotions, and yes, even to cry. To climax! What a rush, after several years of a somewhat flatter, less extreme existence.
But when I have days like that day — days like the day my husband said he thought I needed to go back on the meds, I realize I can’t quite do this alone.
Not right now anyway. Not without making the people around me miserable and dragging them down with me. It wouldn’t be fair to anyone.
My brain just isn’t wired in a way that lets go of things easily. I tend to ruminate. I get overwhelmed by all the things I need to do, but can’t figure out how or where to start and things snowball from there.
When things are generally ok, I can get by with CBD, mindfulness, and deep breaths. But when the world is falling apart and then I get my period, or an acquaintance rejects my small talk in a totally insignificant exchange, it doesn’t take much for my anxiety to crowd out all rational thought and replace it with negative, self-destructive garbage.
Fast forward a few weeks — my observations
It has been four weeks. I’m taking 50mg now, which is half of my normally prescribed dosage, and I am feeling a little better. Less frantic.
But that’s not the first thing I noticed. I had completely forgotten about the bizarre dreams I used to have and that stopped when I stopped taking sertraline back in January.
This morning I woke up from a kind of lucid dream with weird sexual undertones in which several friends from high school and college suddenly appeared and galavanted around with me on New Year’s Eve, in France. Oh and my dad was there, too — in a totally nonsexual way — in an Elvis jumpsuit, trying to help me find my coat after all my friends sledded away without me for no reason.
Oh, yeah…I forgot about the trippy dreams. I googled it to check the dreams were a possible side effect of sertraline — they are. Apparently, some people have nightmares. I wouldn’t describe that dream as a nightmare, but it wasn’t exactly a sweet dream either.
The other side effects I had forgotten are the night sweats and trouble sleeping — mostly from being sweaty!
Supposedly, the dreams and the sweats should subside over time. That’d be great…
When I thought about it further, I realized I have been letting go of things more easily in the last week or so. I’m not ruminating on things for hours or even days on end. Things are rolling off me a little more quickly.
Hallelujah!
The real upside
Are the side effects I described above deal breakers? Just a few months ago, they were. Right now, it feels worth it to get some motivation back and a glimpse of a light at the end of the tunnel — even if the pandemic isn’t going anywhere and there are still a lot of unknowns.
I am finding more motivation to write! I have ideas and inspiration. I thought my writing frequency had declined so much because of the pandemic and having to expend all my energy on my kids all day every day, but maybe anxiety and/or depression were playing a part as well.
When I look solely at my writing output, my posting frequency decreases along with the amount of meds in my system.
Here’s to hoping I can reverse the trend! My second children’s book is in illustration and I want to get cracking on the third — strike while the iron is hot. I’m brimming over with ideas and I don’t want to lose steam.
It’s great to want to do things and not just feel like all the things I want to do are too much trouble or won’t work out anyway, so what’s the point?
Time to check in with a professional
It’s clear to me that I made the right choice to go back on sertraline. But now I’m hesitating about returning to the dosage I was at previously. It did make me a little dead inside and totally killed any thoughts of sex. Maybe I should try 75mg for a while. Maybe 100 is too much?
My psychiatrist uses a checklist and different tools like that to evaluate where I’m at, so I think it’s time I pay her a visit and see what she thinks. She doesn’t always see things the way I see them, but I’m not always seeing things clearly.
The real test will be PMS and my period — in just a few days! I usually feel the worst around the time my period starts. Hopefully I’ll see some improvement this time around.
Cue ominous music.
Aside from that unknown, I’m happier and less easily irritated. I’m happy with my decision and I’m glad my husband spoke up when he did. It was time to do something and I’m glad I listened.
Thanks for reading.
Did you know I just published my first children’s book?
It’s called Opossum Opposites and it’s the product of an era when I got a lot more things done because my children were in school. You can check it out here. You can also join my mailing list to learn all about my books, by clicking here.
Formulate Effective Ways of Working for Your Agile Teams | Formulate Effective Ways of Working for Your Agile Teams
A non-exhaustive but essential set of principles that you need to establish with your agile teams
Photo by Annie Spratt on Unsplash
Why Are “Ways of Working” Important?
When running your team’s day-to-day operations, having an agreed manifesto of Ways of Working will help keep the engine running as smoothly as possible. Codifying this in a readily available document makes it handy to remind the team what the agreed ways of working are.
The agreed "Ways of Working" manifesto should be readily available to remind the team how to work effectively together. It will be a living document, updated continuously based on your team's situation. Ways of Working are usually initiated during an agile team's kickoff, and updates are best discussed during sprint retrospectives, health checks, and team refresh sessions.
As an engineering manager or lead, you need to help your team/s in establishing ways of working early on, ensuring that the team is set up for success in their sprints — these ways of working need to align with your organization’s engineering principles and production standards.
The team in this context refers to the smallest unit of a group of people in an organization. For example, if you are using the tribes and squads model, then the squad is the equivalent of a team in this write up.
Let’s cover the essential Ways of Working principles in the next sections. | https://medium.com/better-programming/how-to-formulate-effective-ways-of-working-for-your-agile-teams-8e14aa25fe15 | ['Ardy Dedase'] | 2020-09-03 23:15:56.550000+00:00 | ['Product Management', 'Management', 'Agile', 'Software Engineering', 'Leadership'] |
There Are No Green Stars | There Are No Green Stars
When astrophysicists peer into the universe, they see a nearly perfect rainbow spectrum of stars, from cooler red ones on one side to hotter blue ones on the other. In the center of the spectrum, they should see green stars, but they don’t due to the nature of stars and the limitations of the human eye.
Astrophysicists can see virtually every color of star, except green.
Let There Be Light
A star begins its life as a cloud of hydrogen gas. Due to gravity, the cloud condenses. If the cloud is big enough, it will have enough gravity to create the temperature and pressure needed in its core to fuse hydrogen together (along with the help of quantum tunneling). The original hydrogen atoms have slightly more mass than the helium they fuse into, and this missing mass is converted into energy, as shown by Einstein's E=mc^2. Small amounts of mass (m) are converted into incredible amounts of energy (E) when multiplied by the speed of light squared (c^2), about 9 x 10^16 m^2/s^2. This is what generates the majority of light from stars, although bigger stars can fuse helium into heavier elements and these heavier elements into even heavier ones, all of which releases energy, until the star tries to fuse iron, causing a supernova event.
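The scale of that conversion is easy to check with a back-of-the-envelope calculation. The gram of mass below is an illustrative number, not a figure from the article:

```python
c = 2.998e8   # speed of light, m/s
m = 0.001     # one gram of converted mass, in kg (illustrative)

E = m * c ** 2          # Einstein's E = mc^2
print(f"{E:.2e} J")     # ~9.0e13 joules
```

Converting a single gram of mass yields on the order of 10^13 joules, roughly the energy of a nuclear weapon, which is why fusing a tiny mass deficit powers a star.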
But what is this light? Light is simply perpendicular waves in the electric and magnetic fields. When one goes up and down, the other goes left and right. Together they are called an electromagnetic wave, or EM wave. EM waves include the X-rays fired at you by your doctor, the microwaves heating your leftovers, the radio waves bringing you your favorite tunes, the light detected by your eye, etc. It's all the same stuff, the only differences being its wavelength, which is the distance between each crest or trough, and its frequency, which is the rate at which it oscillates.
This is the EM spectrum:
What the human eye can detect is only a tiny fraction of the entire electromagnetic spectrum. Radio waves have longer wavelengths and lower frequencies, which allows them to travel further. Gamma (γ) waves have much shorter wavelengths and higher frequencies, making them far more energetic and dangerous. (Creative Commons License)
The EM Spectrum and Black-body Radiation
The wavelength and frequency of light coming out of a star can be modeled by a black-body. Black-bodies are theoretical entities that emit electromagnetic waves, the amount of which is determined by their temperature alone, without any other complicating factors. While black-bodies are theoretical, the data they produce matches real world data fairly closely.
Below is a black-body plot, which demonstrates the relationship between wavelength/frequency and temperature. As can be seen, the higher the temperature, the shorter the wavelength and therefore the higher the frequency. For example, a 3000K object generates light mainly in the infrared part of the EM spectrum. An object like this would only be faintly visible to us, as the graph barely overlaps with the visible spectrum. The majority of light from a 4000K object is just outside of what we can see, but it produces enough within the visible spectrum to make it likely visible. A 5000K object peaks in the orange part of the visible spectrum, while a 6000K object peaks within the yellow. To us, these objects would appear orange and yellow, as the majority of light is orange and yellow.
If you follow the pattern, the peak goes up and to the left as temperature increases. This means that the hotter an object gets, the more its peak moves from right to left along the EM spectrum in the chart above. Extremely energetic objects like the accretion disk of a black hole or a neutron star can peak in the X-rays or even gamma rays.
The color of an object is determined by the wavelengths of the majority of the light it emits in the visible spectrum. (Creative Commons License)
The Universe Staring Back at Itself
“We are stardust brought to life, then empowered by the universe to figure itself out-and we have only just begun.” — Neil DeGrasse Tyson
When it comes to stars, the light we see matches closely with an ideal black-body. Blue stars appear blue because they are hotter and emit most of their light in the blue part of the visible spectrum. And red stars are red because they are cooler and emit a majority of red light.
This is demonstrated in the Hertzsprung-Russell diagram, or HR diagram. While this is not as simple to read as the black-body plot above, it still demonstrates the connection between surface temperature and color.
As a star burns up its fuel, its temperature wanes, forcing it down the main sequence. Our Sun is a little over halfway through its life, putting it over halfway down. (Creative Commons License)
There's a glaring omission though: there are no green stars. Even our Sun is in the lighter part of the yellow spectrum, pushing into white, where green should be. What's going on here? The answer has to do with how our eyes process light from the stars.
The human eye has evolved to detect three colors and their many combinations. More specifically, we only have three types of cones in our eyes, each of which detects EM waves associated with red, green, or blue. Colors in between are perceived due to more than one cone being activated at the same time, as explained by the chart below. Here, we can see each cone is dedicated to either short wavelengths, medium wavelengths, or long wavelengths, yet they overlap substantially, especially the medium-wavelength/green cone and the long-wavelength/red cone.
The human eye detects light in a narrow band of the EM spectrum. The cones in our eyes are only sensitive to the primary colors of light: red, green, and blue. (Creative Commons License)
So, let's stitch all of the above together. Imagine a star with its black-body curve peaking in the middle of the green band in the chart above. Yes, the majority of its light is green, and the green cone would detect this. However, if you look at the shape of a black-body curve, you'll see that it would spread out and down across the chart as well. That is, a black-body curve that peaks in the green would also encompass large amounts of both red and blue light, thus triggering all three cones, which we perceive as white light.
In other words, a star that peaks in the blue or the red will appear as such because its black-body curve doesn’t overlap nearly as much with the rest of the visible spectrum. Blue stars don’t trigger the red cone, and red stars don’t trigger the blue cone. Green stars, though, trigger all three.
So, when you think about it, it’s strange that we don’t actually see the Sun in its true green because our eyes didn’t evolve the ability to do so. | https://medium.com/discourse/there-are-no-green-stars-9400ea468795 | ['The Happy Neuron'] | 2020-12-21 05:47:02.180000+00:00 | ['Astrophysics', 'Technology', 'Astronomy', 'Science', 'Space'] |
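That "true green" claim can be sanity-checked with Wien's displacement law, which puts a black-body's peak wavelength at b / T. The constant and the Sun's surface temperature below are standard textbook values:

```python
# Wien's displacement law: lambda_peak = b / T
b = 2.898e-3   # Wien's displacement constant, m*K
T_sun = 5778   # the Sun's surface temperature, K

peak_nm = b / T_sun * 1e9
print(f"Sun's black-body peak ~ {peak_nm:.0f} nm")  # ~501 nm
```

A peak near 500 nm sits squarely in the green band of the visible spectrum, yet the broad spread of the curve over red and blue is what makes the Sun look white-yellow to us.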
Delving into the state of data journalism | Delving into the state of data journalism
… while continuing to shape Facta
“Science is not only the vault of stories and data we enjoy talking and reading about, but also a frame of mind that can help journalists improve their approach to finding facts, to evaluating them, to either confirming or discarding them.” That’s the core of what I was discussing in my last post. It is also the core around which I am building Facta, an independent center for the Mediterranean region that applies the scientific method to journalism.
In the weeks of working on concepts, design, budgets and strategy here at the Tow-Knight Center for Entrepreneurial Journalism, I have also been spending lots of time thinking about and articulating why I do not feel satisfied with the state of journalism today.
I know, I know. There are tons of reasons the quality of journalism has been publicly discussed in the last few years. Don’t even try to make me write the buzzwords; I do not want to go there. I think it is clear there are serious issues both with verification and with the ability to put the news in context with high-quality content. Plus, some recent media dramas have resulted in public discussions and given more food for thought — well, at least within those groups of people who care about journalism, whether they work in the industry or they support it. Just look at “The Correspondent” or “The MarkUp” situations, where outstanding media projects have to face public scrutiny for very, very different reasons that might result in serious impact on trust, credibility and even readers’ willingness to support similar future ventures.
Let me be clear: I love journalism. And I have a deep, deep respect for all those journalists who work hard, struggle to bring facts and significant stories to their readers’ attention, keep working even when facing threats, when being mistreated or attacked by those in power, when they become object of hate campaigns, or when they risk or even lose their freedom or lives. On these issues, by the way, great recent readings include a post by Jay Rosen on his PressThink blog, the Italian writer Roberto Saviano’s wakeup call; the latest data on the World Press Freedom Index; and this commentary on Columbia Journalism Review. Therefore, I am not at all willing to be cynical. I am firmly convinced that press freedom is a fundamental value, and that journalism needs to be protected.
But it is precisely because I do love journalism that I am disappointed in many of its practices and routines. As it pertains to Facta, my disappointment rises from very specific reasons, which have to do mainly with how journalism has treated, and is treating, data and valuable information. I was a scientist for an important part of my life before moving into journalism. When I encountered data journalism in its early days, almost 10 years ago, I thought I had found the perfect combination. I could combine my scientific thinking with my journalistic skills to produce a type of journalism that was, above all, useful. But now, a few years down the road, I see that data often end up being used in a decorative way — a nice map or infographic to fill a page, lacking context and giving people little means to go deeper on the issues. To complicate things, media are often the source of a high degree of confusion of facts, hypotheses, theories and opinions, with scarce knowledge of the ways facts and information are validated.
I cannot understand the low commitment and incapability to take complex issues, analyze them with a solid methodology, and offer them to the readers or listeners or viewers in a way that allows them to use that information — to understand complicated problems and issues and come up with potential solutions. Maybe I'm naive, but that's what I expect journalism to do: offer high-quality information in the service of democracy; connect, bridge and contextualize.
So I looked up studies and research on the quality of data journalism, a practice that, as I said before, has become more and more widespread in the last 10 years. I started from a hypothesis based on experience and my own perception: with some exceptions, and very important ones, data journalism has generally not fulfilled its original promise to help read reality in a more truthful way. I’d be happy to take into account quite different points of view, should they be supported by evidence. In the meantime, here is what I found.
At the end of March, The Guardian published an informative piece collecting the voices of Caelainn Barr, Mona Chalabi and Nick Evershed, The Guardian’s data editors in the U.K., U.S. and Australia respectively. (By the way, I have met Barr a few times, between the International Journalism Festival in Perugia years ago and the Center for investigative reporting in London. She has an amazing track record in data journalism and is by far one of the best journalists on the task today.)
The Guardian took the opportunity to discuss the state of data journalism exactly on the 10th anniversary of the publication of its Datablog, launched in March 2009 by Simon Rogers, undisputedly one of the pioneers of data journalism, who is now data editor on the News Lab team at Google and director of the Data Journalism Award. In this 10-years-later piece, The Guardian’s data editors say a few things that clarify how data journalism is done today and what has changed with respect to its origin. “The Datablog paved the way for the data projects team but the work we do today is very different.” Barr says. “Over the past decade our approach has evolved and now we amplify the stories we find in data by collaborating with specialist reporters to put human voices at the center of our stories.”
At the beginning, data journalism was quite a trial-and-error process, with lots of inventiveness added to the craft, since it was virgin territory. Evershed says The Guardian is “the one publication that really got me interested in data journalism, as it had a very hacker-punk-DIY approach in the early days. This made me think it was the sort of thing I could do even though I’d had no training in programming or data visualisation beyond the little I’d learned studying science.”
To strengthen that notion of how punk-DIY the approach was, there is this popular post — Anyone can do it. Data journalism is the new punk — written by Simon Rogers back in 2012. It’s almost like a manifesto, and I still use it to introduce students to data journalism. But a fundamental difference compared with those early years is clearly that “in the early days there was more of an emphasis on making the data available,” Chalabi says. “We’d always create a Google spreadsheet with the numbers we had used to write the piece.”
This is not happening anymore, with a few exceptions. And I am really not happy about it, since I have been delving into those data for years and think that actually having the data is what makes data journalism useful beyond the storytelling.
Making the data available was indeed one of the first specific characteristics of data journalism when it took off in 2009, as Simon Rogers tells Letizia Gambini in this interview on the European Journalism Centre’s Medium blog. (By the way, the EJC has just launched a new website entirely devoted to data journalism, with lots of great resources for journalists.) “What if we just published this data in an open data format? No pdfs, just interesting accessible data, ready to use, by anyone. And that’s what we did with the Guardian’s Datablog … We started to realise that data could be applied to everything.”
In his words, Rogers conveys the excitement of what was a pioneering moment, when data projects were setting the mark for a different type of journalism and great collaborations began between journalists and tech people to improve tools and practices for using data. The Hacks/Hackers movement also took off in those years.
Rogers also gives his insight into the future. “We face a wider and increasingly alarming issue: Trust. Data analysis has always been subject to interpretation and disagreement, but good data journalism can overcome that. At a time when belief in the news and a shared set of facts are in doubt every day, data journalism can light the way for us, by bringing facts and evidence to light in an accessible way.” And I do agree that good data journalism can overcome distrust, or help in that direction.
But is good data journalism a widespread practice? Let’s see what Alberto Cairo, one of the most active and respected experts in data visualization and a tireless, enthusiastic data viz trainer, has to say about it. Back in 2014, Cairo wrote a post on Nieman Lab titled “Data journalism needs to up its own standards.” He talks about the then-recent buzz around “data” and “explanatory” journalism. “I’m talking about websites like Nate Silver’s FiveThirtyEight and Ezra Klein’s Vox.com,” he says, and also about new operations at traditional media, such as The New York Times’ The Upshot.
“There is a lot to praise in what all those ventures — and others that will appear in the future — are trying to achieve,” Cairo admits, soon adding, “But I have to confess my disappointment with the new wave of data journalism — at least for now.” And he lists good examples of why this is so: examples of cherry-picking and carelessly connecting studies to support an idea; examples of proxy variables used without careful analysis; the tendency to derive long-term linear predictions out of nonlinear phenomena; and some other flaws. | https://medium.com/journalism-innovation/time-to-delve-in-some-research-on-the-state-of-data-journalism-10c24f140434 | ['Elisabetta Tola'] | 2019-05-09 21:48:43.985000+00:00 | ['Data Journalism', 'Facta', 'Science', 'Review', 'Scientific Method'] |
AI in InfoSec | This is a summary of the talk Clarence Chio gave at South Park Commons Speaker Series titled “AI in InfoSec”.
For a long time, the exceptionally low tolerance for errors and the inherent difficulty of collecting data discouraged the use of artificial intelligence in information security. In recent years, however, technologies such as AI, machine learning, deep neural networks, and big data are increasingly becoming the hottest keywords in the security industry.
Conventional Security Solutions
The primary detection mechanism of most conventional security solutions is signature/string matching with manually crafted rulesets. The biggest drawback to this approach is that it requires a new rule for every new threat. Expert-defined heuristics were added later to allow for more preemptive defense but human intervention was still necessary to make final decisions.
Even though the need to have up-to-date rulesets and heuristics gave the security industry the ability to extract recurring payments from customers, the advent of metamorphic malware that can transform itself to avoid detection meant that a more adaptive solution was needed.
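As a toy illustration of the idea (the rules and payloads here are invented, not from the talk), a signature matcher is essentially substring search against a ruleset, which is exactly why a trivially mutated payload slips past it:

```python
# Naive signature-based detection: flag a payload if it contains any
# known malicious byte pattern from a manually curated ruleset.
SIGNATURES = [b"evil_shellcode", b"DROP TABLE users"]

def is_malicious(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

print(is_malicious(b"GET /?q=DROP TABLE users"))     # True: exact match
# A metamorphic variant only needs to change a few bytes to evade the rule.
print(is_malicious(b"GET /?q=DROP/**/TABLE users"))  # False: signature missed
```

Every new variant needs a new rule, which is the maintenance treadmill described above.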
Enter Machine Learning
Typical syntactic signature matching fared miserably against polymorphic adversaries. Furthermore, as security solutions continued bloating to defend against thousands of different attack mechanisms, so did the system complexity and bug count, to the point where they started to exceed the analytical capabilities of the maintainers. Fortunately, machine learning happens to excel at precisely these tasks: pattern matching, detecting anomalies, and mining information in complex spaces.
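For intuition, the simplest statistical version of "detecting anomalies" is flagging values that sit far from the baseline. A minimal sketch (with made-up traffic numbers, nothing like a production detector):

```python
import statistics

def zscore_outliers(samples, threshold=2.0):
    """Flag samples more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return [x for x in samples if abs(x - mean) > threshold * stdev]

# Requests per minute from one host: steady traffic, then a sudden burst.
traffic = [52, 48, 50, 49, 51, 47, 50, 500]
print(zscore_outliers(traffic))  # [500]
```

Real systems learn a baseline model rather than computing statistics over the very batch being scored, but the principle is the same: no hand-written rule ever named the number 500.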
Unique Challenges of AI in Security
We have all used successful applications of artificial intelligence in consumer products like virtual assistants that can comprehend and respond to human voice, and photo/video apps that can recognize faces and other objects. However, the use of AI in security poses unique challenges.
Errors are extremely costly in security. Siri failing to understand your query accurately or a photo app suggesting the wrong person to tag does not have any serious repercussions. People, in fact, have come to expect such errors to happen often. In security, however, an error such as incorrectly granting access to an attacker can have dire consequences.
Explainability is also very important in security. It is important to know why someone was denied access or why a request was authorized. This is not an issue for most consumer applications; people do not care how a photo app concluded that the picture you uploaded shows a cute cat wearing a hat.
The lack of training data is also a huge challenge. The amount of data security researchers get to collect is a drop in the bucket compared to the millions of text messages, images, and pieces of personal information people willingly provide to companies like Google and Facebook. If attacks happened at a similar magnitude, we would be in serious trouble.
The Reliability and Safety of AI
Artificial intelligence, as implemented and available today, isn't perfect. Adversarial examples are inputs specifically designed to trick AI systems into making a mistake. They are like optical illusions, except these are for machines.
While it is obvious to humans that the following image is a stop sign, machine learning models such as those used in self-driving cars can easily be tricked into reading it as a speed limit sign suggesting a higher speed. [1]
Machine learning-based models see this sign as a 45mph speed limit sign
Imagine the potential consequences of such an attack. The same concept can be used to bypass machine learning-based security solutions.
One report claims that 70% of researchers in cybersecurity say that attackers can bypass machine learning-driven security solutions, with nearly 30% saying it is “easy”. [2]
Machine vs. Machine
Attackers, too, have started adding artificial intelligence to their arsenal. Somewhat ironically, AI-based attacks are very effective against security systems that were specifically designed to prevent automated attacks. CAPTCHAs, the annoying distorted text you often see in online forms, are one such example.
Machine learning can also be used to test and generate adversarial examples using many of the same methods used to train malware classifiers. [3] There have also been “model poisoning” attacks that manipulate the statistical models and move decision boundaries in AI-based systems by repeatedly feeding misleading data.
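To make the evasion idea concrete, here is a deliberately simplified sketch (the feature names, weights, and threshold are invented, not taken from the cited paper) of how a sample can be transformed to slip under a linear malware scorer without changing its behaviour:

```python
# A toy linear "malware score": a weighted sum of extracted features.
WEIGHTS = {"entropy": 2.0, "num_imports": -0.5, "has_signature": 5.0}
THRESHOLD = 4.0

def score(features):
    return sum(WEIGHTS[k] * v for k, v in features.items())

def evade(features):
    """A metamorphic-style transform: remove the tell-tale feature and pad
    a benign-looking one, leaving the program's behaviour unchanged."""
    x = dict(features)
    x["has_signature"] = 0.0  # repack/obfuscate so the string match fails
    x["num_imports"] += 4.0   # pad with unused, benign-looking imports
    return x

sample = {"entropy": 3.0, "num_imports": 4.0, "has_signature": 1.0}
print(score(sample) >= THRESHOLD)         # True: flagged as malicious
print(score(evade(sample)) >= THRESHOLD)  # False: evaded
```

An attacker with query access to the classifier can discover such transformations automatically, which is essentially what the PDF malware study below demonstrated.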
Machine vs. Human
Spear phishing is a targeted form of phishing attack that involves sending individually customized baits to specific individuals, as opposed to sending generic baits to random people. Despite being very effective, spear phishing was not as widespread as ordinary phishing because the process was highly manual; that is, until machine learning. One simulated attack that used AI-generated, individualized tweets sent to 10,000 Twitter accounts, including U.S. Department of Defense personnel, had as much as a 35% click-through rate. [4]
This Twitter user clicked on an AI-generated spear-phishing link
The Future of AI in Security
It appears that the cat-and-mouse game the security industry hoped to end with the help of artificial intelligence is going to stay, for now. This may be attributed to the fact that even with artificial intelligence, the general strategy in security has largely remained the same albeit more automated — pattern matching and heuristics. Many security researchers are starting to think outside the box, as we see the shortcomings of that approach.
Humans remain the weakest link and the largest attack surface in security. People are not only careless and gullible, but we also tend to write imperfect code riddled with bugs. The future of AI in security may not be in providing defense against attacks, but in making exploits extremely rare and difficult to find. AI can correct human behavior to make people “more perfect” and less error-prone. After all, preventative care is always better than reactive care.
Maybe cyberattacks won’t be a thing when we all become cyborgs
—
[1] Evtimov, Ivan, et al. “Robust Physical-World Attacks on Machine Learning Models” Cornell University, 7 Aug. 2017, https://arxiv.org/abs/1707.08945
[2] Carbon Black, Inc. “Beyond the Hype: Security Experts Weigh in on Artificial Intelligence, Machine Learning and Non-Malware Attacks” Carbon Black, Inc., 28 Mar. 2017, https://www.carbonblack.com/wp-content/uploads/2017/03/Carbon_Black_Research_Report_NonMalwareAttacks_ArtificialIntelligence_MachineLearning_BeyondtheHype.pdf
[3] Xu, Weilin, et al. “Automatically Evading Classifiers A Case Study on PDF Malware Classifiers.” University of Virginia, 21 Feb. 2016, https://www.cs.virginia.edu/~evans/pubs/ndss2016/
[4] Seymour, John and Tully, Philip “Weaponizing Data Science for Social Engineering — Automated E2E Spear Phishing on Twitter” ZeroFox, 4 Aug. 2016, https://www.blackhat.com/docs/us-16/materials/us-16-Seymour-Tully-Weaponizing-Data-Science-For-Social-Engineering-Automated-E2E-Spear-Phishing-On-Twitter.pdf | https://medium.com/south-park-commons/ai-in-infosec-3e9e3fc9206f | ['Pete Jihoon Kim'] | 2017-08-22 19:26:00.103000+00:00 | ['Infosec', 'Artificial Intelligence', 'Information Security', 'Security', 'Machine Learning'] |
Microfiction Tips: Microfiction Defined | Microfiction is a story in 300 words or less (not including title).
This is the definition we choose to define Microfiction, there are others.
Every Story
Has a beginning, middle and end.
Has Conflict.
Has Character(s).
Has a change in the Main Character (character arc).
300 words or less
Microfiction is squeezed into 300 words or less. It is not just a short story written in fewer words. This is a severe restriction and most description can be removed.
Additional techniques
It can employ many additional techniques to pack more information into fewer words. Every word fights for existence and must advance the story in some way, so writing microfiction needs a different skill set from writing short stories. Its mantra is CUT, CUT, CUT. Extreme brevity is the order of the day.
No fluff. Like any skill, it takes time to acquire and master.
Title
One technique overlooked by most storytellers is choosing a good title.
The title can be used to set the scene or the mood without costing a single word.
An effective title starts the story early. For instance, the title Twister! might evoke everything you know about tornadoes and immediately set the scene before the story begins.
Character Naming
It is possible to use “loaded” names: Names that are pre-loaded with meaning.
Scrooge — suggests a miser.
Donald — bumbling duck or leader of the western world.
Prince Charming — think about it.
You can have a great deal of fun getting the right name for your characters.
Others
There are many other techniques used to get more meaning from fewer words.
Genesis of Story “Perfect Day” — process I employed to write microfiction
Microfiction on Medium
Medium is a collection of bite sized pieces where very short stories seem to be the ideal fiction form. If you are able to craft these stories up to 300 words known as Microfiction, you just may fit right in.
A Couple of Microfiction Stories on Medium:
Portal
Frog or Prince
Chopsticks | https://medium.com/stevieadlerteachandblog/microfiction-techniques-microfiction-defined-abdd7570fe50 | ['Stevie Adler'] | 2020-05-27 05:22:59.180000+00:00 | ['Creative Writing', 'Microfiction', 'Writing', 'Writing Advice', 'Writing Tips'] |
Ethical source at WeTransfer | At WeTransfer, we have strong opinions about the value system in which we operate. In our products, these values translate to us keeping our products simple and straight-forward. In restraining ourselves in reaching out to our user base in search of engagement. And in declining to work with advertisers whose products or practices we deem questionable. We have also actively used these values to campaign issues like gun violence and medical debt.
With our company roots in sending large files, sharing is deeply ingrained in our culture. As such, it shouldn’t be a surprise that we publish a number of internal tools and libraries as open source. We used to publish this software under the MIT License; a short and readable license that permits anyone to use our technology freely, in whatever way they see fit. As of last week, we have begun switching projects over to the Hippocratic License. The Hippocratic license is based on the MIT license, but extends it to restrict the freedom to use to only those applications that do not harm others.
This is the relevant bit:
No Harm: The software may not be used by anyone for systems or activities that actively and knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of other individuals or groups, in violation of the United Nations Universal Declaration of Human Rights.
The Open Source Initiative — which has the monopoly on deciding which licenses officially count as Open Source — takes the view that free software should give complete and ultimate freedom to anyone to use the software under license for anything, explicitly including “evil” things. It takes the perspective that technology is neutral. In today’s world of fake news, rigged political campaigns and surveillance technology, this 20 year old definition of Open Source feels terribly outdated.
Photo Pablo Garcia Saldana via Unsplash
In society, we define our freedoms in two ways: freedoms to, and freedoms from. Consider freedom of speech: the freedom to speak your mind and not be prosecuted for it, even if others do not agree. Freedom of speech is the cornerstone of modern democratic society, but it is important to note that it is not absolute. In our modern society we couple this freedom to with — for example — the freedom from racism. Our desire to protect people from racism trumps our desire to allow people to speak their minds.
How we think about technology should be no different. As a human being, I want to grant people the freedom to use our technology for good, but also ensure freedom from harm caused by misuse of our technology. Because technology is not neutral.
Consider facial recognition. It helps me unlock my phone in a split-second, and automatically categorises my family photos. But it is also used to publicly shame jaywalkers in China (and worse).
I encourage everyone to take a step back and consider the ultimate freedom your current Open Source licenses grant. Perhaps it is time for a new movement to take over: from Open Source to Ethical Source. Technology can be a wonderful and incredibly powerful force. Let us all take responsibility for the technology we bring into the world, and do what we can to ensure it is used for good. | https://medium.com/wetransfer/https-medium-com-wetransfer-ethical-source-at-wetransfer-670d3b153b96 | ['Bastiaan Terhorst'] | 2019-11-13 05:01:01.816000+00:00 | ['Software', 'Open Source', 'Startup', 'Web Design', 'Tech'] |
Silliness and Showing up in the World as a Kid | To come back to our bench, I introduced the topic and specified the difference between ‘acting childish’ and ‘being childish’: the former is the outcome of having set ourselves free from the conditioning while being a responsible and accountable individual. The latter, though, is the proof our emotional maturity stopped at the toddler’s stage despite all the adulthood ‘masks’.
Dennis had the kindness to share two videos, afterwards, of my silliness in two different contexts:
A technical issue during a special session of my dancing class (a Halloween party): the music was interrupted several times, triggering a general frustration that I could see on people’s faces.
A professional gathering, more precisely an awards ceremony where most of the audience was too serious. All that mattered was taking a picture of the award and posting this ultimate proof of success on social media. I couldn’t witness it without doing something about it.
Should you be interested to see the videos, they were shared here:
“Why do you think I acted silly?”
This was the first question I asked after we watched the silliness manifestations. The folks in the group shared some interesting feedback. One of them hit home and made me think of the ‘former emotionally imbalanced perfectionist me’, who was craving stolen moments from the universe where she could reconnect with her inner child…
The answer was from Chris Ward, and was stipulating, “Silliness is a return to your true self, and discovery of your lost self.” The most hilarious of my stories aligned with this truth goes back eleven years, most probably.
I was in Paris on vacation. We went out to celebrate one of my friends’ birthdays. We decided to eat at “Chez Léon Champs-Elysées”, a restaurant chain famous for its delicious variety of mussels. The waiting line was huge, and we were getting impatient and starting to feel hungry. Thus, we checked the menu online so that we could decide in advance and be served immediately. We were a group of ten to twelve people. An important detail to mention here is that it is common to share — or at least taste — your companions’ meals! At that time, I was very religious and never drank a single drop of alcohol (prohibited by my birth religion). After tasting my friend’s dish, he informed me it was prepared with wine. Interestingly, even if the scientific me knew undoubtedly that, when cooked, the wine’s alcohol evaporates, the little girl somehow tricked my brain into believing I was drunk and triggered some unstoppable giggling. Dennis shared the following laughter during one of the sessions. You can multiply it by 5 to get the picture! It was so contagious that the whole restaurant joined us, customers and staff included.
From the author’s YouTube channel
Little did I know that my reaction would reveal ten years later that my inner child was surviving in a virtual prison her whole existence, and that she needed to be harshly abused by a malignant narcissist who took care of destroying her self-esteem before leaving her in the darkest places to wake up and set herself free through re-writing her subconscious program.
Some other feedbacks that resonated the most with me, nurtured my soul and that will stay with me forever because of their accuracy and, most importantly, how seen and validated they made me feel are:
Chris Ward (again): Demonstration of being fully self-expressed.
Paula Goodman: Immediate connection exists in the smile of silliness.
Catherine Fitzgerald: I think you were silly because you are very self-confident and not worried about what others think. Also, you are very connected to the energy in the room, and that you reflected it back to all.
What was the purpose of my silliness?
The feedback above brings us to my answer to the question. For the dancing class context, it was implicitly inspiring my friends not to take themselves too seriously when life throws hardship at them. The technical issue was something we couldn’t control, and letting it get the best of us would have done us no favors.
When our program is anything but our friend, that is full of emotional scars, unprocessed feelings, trauma flashbacks, mental patterns — you name it, there is no space between the stimulus and our response. Furthermore, our inner peace is volatile since depending on the external world. Result? We are easily triggered and defined by our circumstances. That was my life up to my re-birth.
I know that it not only takes time to do the inner work (for some insecure attachment styles a lifetime especially when people are stimulated by a self-centered goal) but also tons of humility, honesty, openness, bravery, and consistency. Also, it seems to me that the pre-requisite of breaking the denial circle to access our self-awareness endowment is highly challenging for many people.
Hence, by showing up in a silly way, I am hoping to offer a break to people’s anxious minds, and it bears fruit with the kind souls who are not judgmental, who are open to receiving others’ light, and who are not threatened by random manifestations of joy.
When it comes to the professional context, I was motivated by my inclination to model servant leadership: your real success is not the achievement/award you are so eager to share with the world instantly to get some external validation. It is who you are as a person: a cheerleader who can be genuinely happy for others’ success!
Which legacy do I want to leave?
I was interviewed lately by a lovely lady going by the name of Gurpreet Dhariwal. She is a fellow writer from Medium, the author of “My Soul Rants: Poems of a Born Spectator.” Her eBook is now available at Google PlayStore, Amazon, and Kindle.
One of her brilliant questions was “What do you hope for? Does it change something within you or around you?”
“Ahaaa! Well, it will bring us to my legacy! Contributing to reversing the imbalanced, selfish, immoral, manipulative world in which we’re living through educating the kind-hearted people, and getting them to realize they are the secret weapon whenever they decide to break the denial circle, become self-aware and pay the transformation’s price. Also, implicitly inspiring change through modeling servant leadership or even making a person’s day a little brighter is more than enough to make me feel beyond fulfilled!”
For those of you who might be interested in the whole interview, you can find it below:
And, since I mentioned the word ‘fulfilled’, I would love to share that Sarah Ratekin made a fantastic point by linking silliness to our happiness hormones! I am such a fan of the topic and explored it about a year and a half ago here:
In summary
One of the numerous outcomes of rewiring the subconscious program adventure is changing what fuels our silliness manifestations:
While it is exclusively self-satisfaction (some stolen moments of reconnection with the discriminated inner child) when we are still diving into life with an invasive program, being silly is an automatic strategy that servant leaders tend to use to implicitly inspire change.
Last but not least, Dennis shared such a powerful quote during our call: | https://medium.com/illumination-curated/silliness-and-showing-up-in-the-world-as-a-kid-81fc9b1c269a | ['Myriam Ben Salem'] | 2020-12-28 09:25:51.711000+00:00 | ['Self-awareness', 'Leadership', 'Inspiration', 'Self Improvement', 'Self'] |
Why should you build a UI Component Library? | The challenge that often arises -
How to develop a user interface that can be used across platforms while maintaining the right look and feel?
By using reusable components that have become a part of so many real-world design systems.
Design systems like Airbnb Design, Shopify’s Polaris, and Google’s Material Design have been using components as the building blocks for creating a user interface and reusing them for creating applications.
Building a component library is an optimized way to reduce the overhead that comes with maintaining multiple repositories for multiple components.
A component library is a single repository or a file or folder that consists of all the styles and components used in an app, a website, or software. This includes buttons, input fields, icons, text, a UI kit, etc., and manages source code for these UI elements, often leveraging CSS and JS frameworks.
This blog will go over the benefits, and the practices to build a component library for your design system that will facilitate tighter integration between design and development.
But before that let’s understand what would a component library mean to -
Developer: A reusable set of components helps
Standardize front-end development across different projects
Provide easier onboarding for new team members
Reduce maintenance overhead
Saves time building new apps
Designer: A set of reusable master components and a predefined style guide enables consistent design. It helps a digital product to scale effectively without requiring frequent additions or rework of design assets and files while maintaining a consistent UI system.
User: A successful component design system means less confusion, better navigation of your products, warm brand-familiarity, and enhanced user experience. For your business, this would imply better results.
In essence, using a component library helps standardize UX/UI and development across multiple teams and products. This is why great teams like Uber, Airbnb, IBM, Shopify, and many others work so hard to build it. | https://medium.com/sketch-app-sources/why-should-you-build-a-ui-component-library-854656b91a96 | ['Galaxy Weblinks'] | 2020-08-12 12:59:08.679000+00:00 | ['UI', 'Design Process', 'Design', 'Best Practices', 'Component Libraries'] |
An Introduction to the Python Range Function. | An Introduction to the Python Range Function.
Let’s learn about the python range function in detail.
Photo by Drew Beamer on Unsplash
Range:
The range type represents an immutable sequence of numbers and is commonly used for looping a specific number of times in for loops.
range(stop)
range(start, stop[, step])
start
The value of the start parameter (or 0 if the parameter was not supplied)
stop
The value of the stop parameter
step
The value of the step parameter (or 1 if the parameter was not supplied).
If the step is 0, it will raise ValueError.
The arguments to the range function should be integers (either the built-in int or any object that implements the __index__ special method).
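As a quick demonstration of that note (Pages is a contrived class, used here only to show the mechanism):

```python
class Pages:
    """Any object that implements __index__ can be passed to range()."""
    def __init__(self, count):
        self.count = count

    def __index__(self):
        return self.count

print(list(range(Pages(4))))  #Output:[0, 1, 2, 3]
```

range() calls __index__ to obtain the integer, so Pages(4) behaves exactly like the literal 4 here.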
Example 1: Only the stop parameter is given.
range(10)
start by default will be 0 and step by default will be 1.
stop is given as 10. The stop value is excluded; it generates values only up to 9.
It will return a range object containing numbers from 0 to 9.
We can convert the range object to a list using the list() constructor.
We can also iterate over it using a for loop.
r=range(10)
print (r)#Output:range(0, 10)
print (type(r))#Output:<class 'range'>
print (list(r))
#Output:[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Example 2: Only the start and stop parameters are given.
range(1,10)
step by default will be 1.
It will generate a sequence of numbers from 1 to 9.
r=range(1,10)
print (r)#Output:range(1, 10)
#Converting range object to list
print (list(r))
#Output:[1, 2, 3, 4, 5, 6, 7, 8, 9]
Example 3: start, stop, and step parameters are given.
range(1,10,2)
It will generate a sequence from 1, increment by 2, and will stop at 9.
r=range(1,10,2)
print (r)#Output:range(1, 10, 2)
#Converting range object to list
print (list(r))
#Output:[1, 3, 5, 7, 9]
Example 4:We can also decrement step by mentioning a negative number.
range(10,1,-2)
It will generate a sequence of numbers starting from 10, decrementing by 2, and stopping before 1.
Iterating through the range object using a for loop.
r=range(10,1,-2)
print (r)#Output:range(10, 1, -2)
for i in r:
print (i)
'''Output
10
8
6
4
2
'''
Example 5:
r=range(0)
print (r)#Output:range(0,0)
print (list(r))#Output:[]
r1=range(2,2)
print (list(r1))#Output:[]
Example 6: step is given as 0. It will raise ValueError.
r=range(1,10,0)
print (r)
#Output:ValueError: range() arg 3 must not be zero
Example 7: start, stop, and step can be negative numbers also.
r=range(-10,-20,-2)
print (list(r))
#Output:[-10, -12, -14, -16, -18]
Example 8: start, stop, and step are given as variables a, b, and c.
a=1
b=5
c=2
r=range(a,b,c)
print (list(r))
#Output:[1, 3]
Example 9: The range() function doesn’t support float numbers. It will raise TypeError.
r=range(1.5,10.5)
print (r)
#Output:TypeError: 'float' object cannot be interpreted as an integer
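One more property worth knowing: a range object is a lazy sequence, not a stored list. It supports len(), indexing, slicing, and fast membership tests without ever materialising the numbers:

```python
r = range(0, 1_000_000_000, 2)
print(len(r))  #Output:500000000
print(r[10])  #Output:20
# Membership is computed arithmetically, not by scanning a billion values.
print(999_999_998 in r)  #Output:True
print(r[:5])  #Output:range(0, 10, 2)
```

This is why range is safe to use with huge bounds: it stores only start, stop, and step.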
The Big O End! Part 3 of 3 | The Big O End! Part 3 of 3
Wrapping Up Our Last 3 Growth Notations
3d Wireframing in Adobe Illustrator — With Gradient on Editable Text
Big O Notation is such an immense topic to learn and understand. I read an article that said ‘you don’t have to be a math genius to understand it’. They were lying. Writing these articles, though, has helped me get a better grasp on the subject, and just this past week while in an algorithm meetup, I had my first AH-HA moment when someone asked me if it was possible to develop the algorithm with an O(1)…before I even knew what was happening…I spouted out “No, because we will always need to iterate over the entirety of the elements, the best we can even hope for is an O(n).” I genuinely looked around to see who had said that!
This article is going to cover the last 3 growth notations I set out to cover. If you’d like to check out the other 2 parts of this set of articles, I’ll link them at the end of this post. Message me if I have missed anything, misspoken, or am flat out wrong about any of my information — love learning and love being right!
O(n²)
This notation is known as Quadratic Time. This time complexity is used to denote squaring, or multiplying a number by itself. We have a quadratic (or polynomial) time complexity every time we nest iterations, such as a for loop inside another for loop. I found an easy-to-understand, great example of this in a link on Stack Overflow, and changed their Python example to JavaScript:
linear example:
for (const x of Array(10).keys()) { console.log(x) }

quadratic example:
for (const x of Array(10).keys()) {
for (const y of Array(10).keys()) {
console.log(x, y)
}
}
If you open up your console (command-opt-J on a Mac), copy in the linear example and hit return, you’ll see the log returned is only 10 elements long, printing 0–9. If you then copy in the quadratic example and hit return, you will receive 100 elements. Because of the 2nd for loop, we end up with a logged set that prints x=0 paired with the 10 values of y, and continues in this pattern, incrementing x while x < 10. The input to this function is 10 and we end up with 100 iterations, a quadratic growth notation where n (the number of elements) is squared. What if the input was 100? Our iterations would increase to 10,000, because O(100²)=10,000. The effort, or time, needed to run the algorithm corresponds directly to the square of the number of elements.
O(2ⁿ)
A growth notation of O(2ⁿ) is referred to as Exponential Time. In this growth notation, the number of steps it takes to complete the algorithm is 2 raised to the n’th power, with n, as always, being the number of elements. You can imagine that this kind of growth is, figuratively, out of control, and will quickly become immense with every single element added to our data set.
Exponential growth happens when the algorithm must evaluate every permutation or combination in order to complete its task. A quick example: maybe you are hacking a password (don’t do this, obviously!); worst case scenario, you have to try every possible combination of letters/numbers in order to get the correct combination. If you started with 10 elements, you’d have 1024 combinations to work through; if I then ask you to figure out the password, but now it has 11 elements in it, you’d have 2048 combinations to try. This is exponential growth: every added element requires the equation to double itself again. This is the recursive Fibonacci algorithmic example with exponential growth¹ (although its space complexity is O(n)):
function fib(n) {
  if (n <= 1) {
    return 1
  }
  return fib(n - 1) + fib(n - 2)
}
O(n!)
Factorial Time complexity. The factorial of a number is that number multiplied by all positive integers less than it. For example²:
5! = 5x4x3x2x1 = 120
20! = 20x19x18x17x16x15x14x13x12x11x10x9x8x7x6x5x4x3x2x1 = 2,432,902,008,176,640,000
The most common example of an algorithm with a factorial time complexity is finding the permutations of a string. If you have the string 'fan' and you have to find all permutations, you will have an output of 6 (3!), but if you have the string 'hello' there will be an output of 120 possible solutions (5!)!
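A minimal sketch of such a permutations function (the implementation here is my own, not from the original post):

```javascript
// Return every ordering of the characters in str: n! results for n characters.
function permutations(str) {
  if (str.length <= 1) {
    return [str];
  }
  const results = [];
  for (let i = 0; i < str.length; i++) {
    // Fix one character, then permute the rest recursively.
    const rest = str.slice(0, i) + str.slice(i + 1);
    for (const perm of permutations(rest)) {
      results.push(str[i] + perm);
    }
  }
  return results;
}

console.log(permutations("fan").length);   // 6
console.log(permutations("hello").length); // 120
```

Note that repeated letters (like the two l's in 'hello') still produce 5! orderings here, since each position is treated as distinct.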
Thank you for reading my Big O Notation series! I can't believe how much I feel like I learned, but it's such a minuscule amount of info from the Big O picture. Have a great week y'all! | https://medium.com/the-innovation/the-big-o-end-part-3-of-3-c559d109265 | ['Osha Groetz'] | 2020-11-17 15:50:09.509000+00:00 | ['Software Development', 'Coding', 'Software Engineering', 'Programming', 'Big O Notation'] |
Design Strategy That’ll Make Your Product…Like…WOAHHH! | Design Strategy That’ll Make Your Product…Like…WOAHHH!
Digital product design is a process for creating great products that’ll satisfy your client’s business goals and the needs of their users.
Only the great will survive! It’s super important that the bones of your product are as strong as the aesthetic. In order to have an effective strategy there are some key things you need to consider during your design process:
Understand the business, technical and domain opportunities, while working with the product requirements and constraints.
Recognise the desires, motivations and needs of the people who will use these products.
Design products that have useful forms, content and behaviour.
Create products which are economically viable and technically feasible.
Acknowledge your technical restrictions from the get-go, so the designs are not too time/money consuming during development.
Design Strategy at EL Passion. Image by UI/UX Designer Ela Kumela
Successful Product
The market is competitive, and you need a great digital product that is not just beautiful, but also provides a delightful experience for its users. This means the product should be easy to use and add outstanding value to the user's life.
Not to say that visual appeal isn’t important at all, but if we compare designing an app to…say…building a house, we need to ensure that we build a strong foundation. No matter how beautiful the house, it won’t stay standing with a weak foundation, nor will it be a functional space with a poor room layout. Same goes for building a great app that will endure. Designers need to make sure that the UX design is stable, so that the beautiful visuals are not just masking a dud.
Digital Product Design Strategy
The simplest way to discuss the digital product design strategy would be to divide it into 3 sections: UX strategy, UX design and Visual Design. As mentioned above, we need a strong foundation before we can start making it look pretty, so we need to start with UX and then move our way into the Visual Design. This process needs to be worked through in this precise order, because each step relies on the previous one.
UX Strategy
The first portion of design strategy involves understanding the client’s business, and understanding it well! You need to familiarise yourself with all elements of their business that will relate to the success of the product. This includes doing in-depth user-research and analysing the performance of your user experience through UX evaluation. This portion of your strategy is key, because these details will help you to determine the market fit of the product.
It’s also necessary to perform a competitive analysis in your UX strategy. Take a look at what competitors are doing wrong, and what they are doing right. You can use this information to make sure that your product solves the problems other products may have, while also incorporating their best qualities.
UX Design
Wireframe Example
Once you’ve determined your UX strategy, you can start your UX design, which is all about functionality. The user experience includes all aspects of the end-user’s interaction with the company, its services, and its products. The building blocks for user experience are created through experience architecture, which needs to be designed with well thought-out layouts of specific screens and intuitive navigation.
The next step would be to move to prototyping, using wireframes and interactive prototypes. Once prototypes are created, you need to perform usability testing, making sure that design decisions were valuable for real users. Here, you’re checking that you’re creating a relevant product that is enjoyable for its users.
Visual Design
Finally, once your UX strategy and design are completed, you can start on your UI design. UI design is the application of your client's brand, in a visual language, to the UX design already completed. Once the foundation is built, tested, and proves to be strong, you need to start designing the aesthetic elements of your product. The Visual Design is about making your product's interface memorable and visually delightful. The styles you propose should be informed by decisions made in the UX strategy phase. The Visual Design phase is about making the client's product stand out from the crowd, so you need to use your research to achieve success.
At this point you need to test the product again for usability with real users, making sure that everything still functions well and that you've solved any issues that may have come up throughout the design process.
App Interface Designs by EL Passion Experts
Your Product Should Reflect Client and User Needs
Your design process should create a great product that will satisfy your client's business goals and the needs of their users.
Understanding your client’s business is an integral part of this process, as well as recognising what technical restrictions you’ll need to overcome. Recognising the desires and motives will help you to create a viable product that is functional and visually appealing.
One cannot stand without the other, so make sure you're utilising your UX strategy, UX design and Visual Design together. Finally, test and test again, to ensure the best user experience! | https://medium.com/elpassion/design-strategy-thatll-make-your-product-like-woahhh-3af9298720e3 | ['Sona Kerim'] | 2018-09-03 08:40:50.305000+00:00 | ['Design', 'UX Design', 'App Development', 'UI', 'Strategy'] |
A Tool for useEffect Dependencies in React | A Tool for useEffect Dependencies in React
Triggers give you the power to determine execution time
Does this look familiar? Photo by the author.
React Hooks provide a convenient ecosystem for running functions in response to changes in those functions' dependencies. In a sense, useEffect is a similar idea to a database Hook. A database Hook allows you to perform actions after a database operation.
Say you’re building a game, and whenever a player’s score updates, you also want to update the high score if necessary. You could do this before writing to the database with some regular business logic, but it arguably bloats the update function. In my opinion, it also loosely breaks several best practices. Specifically, a function should have one purpose and minimal side effects. And once written, its functionality should be extended upon but not modified.
An alternative route would be to have a post-update Hook. This Hook listens for changes to the player’s score, and if that score is greater than the high score, it performs a separate update with the new value.
How is useEffect like a database Hook? It allows us to subscribe a function to one or more variable changes that are identified in the dependency array.
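The original article embedded a code snippet here that was lost; a sketch of what such an effect likely looked like (the component shape and the setHighscore setter are assumed from context):

```javascript
import { useEffect, useState } from "react";

function ScoreBoard() {
  const [score, setScore] = useState(0);
  const [highscore, setHighscore] = useState(0);

  // Runs after render whenever score or highscore changes,
  // like a post-update database hook on those two values.
  useEffect(() => {
    if (score > highscore) {
      setHighscore(score);
    }
  }, [score, highscore]);

  return null; // rendering omitted for brevity
}
```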
Anytime score or highscore changes in the previous snippet, the function inside the useEffect will run. Notice that this can actually run two times: it always runs when score changes, and it will run a second time if highscore is updated. You can see now that this is conceptually similar to a database Hook.
This is a relatively simple idea, but once understood, it’s powerful! | https://medium.com/better-programming/a-tool-for-useeffect-dependencies-ca4086b085bb | ['Nick Harder'] | 2020-11-25 15:56:55.025000+00:00 | ['JavaScript', 'React', 'Reactjs', 'Nodejs', 'Programming'] |
How Entitlement is Silently Ruining Your Life | Venture capitalist and author Guy Kawasaki once wrote, “Entitlement is the opposite of enchantment.” I’ve lived this so I know it to be true.
For most of my life I’ve been in pretty good situations both socially and financially. But right after I finished my degree in 2010, my family was hit with significant money problems.
We had suffered those before but this was different. Bills went unpaid. Dinners had to be rationed. Water was lunch. Thankfully we had friends and family donate stuff for us.
While this time in my life lasted less than a year, I had to call it like I saw it: we were poor. And yet, I had never felt more alive.
I didn’t feel more alive because I was hungry more often. I didn’t feel more alive because I was worried they’d cut off the light and water. I didn’t feel more alive because I personally had debts to pay and wondered if they’d come for my kneecaps.
I felt more alive because I gave up the chase for more. I was humbled. I knew things could be worse and was more grateful for the little we had.
When the money came back, I had mixed feelings. This life that was flowing through my veins, this enchantment Kawasaki wrote about and my ability to finally embrace the present moment… what would happen to it? Would I be back to being just okay, future-oriented and thinking I deserved more and more? You bet.
I tried to fight it but I eventually lost. I said to myself, “Am I to believe that the only way I can feel truly alive is to be food insecure?” That didn’t make sense. Clearly some people are rich and obnoxious, but there are some that were humble too. How can I be like them? And that leads us to the first way entitlement is ruining your life.
1. Entitlement is Unconscious
When we think of someone being entitled, it’s the person barking at the coffee shop for their order or the person who thinks that because they are nice to another person, that person should do whatever they want, or whenever we put out little effort and expect immediate and/or big results.
And we’re right! Those are great examples of entitlement. But sometimes we fall prey to these tendencies. I for one hate when the light turns green and people are honking their horn, but one day I did it. I was in a rush and it seemed like the person in front of me was driving Miss Daisy. Usually, I’d just mutter to myself, but it’s the same entitlement!
These people who engage in these selfish behaviors are not villains. We think they are because we aren’t doing what they’re doing (at the moment). We think they’re bad and we condemn their actions but if we practice some self-awareness, we’d find that we’re guilty sometimes too. The solution to being unconscious and mindless is to be mindful and self-aware.
2. You Misunderstand Success
What I mean by this is that there’s this cross-cultural notion that successful people are entitled, arrogant pricks that believe that everyone should bend to their will.
As a result, people who want to be successful adopt this way of being before they’ve done anything of repute. This is a problem because they misunderstand success.
Yes, we have the laundry list of celebrities and rich people who abuse their power and are overall terrible but their success is not because they were entitled bastards. They were plugging away at their trade, became successful and then with the newfound power came the tendency for corruption. Also, there was a fear of losing the power, which made them even worse.
The only people who are entitled bastards who are rich were born into wealth (or were kids with parents who never told them no.)
Furthermore, if being entitled equals success, what of the celebrities and wealthy folk who aren’t entitled? They’re thriving and are not terrible. Why is that? Because being celebrated and providing the world with great stuff never had anything to do with being entitled. If anything, hard work and humility are why they made something of themselves.
3. You Have an Unhealthy Attitude with Volunteering
There are two ways one can volunteer. On the one hand, you do something for no pay but for a benefit. On the other hand, you do something for no pay and no benefit.
A lot of the time, if we can't see what's in it for us, we won't do it. But here's the thing. If there was info or training or food you desperately needed and you needed someone to sacrifice some time for your benefit, you'd hope that they'd do it.
Sometimes people need help. Your help. They’re like babies in the sense that they can’t help you, they can’t give you anything in return and they may even leave a mess for you to clean up, but as a being of planet Earth who respects other beings, open your heart to the less fortunate. That could’ve been you. Actually, it was you once upon a time.
4. You Think Life Owes You
In addition to some of the rich and famous being entitled and obnoxious, there are some poor or average people that are entitled and obnoxious. How is that possible? Because they’ve been kicked around by life to the point where they think they are due a break.
I can relate to this. I was lucky to take my human rights and basic necessities for granted but socially and emotionally, I wasn’t as lucky.
There eventually came a point in high school where I figured I was due a break (or at least a girlfriend). None of these things came until I could…
5. Embrace Reality
Entitlement is thinking that you are inherently deserving of privileges or special treatment. Entitlement is essentially expectation. The antidote to expectation is humility.
When my family and I were struggling to make ends meet, my attitude changed. Life humbled me. Life didn’t humiliate me, it just grounded me. Expectation makes you fly off the handle and your attention is diverted here, there and everywhere. Humility keeps you rooted in the present moment.
It isn’t that you stop having ambitions. You just act from where you are, not where you imagine yourself to be. This is crucial because people who aren’t humble are too chummy, too flighty, too careless. When you’re humble you know you aren’t owed anything. You also embrace what is and the pros and the cons of it because there are always pros and cons to everything.
The great thing about this is that humility can be achieved by looking at the four pointers above.
You aren’t humble because you don’t know that you’re not.
You mistakenly connect arrogance with success.
You can’t give of yourself without getting something in return because you think you’re above that when you’ve literally benefited from that yourself.
And finally, either because you were born with a silver spoon in your mouth or no spoon at all, you think life owes you something when you’ve contributed nothing to life.
Now it’s time to stop worrying about what you can get and concern yourself with what you can give. | https://alchemisjah.medium.com/how-entitlement-is-silently-ruining-your-life-e3f9d03272ae | ['Jason Henry'] | 2019-08-08 04:44:11.770000+00:00 | ['Self-awareness', 'Self Improvement', 'Life Lessons', 'Self', 'Life'] |
Highlights from Product Marketing Summit New York 2019 | Macro Theme: Product Marketing as a Strategic Function
Another thread that reappeared throughout the sessions was just how strategic of a role product marketing is.
Now, it’s obvious that a lot of projects we handle as product marketers have strategic elements to them — messaging and positioning, buyer personas, market and competitive intel, etc are each broad-reaching topics. But on even deeper level, the strategic aspect comes into play through our role as guardians of the overarching story.
All day, speakers kept coming back to the fact that product marketing owns the narrative and story. Not just on a marketing level, but on a company and product level.
This involves thinking through some big picture questions — much larger than a series of campaigns, or even an integrated marketing plan. It’s about grappling with some of the most fundamental questions of identity –“Who are we? What do we stand for? And where are we heading?”
While you might think that this responsibility falls to the Strategy team, or the C-Suite, the consensus is that product marketers are an important driver here.
For instance speakers from:
Spotify explained that a big part of the role is looking at the overall narrative, and synthesizing as many views as possible of what that story is going to look like to help other groups understand the way forward
explained that a big part of the role is looking at the overall narrative, and synthesizing as many views as possible of what that story is going to look like to help other groups understand the way forward Cockroach Labs summed it up as: “Product Marketing is the process of building and delivering a core narrative. This is what we do.”
summed it up as: “Product Marketing is the process of building and delivering a core narrative. This is what we do.” Conductor argued that even the product roadmap is essence a narrative and story, and product marketing has a big role to play in helping define the “why” behind feature that’s being developed and the value it brings
It can’t be said better than the Ben Horowitz quote cited by one speaker that says: “If we have the right company story, we can take over markets.”
These questions are the fundamentals that come before anything else. Before you can execute, you have to have a starting point: what is Point A, where we are today, and what is Point B, where we're trying to get to? And does everyone fully understand this direction?
That’s where we bring value. Focused on the story — and seeing it as the underlying purpose of our role.
So when we wake up to a day filled with to-do lists and booked calendars, it's worth taking a moment to remember that the biggest contribution we may make that day is to help others see the big-picture strategy. If nothing else, it might be the difference in taking over a market one day soon. | https://medium.com/we-are-product-marketing/highlights-from-product-marketing-summit-new-york-2019-e31d95981dfd | ['Rebecca Geraghty'] | 2019-03-31 14:18:56.031000+00:00 | ['Product Management', 'Marketing', 'Startup Lessons', 'Product Marketing'] |