CRPT joins Ethos Universal Wallet

We are pleased to announce that the CRPT token has been added to the list of coins available in the Ethos wallet. This means there is now one more reliable and convenient way for you to store, track, receive and send Crypterium tokens.
Compared to other ‘hot wallets’, Ethos is the one to check off all the boxes. It has support for Bitcoin, Ethereum and 100+ cryptocurrencies. Apart from that, Ethos offers asset allocation, cold storage, managed keys, fund transfers, mobile support, and a software interface.
Where else to store CRPT
CRPT is an ERC-20 token, so it works with MyEtherWallet, MetaMask or any other ERC-20 compatible wallet. In addition, you can also keep your CRPT in the Crypterium App, which, in fact, gives you the key to all the other features we have to offer. Once you store your cryptocurrencies in the app, you’ll be able to:
Top up your phone, or buy Skype, Steam and Viber vouchers
Buy crypto straight from your bank account
Receive, store and send cryptocurrencies in seconds
Exchange cryptocurrencies at the best rates
Follow real-time cryptocurrency rates
The current version of the app allows you to store your funds in BTC, ETH, LTC and CRPT, and new cryptocurrencies will be added regularly.
About Crypterium
Crypterium is building a mobile app that will turn cryptocurrencies into money that you can spend with the same ease as cash.
Shop around the world and pay with your coins and tokens at any NFC terminal, or by scanning QR codes. Make purchases in online stores, pay your bills, or just send money across borders in seconds, reliably and for a fraction of a penny.
Learn more at http://crypterium.com/ and join the discussions in our Telegram Chat.
When Can Restaurants Reopen?

While takeout and delivery are available pretty much everywhere, the rules for indoor and outdoor dining differ from place to place. More frustratingly, they are subject to frequent change as the pandemic situation evolves. With vaccinations off to a slow start, a definitive reopening and complete return to normal is still off on the distant horizon. In the meantime, we’ve compiled a snapshot to help you understand when restaurants can reopen in the biggest states.
Information last updated: January 26, 2021
When can restaurants reopen in California?
Takeout & Delivery: Allowed
Indoor Dining: Not allowed (see zone map for exceptions)
Outdoor Dining: Allowed (subject to local regulations and capacity restrictions)
Covid-19 struck California especially hard towards the end of 2020, which led the governor to issue regional stay-at-home orders. While takeout and delivery were never banned, all in-person dining was paused in affected regions.
The order was recently lifted, which means that many localities are reopening outdoor dining, including San Francisco and Los Angeles. Cities are expected to have additional capacity restrictions (many at 50%) and other requirements to ensure the safety of diners.
In addition, California has lifted the 10pm — 5am curfew that was in effect during the stay-at-home emergency, though San Francisco will continue to enforce it.
California’s Covid-19 situation (post stay-at-home order) is summarized using a color-coded system, where the most severe tier (purple) means only outdoor dining is allowed. Currently almost the entire state is in the purple zone.
When can restaurants reopen in Texas?
Takeout and delivery: Allowed
Indoor dining: Allowed (50%-75% capacity)
Outdoor dining: Allowed (no capacity limit, social distancing measures required)
Texas has kept in-person dining available throughout the end of 2020 and into 2021. There is a statewide 75% capacity limit on indoor dining, with an automatic trigger that lowers capacity to 50% if hospitalization rates exceed 15%. This was recently triggered in the Austin area and is also in effect in other cities, including Dallas and Houston.
Adding to the confusion have been conflicting directives between local and state authorities. Austin has a 5-stage classification for pandemic severity, and at stage 5 it has recommended that bars and restaurants shut down indoor dining, limit outdoor dining to 50%, and end service at 10pm. However, these recommendations are contradicted by orders from the state, which has led to lawsuits being filed, further muddying the picture.
What about New York?
Takeout and delivery: Allowed
Indoor dining: Allowed excluding NYC (allowed in certain zones, 50% capacity)
Outdoor dining: Allowed
New York was an early epicenter of the Covid pandemic in spring 2020. While the state brought the virus mostly under control through the summer, by the fall cases had surged again, and the state took action to slow the spread.
Throughout most of the summer and fall, NYC restaurants could operate indoor dining at 25% capacity. However, due to the rise in cases, the governor shut down indoor dining in NYC in December 2020 and has yet to reopen it, despite protests from restaurant associations.
Across the rest of the state, the rules around indoor dining are tied to the state’s color-coded microcluster strategy. Originally the policy only allowed indoor dining in yellow zone areas, but following recent lawsuits the state is planning to expand indoor dining to orange zones as well. All indoor dining is limited to 50% capacity.
And Florida
Takeout and delivery: Allowed
Indoor dining: Allowed
Outdoor dining: Allowed
Florida has one of the most permissive regimes for restaurant operation during the pandemic. Since September, the state has allowed operation at pre-pandemic levels without capacity restrictions. Localities are allowed to impose additional restrictions with formal public health justifications, though the state requires that indoor dining capacity not be dropped below 50%. Miami is one of the cities that has imposed an additional 50% capacity restriction.
Dumbwaiter Systems — VersaLift Attic Storage Lifting Solutions

“My garage attic, my life…”
Some people literally believe in it. However, without taking you and us on a philosophical journey of storage, let’s get straight to the point.
Your home may not have ample storage space, for various reasons. Commercial organizations generally find numerous ways to manage their storage; if they still lack space, they use dumbwaiter systems to make effective use of space across multiple levels. You can take a note from them and consider a DIY dumbwaiter project for your home.
You don’t have to limit storage lifting systems to attics and garages; you can check out other feasible areas in your home, too. Pondering this investment? Don’t worry at all! VersaLift lifting systems make a versatile addition to your home.
Advantages of VersaLift Systems
You can consider a variety of storage lifting systems and still not find what you need. Nevertheless, in VersaLift systems you will discover a plethora of practical and reliable features. We have listed their advantages below.
Easy to install & convenient. If you can find the time, you can install it all by yourself; calling a friend or a handyman for help is also an alternative. Since it is easy to install and provides convenient access, you have nothing to worry about.
User-friendly & safe. These two aspects matter the most. You can simply use it whenever you want. It is way safer than a ladder and saves your back from unwanted pain.
Controllable & affordable. You can explore the various control options of these lifting systems. And it doesn’t cost you a fortune at all!
Carrying a Conversation

Business networking is not about seeking referrals but about building relationships. It obviously takes time to build a relationship, but to build a meaningful one you need a deeper rapport, which in turn requires frequent conversations.
The first thing, then, is to get past the initial introduction when you meet someone for the first time. A meeting can stall once the pleasantries have been exchanged and each person is seeking a thread to continue the conversation. It is in times like these that I feel my experience can be of help to you, so I share a few topics of conversation that can lead to a much longer and more animated interaction. Dale Carnegie, in his masterpiece “How to Win Friends and Influence People,” repeatedly reiterates the need to be a good listener. I need to add here that you also need a good topic of conversation to make the task of listening lighter and to help the exchange of words build a deeper rapport.
Here’s my list of 10 topics of discussion to spark a conversation.
1. What made you start on your own?
2. What is the first thing you bought with your own money?
3. What was your most famous moment?
4. What’s the best advice you ever followed?
5. What’s the one fear you want to overcome?
6. What is the one decision in life you want to revisit?
7. Who is the most famous person you met?
8. What do you own that you will never sell?
9. Where do you want to travel next?
10. What about your children makes you proud?
These are my suggestions; feel free to improvise as long as you address the basic tenets these questions are built on. You have to give people an opportunity to speak about something that relates to their past, and more importantly to their future. The conversation should make them reminisce about the good things in life, feel proud of their achievements, get appreciation and stay hopeful.
It would help if you mentally prepared your own answers to these questions in case you are asked to respond to them as well. And do not commit the cardinal sin of offering your answer before getting a response from the person you posed the question to.
I would love to get feedback on these topics and to know what else has worked for others.
Armed Forces Day

May 16th, 2020: the Texas Veterans Land Board would like to congratulate and thank all who have served as members of the United States Military. On this 70th celebration of Armed Forces Day, we salute all who have protected this country in wartime or in peace.
The Armed Forces are the combined men and women of the United States Army, Navy, Marine Corps, Coast Guard, and Air Force. The branches are listed in order of their legislative establishment, from the earliest in 1775 to the most recent in 1947.
Click here for the article on Armed Forces Week — U.S. Army
Click here for the article on Armed Forces Week — U.S. Navy
Click here for the article on Armed Forces Week — U.S. Marine Corps
Click here for the article on Armed Forces Week — U.S. Coast Guard
Click here for the article on Armed Forces Week — U.S. Air Force
Click here for the article on Armed Forces Week — Children of Fallen Patriots
The Texas Veterans Land Board thanks all who are now or have in the past served as a member of one of these distinguished branches. Thank you all for your service!
If you are a Veteran, Click Here to Sign Up to stay informed on your benefits with the Texas Veterans Land Board.
Abuse Culture Case Study: The Raw Deal

I do currently have an offer for housing from another single parent. But…
Red Tape B.S. Continues
Update: Yeah, it’s confirmed. The housing services can only help if I get a place by myself. Which they should have fucking told me months ago because with that racist eviction and my fucked up credit, no one’s going to rent to me, anyway. So that’s time wasted on this wild goose chase when I could have at least been prepared to do it on my own.
Tether is setting a New Standard for Transparency, that is Untethered from facts

The evidence against Bitfinex & Tether is substantial
The small cracks in the Bitfinex/Tether story that I found in 2017 have exploded into an ocean of evidence against Bitfinex & Tether. Yet despite all of the evidence against them, people still want to believe Bitfinex and Tether, or give them the benefit of the doubt.
Bitfinex & Tether shareholders and supporters attacked me and called me the liar, when the reality was that Bitfinex & Tether were the liars. It’s no wonder Bitfinex, Tether, and their shareholders so rabidly attacked me: they knew their own vulnerabilities.
Since June 2018, we have had the academic paper “Is Bitcoin Really Untethered?” by John Griffin, which made it into the Journal of Finance. I’m in a footnote on page three. The paper alleges market manipulation by Bitfinex & Tether (or close associates) via the printing of unbacked Tethers. Since the paper came out, unbacked Tethers have been proven by the New York Attorney General.
Most importantly, the Journal of Finance and a professional academic paper are probably a little bit more convincing than a random asshole on Twitter.
However, despite this paper coming out and making it into the Journal of Finance, the Bitcoin community simply ignores it, treating it as if it came from a random asshole on Twitter.
Strange. The cryptocurrency community chooses to believe proven liars, instead.
In November 2018, just after the October 2018 seizures, it became public that Bitfinex & Tether were under criminal investigation by the Department of Justice. Not surprisingly, Bitfinex has not recovered those funds, either.
However, that too was pretty much ignored, as if it had been posted by a random asshole on Twitter.
Strange.
So, in 2018 we had a professional academic paper conclude Tether shenanigans, and a DOJ criminal investigation into Tether… but the cryptocurrency community still shouts “FUD”, and allegedly people keep sending their billions of dollars to Tether.
Bitfinex & Tether then start to issue tens of billions of new Tethers, and the cryptocurrency bull market suddenly comes roaring back to life throughout 2020 and 2021.
Even though Tether has competitors that are not publicly known to be under criminal investigation and have not been proven liars… Tether reports that so-called “investors” are now sending tens of billions of dollars to it… to buy Bitcoin… as opposed to… just buying Bitcoin. Why not use a competitor’s stablecoin?
What is so special about Tether that investors choose Tether? All stablecoins should have the same price, right? $1.00. (Hint: The answer is, leverage, Tethers are cheaper than a dollar.)
Tether now claims over $62 billion in so-called “reserves”.
Tether, the ‘stablecoin’ that lied about their reserves in 2017, remains the biggest ‘stablecoin’ of them all.
However, some things remain unchanged from 2017… Tether claims to have $62 billion in reserves, but still cannot perform a simple audit of its balance sheet.
Hundreds of millions of dollars in free interest payments every year, yet they can’t conduct an audit.
Part of the settlement agreement with the New York Attorney General forced Tether to produce a breakdown of its reserves. Tether had 90 days to complete the breakdown.
Everyone was waiting with bated breath. May 19th was the deadline.
“Bitfinex’ed is going to get REKT when Tether publishes the breakdown of the reserves! FUD DESTROYED!”
Just prior to their 90 day deadline, Tether produces the breakdown of their so-called reserves, and expectedly, my “FUD” is shot out of the water and I’m forever just a random asshole on Twitter that was butthurt and wrong about everything… or was I?
Short and uninformative.
Tether produces a single-page breakdown of its so-called “reserves”, a pie chart apparently constructed by a fourth grader after spending ten minutes learning Excel for the first time, and the majority of Tethers are allegedly backed by so-called “Commercial Paper”.
So here’s a thought experiment: if Tether’s reserves were good, they would have been producing a breakdown of their reserves voluntarily since 2017, especially as a way to combat relentless assholes on Twitter criticizing them.
Nothing actually prevented them from doing this before. It’s not an audit, but it shows some transparency; still, they refused to do it until forced to by the New York Attorney General.
Why didn’t they do this before? Because Tether knows that the professional financial community is going to be able to figure out they’re lying. It lights a fuse for them, and that fuse is burning.
It’s my belief that Tether’s reserves are junk. Tether not disclosing the commercial paper it owns is suspicious, especially in light of its known history of deception.
Tether could have produced a breakdown of the reserves showing who they are loaning money to, what commercial paper they own, what bonds they own, and so on. They had 90 days to put together something substantive.
A stablecoin’s reserves should be in safe assets, and letting people see those assets shouldn’t cause anything bad to happen.
Tether clearly doesn’t want to do that, and it’s for a good reason. Here’s why:
In May, it provided a breakdown of these reserves, which Tether claims included just under $30bn in commercial paper, a short-dated investment similar to cash. Such holdings of companies’ short-term debt would make it the seventh largest in the world. But this reported accumulation has largely gone unnoticed on Wall Street, according to several of the biggest players in the market including bank traders, analysts and money market funds. “We’ve got lots of inquiries and heard lots of discussion, but have not seen any active participation,” said Deborah Cunningham at Federated Hermes. “Until last week we hadn’t really heard of them,” said a trader at a large bank. “It was news to us.”
Tether reported that it’s one of the largest purchasers of commercial paper, but traders in the commercial paper markets are not seeing Tether participating in the US commercial paper market.
If Tether didn’t have to reveal the makeup of its reserves, this discovery would not have been possible. Tether knows this, which is why it refused to be transparent until it was forced to.
What commercial paper is Tether buying? This is akin to Bernie Madoff Investments, a financial investment firm that doesn’t make any trades…
Throughout Tether’s history, every time they did not want to show something, we eventually found out why. They didn’t want to do an audit in 2017 because they didn’t have a bank account and were commingling funds, along with banking with a now-indicted money launderer currently being prosecuted by the Department of Justice.
It’s my opinion that they do not want to show the breakdown of the commercial paper, loans, and so on, because whatever is in there, if anything, would be catastrophic for them.
If it was good, they’d show us. If Tether was running a legitimate operation, they’d want nothing more than to put this issue to bed once and for all.
Visualization Techniques

If you ever need to know any kind of charting or visualization technique, look no further: check out the Periodic Table of Visualization Methods. It is interactive, so give it a try and hover over the entries.
How to (mis)use Pipedrive from sales to delivery at our startup company Adusso and beyond?
I wanted to figure out whether I could set up another pipeline in Pipedrive to follow the stages after the Sales Pipeline, once a deal is won and begins to be delivered as a project, for example. I named this the Project Pipeline, with the stages Preparation, In progress, Delivered and Closed.
The problem is that just duplicating a deal and moving it to the Project Pipeline won’t work: either the sales statistics are lost, or the deal activities do not follow into the delivery funnel (duplicating a deal copies neither the days spent in different stages nor the activities and their content).
My requirements for handling these two pipelines and associated deals were:
The original deal, to be marked as won, must retain all its content, most importantly the deal value and the time spent at different stages, to keep up the Sales Pipeline statistics about deal flow.
The new deal in the Project Pipeline must get all the activities and notes from the original deal to maintain the communication history with the customer, for seamless continuity from sales to delivery.
This is how I managed to do it:
1. Duplicate the deal in the Sales Pipeline
2. Delete the original deal
3. Go to the deleted original deal
4. Merge the deleted original deal with the copied deal, retaining the values of the copied deal in case of conflict
5. Reopen the original deal
6. Mark the original deal as won
7. Move the copied deal to the Project Pipeline
Now we are able to “misuse” Pipedrive to maintain a funnel for our customer projects and ongoing service delivery. The only tricky part is this seven-step mouse-click procedure upon winning a deal, but it’s worth it, as our deals are sizeable, typically with large healthcare providers or big Electronic Health Record system vendors. Hope this helps anybody who has been considering the same, or who is looking for the simple dashboard-like view of all ongoing projects and deliveries that Pipedrive has to offer.
P.S. Pipedrive provides an API that would make it possible to automate the transition of a deal from the Sales Pipeline to the Project Pipeline. I would be very happy to hear your thoughts on whether this could be implemented with the existing API commands. Please share!
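As a starting point, here is a rough Python sketch of that automation against Pipedrive's v1 REST API. It is untested and hedged: the duplicate, merge and update endpoints are used as I understand them from the API reference, YOUR_API_TOKEN and the stage ID are placeholders, and whether the API allows merging or reopening a deleted deal is something to verify against your own account.

import requests

API = "https://api.pipedrive.com/v1"
AUTH = {"api_token": "YOUR_API_TOKEN"}  # placeholder token

def promote_won_deal(deal_id, preparation_stage_id):
    """Mirror the seven manual steps after a deal is won.

    preparation_stage_id is the ID of the Preparation stage in the
    Project Pipeline; in Pipedrive, moving a deal to a stage also
    moves it to that stage's pipeline.
    """
    # 1. Duplicate the deal in the Sales Pipeline
    r = requests.post(f"{API}/deals/{deal_id}/duplicate", params=AUTH)
    copy_id = r.json()["data"]["id"]

    # 2. Delete the original deal
    requests.delete(f"{API}/deals/{deal_id}", params=AUTH)

    # 3.-4. Merge the (deleted) original into the copy; the copy's
    # values win on conflict. Whether the API permits merging a
    # deleted deal is an assumption to verify.
    requests.put(f"{API}/deals/{deal_id}/merge", params=AUTH,
                 json={"merge_with_id": copy_id})

    # 5.-6. Reopen the original deal, then mark it as won so the
    # Sales Pipeline statistics stay intact
    requests.put(f"{API}/deals/{deal_id}", params=AUTH, json={"status": "open"})
    requests.put(f"{API}/deals/{deal_id}", params=AUTH, json={"status": "won"})

    # 7. Move the copy to the Project Pipeline's Preparation stage
    requests.put(f"{API}/deals/{copy_id}", params=AUTH,
                 json={"stage_id": preparation_stage_id})

You could hook such a function to a Pipedrive webhook on deal status changes, so the whole transition runs automatically whenever a deal is marked as won.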
ドローン部週報㉒ (Drone Club Weekly Report #22)
Florence: Day 1. First day in Florence we had some great…

Gelato at Venchi’s
We were trying to find a good gelato place and came across Venchi’s, which we thought was the one we were thinking of (it started with a V). The gelato was super good. My niece dropped her cone, and it was a quintessential “kid dropping their ice cream” moment, where they kind of stare at it with a pouty face, ready to cry. One of the workers saw what happened and gave her a new gelato on the house. How nice!
We learned later that this wasn’t the place we had in mind (it was actually Vivaldi), but Venchi’s was also one of the top gelato places in Florence. You’ll have to wait to see what we crowned the king of gelato.
Ponte Vecchio
After eating our gelato we wanted to go back and see the Ponte Vecchio, the bridge-slash-market connecting the two banks of Florence. It’s a really cool bridge that hosts a jewelry market during the day. In the evening it’s bustling with tourists trying to take photos of the two banks and the surrounding hills of Florence.
Palazzo Pitti
We crossed the Ponte Vecchio and rested in a large plaza at the Palazzo Pitti, which marks the entry into the Boboli Gardens. We sat there for a spell and then decided we were all tired and should go to bed. It was a pretty full half day!
My Secret to 31 Years of Marriage to my Perfectly Imperfect Husband

What You Should Know Before you Marry
Photo by One zone Studio on Unsplash
Regardless of all the beautiful wedding pictures like the one featured above, the truth is: there are no perfect marriages. Period.
This is because there are no perfect people. Not one.
Since there are no perfect people, there obviously cannot be any perfect marriages. This is not rocket science.
So before you marry. My sound advice to you is to completely let go of the “our marriage will be perfect” idea. It is not true. It will not happen. Because real-life marriage is no fairytale.
Trust me. I have been married to the same man (my college sweetheart) for 31 years, 3 months, and 25 days. And although I am happy to say that, after all these years, we both agree our marriage falls into the category of being a good one, it has been far from perfect.
Beginning Before You Begin

At the end of a three-day meditation retreat where I had been contemplating my purpose and next steps, Fr. Daniel Renaud, OMI, began his Sunday liturgy with an idea that struck me: “You are beginning before you begin.” This is what I took away from his talk: whether we realize it or not, each action we take has a beginning before the beginning, a seeding or incubation period. Moments lead to other moments, which in turn lead to other moments. As I’m nearing the end of this year’s self-imposed writing challenge (of which this post is part), I realize that this project fits Renaud’s description. It has been my beginning before I begin. It’s still to be determined whether the projects that I begin in the new year will be the actual beginning, but this insight feels significant.
I have been contemplating my creative process these past few weeks, discovering new insights and uncovering deeper truths. I have unpacked my sacred (Chiron) wound a bit more in-depth to understand that its core revolves around my natural-born creativity. Not only did I discover this creativity as my core wound and hence my vocation, I was also led to better understand how it shows up in my life and how I have been unintentionally suppressing it. Here is what I’ve learned:
I am happiest when I’m involved in a creative project that excites me. I need to be inspired by my work and I need to feel like I’m expanding my gifts and learning new things. The worst thing for me is to allow myself to slip into sluggishness or to be trudging away at something that gives me little or no inspiration.
However (and this is my biggest aha), since my core self is creative, it follows a natural ebb and flow that I can’t control. They say the shadow of creativity is entropy, or a “measure of disorder or unavailable energy within a closed system.” The manifestation of this shadow can feel like numbness or a sense of gloom, but it’s really a fertile state within which creativity can occur. It is in effect our system recharging. So, if we are patient enough to allow the process to complete without trying to analyze, repress, or fix it, there’s a distinct possibility that something special will germinate inside of us.
Most depressive states occur when we resist this phase, or our mind turns inward on itself. Had I understood more about this when I was younger, it may have saved me a lot of misery. Instead, I fell into periods of overwhelming melancholy — a cycle which fed off itself for far too long. The agony of my depression led me to fighting my very nature to escape it. Not to mention it kept me from germinating any creative ideas of significance. This not only affected my joy; it also deeply affected my relationship with myself and others. I was not secure enough in who I was to even be comfortable in my own skin, let alone to identify or follow my passion. Rejecting my true nature, I was always trying to find myself outside of myself. I’ve written before about how often I have gotten caught up in measuring my success by other people’s standards, trying to validate my worth externally. When you are cut off from yourself, what is there to validate? Because I was cut off from my very nature, I had no proof of my creativity, and that had the effect of further leading me to the conclusion that I wasn’t creative.
I have always been attracted to creative people and have felt small in their presence. I was mostly attracted to their love of creating, their absorption, their need to create. Whereas I only felt the entropy of my creative abilities. Yet I have, though often too fleetingly, felt the happiness of the creative impulse. Now that I’ve figured out that it’s something I have to consciously nurture and not unconsciously repress, I’ve been feeling its impulse more and more. And that’s exciting. I feel like I’m finally coming home to my true nature. I’m basking in its radiance and feeling its unconditional love. Now that I’m feeling the entirety of the creative process drawing me in, I’m beginning to feel more confident that I’m on the right path.
Pulsar Advantages Over Kafka

Introduction
Recently, I’ve been looking at Pulsar and how it compares to Kafka. A quick search will show you that there is a current “war” between the two most famous open source messaging systems.
As a Kafka user, I do struggle with some of the issues with Kafka, and I’m very excited about Pulsar. So finally, I managed to find some time to play around with it, and I did quite a lot of research. In this article, I will focus on Pulsar’s advantages and give you some reasons why you should consider it over Kafka. But let’s be clear: in terms of production usage, support, community, documentation, etc., Kafka clearly surpasses Pulsar, and I would only consider Pulsar if most of the advantages discussed in this article hold true for your use case. Let’s begin!
Kafka in a Nutshell
Kafka is the king of messaging systems. Created by LinkedIn in 2011, it has spread widely thanks to the support of Confluent, which has released many new features and add-ons to the open source community, such as Schema Registry for schema evolution, Kafka Connect for easy streaming from other data sources such as databases into Kafka, Kafka Streams for distributed stream processing, and most recently KSQL for performing SQL-like querying over Kafka topics. It also has connectors to many systems; check Confluent Platform for more details.
Kafka is fast, easy to set up, extremely popular, and can be used for a wide range of use cases. While Apache Kafka has always been friendly from a developer’s point of view, it has been something of a mixed bag operationally. So, let’s review some of the pain points of Kafka.
Kafka pain points
Scaling Kafka is tricky because of the coupled architecture where brokers also store data. Spinning up another broker means it has to replicate topic partitions and replicas, which is time consuming.
No native multi-tenancy with complete isolation of tenants.
Storage can become quite expensive, and although you can store data for a long period of time, it is rarely used because of the cost implications.
It is possible to lose messages if replicas are out of sync.
You must plan and calculate the number of brokers, topics, partitions and replicas ahead of time (to fit planned future usage growth) to avoid scaling problems; this is extremely difficult.
Working with offsets can be complicated if you just need a messaging system.
Cluster rebalancing can impact the performance of connected producers and consumers.
The MirrorMaker geo-replication mechanism is problematic. Companies such as Uber have created their own solutions to overcome these issues.
As you can see, most of the problems are related to the operational aspects. Although it is relatively easy to set up, Kafka is difficult to manage and tune. Also, it is not quite as flexible and resilient as it could be.
Pulsar in a Nutshell
Pulsar was created by Yahoo in 2013 and donated to the Apache Software Foundation in 2016; it is now an Apache top-level project. Yahoo, Verizon, and Twitter, among other companies, use it in production to process millions of messages. It has many features and is very flexible. It claims to be faster than Kafka, and hence cheaper to run. It aims to solve most of Kafka’s pain points, making it easier to scale.
Pulsar is very flexible: it can act as a distributed log like Kafka or as a pure messaging system like RabbitMQ. It has multiple types of subscriptions, several delivery guarantees, retention policies, and several ways to deal with schema evolution. It also has a long list of features…
Multi-tenancy is built in: different teams can use the same cluster and remain isolated. This solves many administration headaches. It supports isolation, authentication, authorization and quotas.
Multi-tier architecture: Pulsar stores all topic data in a specialized data layer powered by Apache BookKeeper as data ledgers. Separating storage from messaging solves many issues with scaling, rebalancing and maintaining the cluster. It also improves reliability and makes it almost impossible to lose data. Also, when reading the data, you can connect directly to BookKeeper without affecting real-time ingestion. For example, you can use Presto to execute SQL queries on your topics, similar to KSQL, but with the peace of mind that this will not affect real-time data processing.
Virtual topics. Because of the n-tier architecture, there is no limitation on the number of topics; topics and their storage are decoupled. You can also create non-persistent topics.
N-tier storage. One problem with Kafka is that storage can become expensive, so it is rarely used to store “cold” data and messages are often deleted. Apache Pulsar, through tiered storage, can automatically move older data to Amazon S3 or any other deep storage system, and still present a transparent view back to the client; the client can read from the start of time just as if all of the messages were present in the log.
Pulsar Functions. Easy-to-deploy, lightweight compute processes with developer-friendly APIs; no need to run your own stream processing engine as with Kafka.
Security: It has a built-in proxy, multi-tenant security, pluggable authentication and much more.
Fast rebalancing. Partitions are split into segments that are easy to rebalance.
Server-side deduplication and dead lettering. No need to do this in the client; deduplication can also be done during compaction.
Built-in schema registry. Supports multiple strategies and is very easy to use.
Geo-replication and built-in discovery. It is very easy to replicate your cluster to multiple regions.
Integrated load balancer and Prometheus metrics.
Multiple integrations: Kafka, RabbitMQ and much more.
Support for many programming languages such as Go, Java, Scala, Node and Python…
Clients do not need to be aware of shards and data partitioning; this is handled transparently on the server side.
List of features: https://pulsar.apache.org/
As you can see, Pulsar has lots of interesting features.
Pulsar Hands-on
It is quite easy to get started with Pulsar. Make sure you have the JDK installed!
Download Pulsar and unzip it:
$ wget https://archive.apache.org/dist/pulsar/pulsar-2.6.1/apache-pulsar-2.6.1-bin.tar.gz
2. Download the connectors (optional):
$ wget https://archive.apache.org/dist/pulsar/pulsar-2.6.1/connectors/{connector}-2.6.1.nar
3. After you download the .nar file, copy it to the connectors directory inside the Pulsar directory.
4. Start Pulsar!
$ bin/pulsar standalone
Pulsar provides a CLI tool called pulsar-client that we can use to interact with the cluster.
To produce a message:
$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
To read a message:
$ bin/pulsar-client consume my-topic -s "first-subscription"
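The same round trip can also be scripted with the official Python client. A minimal sketch, assuming you have installed the client library (pip install pulsar-client) and the standalone broker is listening on its default port 6650:

import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# Produce the same message as the CLI example above
producer = client.create_producer('my-topic')
producer.send('hello-pulsar'.encode('utf-8'))

# Consume it using the same subscription name
consumer = client.subscribe('my-topic', subscription_name='first-subscription')
msg = consumer.receive()
print(msg.data().decode('utf-8'))
consumer.acknowledge(msg)  # acknowledge so the message is not redelivered

client.close()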
Akka Streams Example
As a client example, let’s use Pulsar4s with Akka!
First we need to create a Source to consume the stream of data. All that is required is a function that creates a consumer on demand and the message ID to seek:
val topic = Topic("persistent://standalone/mytopic")
val subscription = Subscription("mysub") // any subscription name
val consumerFn = () => client.consumer(ConsumerConfig(topic, subscription))
Then we pass the consumerFn function to create the source:
import com.sksamuel.pulsar4s.akka.streams._
val pulsarSource = source(consumerFn, Some(MessageId.earliest))
The materialized value of the Akka source is an instance of Control, an object which provides a 'close' method that can be used to stop consuming messages. Now we can process the data with Akka Streams as usual.
To create a sink:
val topic = Topic("persistent://standalone/mytopic")
val producerFn = () => client.producer(ProducerConfig(topic))

import com.sksamuel.pulsar4s.akka.streams._
val pulsarSink = sink(producerFn)
Putting the source and sink together into a complete program:
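What follows is a sketch reconstructed from the snippets above rather than copied from the Pulsar4s project; treat the client setup, the implicit Schema, and the exact ConsumerConfig argument order as assumptions that may vary between Pulsar4s versions.

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.sksamuel.pulsar4s._
import com.sksamuel.pulsar4s.akka.streams._
import org.apache.pulsar.client.api.Schema

object ConsumeTransformProduce extends App {
  implicit val system: ActorSystem = ActorSystem()
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  implicit val schema: Schema[Array[Byte]] = Schema.BYTES

  val client = PulsarClient("pulsar://localhost:6650")

  val inTopic = Topic("persistent://standalone/mytopic")
  val outTopic = Topic("persistent://standalone/mytopic-out")

  val consumerFn = () => client.consumer(ConsumerConfig(inTopic, Subscription("mysub")))
  val producerFn = () => client.producer(ProducerConfig(outTopic))

  // Read from the earliest message, copy each payload, write it to the output topic
  val control = source(consumerFn, Some(MessageId.earliest))
    .map(msg => ProducerMessage(msg.data))
    .to(sink(producerFn))
    .run()

  // The materialized Control can later be used to stop consuming: control.close()
}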
Pulsar Function Example
Pulsar Functions process messages from one or more topics, transform them, and output the results to a different topic.
You can choose between two interfaces to write your functions:
Language-native interface: No Pulsar-specific libraries or special dependencies required. You cannot access the context. Only Java and Python are supported.
Pulsar Function SDK: Available for Java/Python/Go and provides more functionality, including access to the context object.
Using the language-native interface is very easy; you can just write a simple function that transforms the message:
def process(input):
return "{}!".format(input)
This simple function written in Python just adds an exclamation point to all incoming strings and publishes the resulting string to a topic.
To use the SDK you need to import the dependencies; for example, in Go we would write:
package main

import (
	"context"
	"fmt"

	"github.com/apache/pulsar/pulsar-function-go/pf"
)

func HandleRequest(ctx context.Context, in []byte) error {
	fmt.Println(string(in) + "!")
	return nil
}

func main() {
	pf.Start(HandleRequest)
}
To publish the serverless function and deploy it to the cluster, we use the pulsar-admin CLI. In the case of Python, we would use:
$ bin/pulsar-admin functions create \
--py ~/router.py \
--classname router.RoutingFunction \
--tenant public \
--namespace default \
--name route-fruit-veg \
--inputs persistent://public/default/basket-items
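The router.RoutingFunction class referenced by --classname is not shown in the post. Using the Python SDK, it might look like the following sketch; the output topic is a hypothetical example:

from pulsar import Function

class RoutingFunction(Function):
    def process(self, input, context):
        # Log each incoming basket item, then forward it to a (hypothetical) output topic
        context.get_logger().info("routing item: {}".format(input))
        context.publish("persistent://public/default/routed-items", input)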
A great feature of Pulsar Functions is that you can set the delivery guarantee when you publish the function:
$ bin/pulsar-admin functions create \
--name my-effectively-once-function \
--processing-guarantees EFFECTIVELY_ONCE
You have the following options: ATMOST_ONCE, ATLEAST_ONCE and EFFECTIVELY_ONCE.
Pulsar Functions run on your Pulsar cluster, so you don’t have to manage their deployments, as opposed to Kafka Streams applications.
Pulsar Advantages
Let’s review the main advantages over Kafka:
More features: Pulsar Functions, multi-tenancy, schema registry, n-tier storage, multiple consumption modes and persistence modes, etc.
More flexibility: 3 types of subscriptions (exclusive, shared and failover); you can listen to multiple topics on one subscription. Durability options: non-persistent (fast), persistent, compacted (only last key per message). You can choose the delivery guarantee, and it has server-side deduplication and dead lettering. Many retention policies and TTL.
No need to define your scaling needs ahead of time.
Supports queuing and streaming, so it can act like RabbitMQ or Kafka.
It scales better because storage is separated from the brokers. Rebalancing is faster and more reliable.
Easier to operate: Thanks to the decoupling and the n-tier storage. Also, the admin REST API is great.
SQL integration with Presto that queries the storage directly without affecting the brokers (see the example after this list).
Cheaper storage thanks to the n-tier automated storage options.
Faster: Many benchmarks have shown better performance for a wide range of scenarios. Pulsar claims to have lower latency and better scaling capabilities. However, this is being challenged by Confluent, so take it with a grain of salt and do your own benchmarks.
Pulsar Functions bring serverless computing to your messaging platform. No need to manage the deployments.
Integrated schema registry supporting easy schema evolution.
Integrated load balancer and Prometheus metrics.
Geo-replication works better and is easier to set up. Pulsar also has built-in discoverability.
No limit on the number of topics you can create.
Compatible with Kafka, easy to integrate.
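Here is the SQL example promised in the list above. On a standalone cluster, you start a Pulsar SQL worker and then query a topic through the bundled Presto shell; the commands follow the Pulsar SQL docs of this era, and the topic address assumes the default public/default namespace used earlier:

$ bin/pulsar sql-worker run
$ bin/pulsar sql
presto> SELECT * FROM pulsar."public/default"."my-topic";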
Pulsar Cons
Pulsar is not perfect. Kafka is popular for a reason: it does one thing and does it well. Pulsar tries to tackle too many fields and fails to excel at any of them. Let’s summarize some of the problems with Pulsar:
Popularity: Pulsar is not as popular. It lacks support, documentation and real-world usage. This is a major problem for big organizations.
It requires more components because of the n-tier architecture: BookKeeper.
No proper support for streaming applications within the platform. Pulsar Functions are not the same as Kafka Streams; they are much simpler and are not meant for real-time stream processing. You cannot do stateful processing.
Fewer plugins and clients compared to Kafka. Also, fewer people are available with Pulsar skills, so it needs to be learned in-house.
It has less support in the cloud. Confluent has a managed cloud offering.
Confluent has published a comparison between Pulsar and Kafka where you can go into more detail. This blog also answers some of the questions regarding Kafka vs Pulsar, but be aware it may be biased.
Pulsar Use Cases
Pulsar can be used for a wide range of use cases:
Pub/sub queue messaging
Distributed Log
Event sourcing ledger for permanent event storage
Microservices
SQL Analytics
Serverless functions
When you should consider Pulsar
You need both queues like RabbitMQ and stream processing like Kafka.
You need easy geo-replication.
Multi-tenancy is a must-have and you want to secure access for each of your teams.
You need to persist all your messages for a long time and you don’t want to offload them to another storage system.
Performance is critical for you and your benchmarks have shown that Pulsar provides lower latency and higher throughput.
You run on-prem and you don’t have experience setting up Kafka but you have Hadoop experience.
Note that if you are in the cloud, consider cloud-based solutions. Cloud providers offer different services that cover some of these use cases. For example, for queue messaging, cloud providers offer services like Google Pub/Sub. For a distributed log, you have Confluent Cloud or AWS Kinesis. Cloud providers also provide really good security. The advantage of Pulsar is that it provides many features in a single platform. Some teams may use it as a messaging system for microservices, while others use it as a distributed log for data processing.
Conclusion
I’m a big fan of Kafka; that’s why I’m so interested in Pulsar. Competition is good: it drives innovation.
Kafka is a mature, resilient, and battle-tested product used all over the world with great success. I cannot imagine any company without it. However, I do see Kafka as a victim of its own success: the huge growth has slowed down feature development, since they need to support so many big players. Important features like removing the ZooKeeper dependency are taking too long. This has created room for tools such as Pulsar to thrive, fixing some of the issues with Kafka and adding many more features.
However, Pulsar is still quite immature and I would be careful before introducing it into production. Perform analysis, run benchmarks, do research and write proofs of concept before incorporating Pulsar into your organization. Start small, do a proof of concept before migrating from Kafka, and measure the impact before deciding on a full migration.
I’m a Big Tech Company. Here’s Why I’m Working Things Out Instead of Breaking Up

Photo by Christiana Rivers on Unsplash
“You have someone like Elizabeth Warren who thinks that the right answer is to break up the companies… look, at the end of the day, if someone’s going to try to threaten something that existential, you go to the mat and you fight.” — Mark Zuckerberg
As a big tech company, I know I look glamorous from the outside — glossy, carefree and deeply in love with myself. But nothing, even me, is perfect. It’s painful to admit, but at my lowest point, I even considered breaking up.
The cracks started small — personal data sold to political consultants here, birther memes aggressively promoted to digitally illiterate users there. Nothing a mature company couldn’t handle. I pasted on a smile, but some days I could barely look myself in the security cameras.
Then came the moment that changed everything: the Justice Department backed a lawsuit against me for allowing housing providers to target ads to users of specific ethnicities, genders, and religions. I was horrified. If I supported discrimination for who knows how long, what wasn’t I capable of?
Sobbing, I confronted myself by the jellybean pit. It was only 9 PM and the campus was packed, but I didn’t care who saw. How could I betray myself like this? This wasn’t like the deletion-doesn’t-mean-deletion thing, or the exacerbating-depression-as-an-experiment thing — this time there would be consequences.
I stammered out something about setting up an independent panel to hear and amplify appeals from users. Typical me bullshit! Devastated, I told myself off again and fled to my sunniest tax haven for a well-deserved rum punch or three.
I needed advice. I called the smartest woman I knew: Massachusetts Senator Elizabeth Warren. “Screw you,” she replied. “You can’t let yourself continue to engage in illegal anticompetitive practices, stomp on consumer privacy rights, and repeatedly fumble your responsibility to protect our democracy.”
Elizabeth validated my anger. Who the hell did I think I was? For the first time, I let myself seriously consider breaking up. I’ll admit it was a little exciting. I daydreamed about being free to merge with whoever I wanted, and to hell with what the Securities and Exchange Commission might think.
Who would get the subsidiaries? I couldn’t bear to think about them watching me rifle through assets, coldly divesting — color palettes here, infrastructure for routing European revenues through low-tax Ireland there. Haggling over memories.
I thought about my friends who had broken up — Standard Oil, Northern Securities, American Tobacco. Were they happier? Their drooping or defunct stock charts whispered no.
Elizabeth only wanted the best for me. But ultimately, as I and 59 lobbyists explained to her, this was a decision I had to make on my own.
My phone rang. “Don’t go,” I heard myself say. “With the additional overhead of compliance, how would I fund innovations like this?” Then came an animation of a “like” springing into the air and rocketing offscreen, accompanied by other, smaller likes.
I laughed, but it was true. There’s a lot I can only do together. I don’t even sneeze in a new zip code without a 20 year property tax exemption. I buy every startup that blinks at me, dissolve it and set the founders to work optimizing web component margin sizes. I change the font on my recruiting homepage and the Dow tumbles 500 points.
On the rainswept helipad back home in Menlo Park, I held a finger to my eager lips. “For once, just listen. You will commit to transparency going forward, and you will work closely with users and regulators to ensure that the community works for everyone.”
“What does that mean?”
“I’ve never known.”
My lips met.
Years later, I look back at what I’ve built — puppy-themed selfie filters, Brain Two, the merger with Texas — and I can’t believe I came so close to throwing it all away. I’m stronger for what I’ve been through, and I’m here to stay.
I’m good on my own, but together? I’m unstoppable.
The CIA’s War On WikiLeaks Founder Julian Assange

(Image: Lance Page / t r u t h o u t; Adapted: public domain / Wikimedia)
On behalf of the Central Intelligence Agency, a Spanish security company called Undercover Global spied on WikiLeaks founder Julian Assange while he was living in the Ecuador embassy in London.
The Spanish newspaper El Pais reported on September 25 that the company’s CEO David Morales repeatedly handed over audio and video. When cameras were installed in the embassy in December 2017, “Morales requested that his technicians install an external streaming access point in the same area so that all of the recordings could be accessed instantly by the United States.”
Technicians planted microphones in the embassy’s fire extinguishers, as well as the women’s bathroom, where Assange held regular meetings with his lawyers — Melynda Taylor, Jennifer Robinson, and Baltasar Garzon.
Morales’ company was hired by Ecuador, but Ecuador apparently had no idea that Morales had formed a relationship with the CIA.
The world laughed at Assange when a book by David Leigh and Luke Harding reported that he once dressed as an old woman because he believed CIA agents were following him. It doesn’t seem so absurd now.
A Tremendous Coup for the CIA
WikiLeaks founder Julian Assange as he was expelled from Ecuador embassy. Screenshot of Ruptly coverage.
Julian Assange was expelled from the embassy and arrested by British authorities on April 11. It was subsequently revealed that the U.S. Justice Department had indicted him on a charge of conspiracy to commit a computer crime, and in May a superseding indictment charged him with several violations of the Espionage Act.
He became the first journalist to be indicted under the 1917 law, which was passed to criminalize “seditious” conduct during World War I.
The WikiLeaks founder was incarcerated at Her Majesty’s Prison Belmarsh in London. A court found him guilty of violating bail conditions when he sought political asylum from Ecuador in 2012. He was sentenced to 50 weeks in prison. But following his sentence, authorities refused to release him. They decided Assange should remain in the facility until a February hearing, where the U.S. government will argue for his extradition.
The expulsion, arrest, and jailing of Assange represented a tremendous coup for the CIA, which views WikiLeaks as a “hostile intelligence service.”
“It is time to call out WikiLeaks for what it really is — a non-state hostile intelligence service often abetted by state actors like Russia,” Mike Pompeo declared in April 2017, when he was CIA director.
“Julian Assange and his kind are not the slightest bit interested in improving civil liberties or enhancing personal freedom. They have pretended that America’s First Amendment freedoms shield them from justice. They may have believed that, but they are wrong.”
Pompeo added, “Assange is a narcissist who has created nothing of value. He relies on the dirty work of others to make himself famous. He is a fraud — a coward hiding behind a screen. And in Kansas [Pompeo was a representative from Kansas], we know something about false wizards.”
Unwanted Scrutiny
The CIA’s loathing for Assange stems from the fact that the dissident media organization exposed the agency to unwanted scrutiny for its actions numerous times.
In 2010, WikiLeaks published two Red Cell memos from the CIA. One memo from March 2010 outlined “pressure points” the agency could focus upon to sustain western European support for the Afghanistan War. It brazenly suggested “public apathy enables leaders to ignore voters” because only a fraction of French and German respondents identified the war as “the most urgent issue facing their nation.”
The second memo from February 2010 examined what would happen if the U.S. was viewed as an incubator and “exporter of terrorism.” It warned, “Foreign partners may be less willing to cooperate with the United States on extrajudicial activities, including detention, transfer [rendition], and interrogation of suspects in third party countries.”
“If foreign regimes believe the U.S. position on rendition is too one-sided, favoring the U.S. but not them, they could obstruct U.S. efforts to detain terrorism suspects. For example, in 2005 Italy issued criminal arrest warrants for U.S. agents involved in the abduction of an Egyptian cleric and his rendition to Egypt. The proliferation of such cases would not only challenge U.S. bilateral relations with other countries but also damage global counterterrorism efforts,” the February memo added.
On these memos, which were disclosed by U.S. military whistleblower Chelsea Manning, she said, “The content of two of these documents upset me greatly. I had difficulty believing what this section was doing.”
CIA Renditions Further Exposed
More than 250,000 diplomatic cables from the U.S. State Department, largely from the period of 2003–2010, were provided by Manning to WikiLeaks. There were several that brought unwanted scrutiny to the CIA.
The CIA abducted Khaled el-Masri in 2003. He was beaten, stripped naked, violated by a suppository, chained spread-eagled on an aircraft, injected with drugs, and flown to a secret CIA prison in Kabul known as the “Salt Pit.” El-Masri was tortured and eventually went on hunger strike, which led to personnel force-feeding him. He was released in May 2004, after the CIA realized they had the wrong man.
Cables showed the pressure the U.S. government applied to German prosecutors and officials so 13 CIA agents, who were allegedly involved in el-Masri’s abduction, escaped accountability. They were urged to “weigh carefully at every step of the way the implications for relations.”
Pressure was also applied to prosecutors and officials in Germany. They feared that magistrate Baltasar Garzón, who is now one of Assange’s attorneys, would investigate CIA rendition flights.
The cache of documents brought attention to Sweden’s decision to curtail CIA rendition flights after Swedish authorities realized stopovers were made at Stockholm’s Arlanda International Airport.
During the “Arab Spring,” cables from Egypt showed Omar Suleiman, the former intelligence chief who Egyptian president Hosni Mubarak selected as his potential successor, highlighted his collaboration with the CIA. Suleiman oversaw the rendition and torture of dozens of detainees. Abu Omar, who was kidnapped by the CIA in Milan in 2003, was tortured when Suleiman was intelligence chief.
The world also learned that the CIA drew up a “spying wishlist” for diplomats at the United Nations. The list targeted UN Secretary General Ban Ki-moon and other senior members. The agency sought “foreign diplomats’ internet user account details and passwords,” as well as “biometric” details of “current emerging leaders and advisers.” It was quite an embarrassing revelation for the CIA.
As cables spread in the international media, the CIA launched the WikiLeaks Task Force to assess the impacts of the disclosures.
Documents revealed by NSA whistleblower Edward Snowden showed during this same period the security agencies had a “Manhunting Timeline” for Assange. They pressured Australia, Britain, Germany, Iceland, and other Western governments to concoct a prosecution against him.
Several NSA analysts even wanted WikiLeaks to be designated a “malicious foreign actor” so the organization and its associates could be targeted with surveillance, an attitude likely supported by CIA personnel.
‘We Look Forward To Sharing Great Classified Info About You’
The CIA joined Twitter in June 2014. WikiLeaks welcomed the CIA by tweeting at the agency, “We look forward to sharing great classified info about you.” They shared links to the Red Cell memos and a link to a search for “CIA” documents in their website’s database.
By December, the media organization published a CIA report on the agency’s “high value target” assassination program. It assessed attacks on insurgent groups in Afghanistan, Algeria, Chechnya, Colombia, Iraq, Israel, Libya, Northern Ireland, Pakistan, Peru, Sri Lanka, and Thailand.
The review acknowledged such operations, which include drone strikes, “increase the level of insurgent support,” especially if the strikes “enhance insurgent leaders’ lore, if noncombatants are killed in the attacks, if legitimate or semilegitimate politicians aligned with the insurgents are targeted, or if the government is already seen as overly repressive or violent.”
WikiLeaks also released two internal CIA documents from 2011 and 2012 detailing how spies should elude secondary screenings at airports and maintain their cover. The CIA was concerned that the Schengen Area — ”a group of 26 European countries that have abolished passport control at shared borders” — would makie harder for operatives because they planned to subject travelers to biometric security measures.
After CIA director John Brennan had his personal AOL account by hackers, the contents were provided to WikiLeaks for a series of publications that took place in October 2015.
Julian Assange. Photo by Ministerio de Cultura de la Nación Argentina (culturaargentina) on Flickr.
U.S. Intelligence Steps Up Effort To Discredit WikiLeaks
As Democratic presidential candidate Hillary Clinton campaigned against President Donald Trump, WikiLeaks published emails from John Podesta, chairman for the Clinton campaign.The national security establishment alleged the publication was part of a Russian plot to interfere in the 2016 election.
Assange held a press conference in January 2017, where he countered, “Even if you accept that the Russian intelligence services hacked Democratic Party institutions, as it is normal for the major intelligence services to hack each others’ major political parties on a constant basis to obtain intelligence,” you have to ask, “what was the intent of those Russian hacks? And do they connect to our publications? Or is it simply incidental?”.
“The U.S. intelligence community is not aware of when WikiLeaks obtained its material or when the sequencing of our material was done or how we obtained our material directly. So there seems to be a great fog in the connection to WikiLeaks,” Assange contended.
He maintained, “As we have already stated, WikiLeaks sources in relation to the Podesta emails and the DNC leak are not members of any government. They are not state parties. They do not come from the Russian government.”
“The [Clinton campaign] emails that we released during the election dated up to March [2016]. U.S. intelligence services and consultants for the DNC say Russian intelligence services started hacking DNC in 2015. Now, Trump is clearly not on the horizon in any substantial manner in 2015,” Assange added.
Yet, in the information war between WikiLeaks and the U.S. government, Brennan responded during an appearance on PBS’ “NewsHour.” “[Assange is] not exactly a bastion of truth and integrity. And so therefore I wouldn’t ascribe to any of these individuals making comments that [they are] providing the whole unvarnished truth.”
Special Counsel Robert Mueller oversaw a wide-ranging investigation into alleged Russian interference in the 2016 election. The report, released in April 2019, did not confirm, without a doubt, that Russian intelligence agents or individuals tied to Russian intelligence agencies passed on the emails from the Clinton campaign to WikiLeaks.
CIA Loses Control Of Largest Batch Of Documents Ever
Mike Pompeo, CIA director from January 2017 to April 2018 (Photo: U.S. Government)
In February 2017, WikiLeaks published “CIA espionage orders” that called attention to how all of the major political parties in France were “targeted for infiltration” in the run-up to the 2012 presidential election.
The media organization followed that with the “Vault 7” materials — what they described as the “largest ever publication of confidential documents on the agency.” It was hugely embarrassing for the agency.
“The CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation,” WikiLeaks declared in a press release. “This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.”
“The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive,” WikiLeaks added.
Nearly 9,000 documents came from “an isolated, high-security network inside the CIA’s Center for Cyber Intelligence.” (WikiLeaks indicated the espionage orders published in February were from this cache of information.)
The publication brought scrutiny to the CIA’s “fleet of hackers,” who targeted smartphones and computers. It exposed a program called “Weeping Angel” that made it possible for the CIA to attack Samsung F8000 TVs and convert them into spying devices.
As CNBC reported, the CIA had 14 “zero-day exploits,” which were “software vulnerabilities” that had no fix yet. The agency used them to “hack Apple’s iOS devices such as iPads and iPhones.” Documents showed the “exploits were shared with other organizations including the National Security Agency (NSA) and GCHQ, another U.K. spy agency. The CIA did not tell Apple about these vulnerabilities.”
WikiLeaks additionally revealed that CIA targeted Microsoft Windows, as well as Signal and WhatsApp users, with malware.
The CIA responded, “The American public should be deeply troubled by any Wikileaks disclosure designed to damage the intelligence community’s ability to protect America against terrorists and other adversaries. Such disclosures not only jeopardize U.S. personnel and operations but also equip our adversaries with tools and information to do us harm.”
But the damage was done. The CIA was forced to engage with the allegations by insisting the agency’s activities are “subject to oversight to ensure that they comply fully with U.S. law and the Constitution.” Apple, Samsung, and Microsoft took the disclosures very seriously.
Assange attempted to force a public debate that high-ranking CIA officials did not want to have.
“There is an extreme proliferation risk in the development of cyber ‘weapons,’ Assange stated. Comparisons can be drawn between the uncontrolled proliferation of such ‘weapons,’ which results from the inability to contain them combined with their high market value, and the global arms trade. But the significance of “Year Zero” goes well beyond the choice between cyberwar and cyberpeace.”
(Note: Josh Schulte, a former CIA employee, was charged with violating the Espionage Act when he allegedly disclosed the files to WikiLeaks. He was jailed at the Metropolitan Correctional Center in New York.)
CIA Exploits New Leadership In Ecuador
Lenín Moreno was elected president of Ecuador in May 2017. At the time, the U.S. Justice Department had essentially abandoned their grand jury investigation into WikiLeaks. President Barack Obama’s administration declined to pursue charges against Assange. But officials in the national security apparatus recognized a political shift in Ecuador and exploited it.
By December, the CIA was able to fight back against Assange and WikiLeaks by installing spying devices in the Ecuador embassy.
Former CIA officer John Kiriakou contended, “The attitude at the CIA is that he really did commit espionage. This isn’t about freedom of speech or freedom of the press because they don’t care about freedom of speech or freedom of the press. All they care about is controlling the flow of information and so Julian was a threat to them.”
Recall, as the Senate intelligence committee compiled a study on the CIA’s rendition, detention, and interrogation program, the CIA flouted restrictions on domestic spying and targeted Senate staff. Personnel even hacked into Senate computers.
“The CIA likes nothing more than being able to operate unfettered,” Kiriakou further declared.
He also commented, “[Moreno] did the CIA’s bidding. I have no idea why he would do such a thing, but he was the perfect person to take over the leadership of Ecuador at exactly the time that the CIA needed a friend there.”
As 2018 progressed, restrictions imposed by the Ecuador government on what Assange was allowed to do on the internet and in his daily work for WikiLeaks intensified.
A doctor named Sondra Crosby, who evaluated Assange’s health on February 23, described the embassy surveillance she experienced during her visit. She left the embassy at one point to pick up some food and returned to the room where they were meeting to find her confidential medical notes were taken. She found her notes “in a space utilized by embassy surveillance staff” and presumed they were read, a violation of doctor-patient confidentiality.
Forcing the removal of Assange from the embassy was a major victory for the CIA, and if prosecutors win his extradition to the United States, the agency will have a hand in how the trial unfolds. | https://medium.com/discourse/the-cias-war-on-wikileaks-founder-julian-assange-4a26b78fa042 | ['Kevin Gosztola'] | 2019-10-07 12:44:03.624000+00:00 | ['Politics', 'Wikileaks', 'News', 'CIA', 'Journalism'] |
Bitcoin Facing Gold And Fiat Currencies On 10 Essential Properties Of Money | Bitcoin Facing Gold And Fiat Currencies On 10 Essential Properties Of Money
The best way for you forming your own opinion on Bitcoin.
Qualified as digital gold or gold 2.0, Bitcoin is very often compared to gold due to its scarcity but also because of its process for creating new Bitcoins that is similar to gold mining. Since Bitcoin aims to build a better monetary and financial system for the future, it is also naturally compared frequently with the current monetary and financial system.
I have my opinion on what Bitcoin is and what it can bring to the world in the years to come. I could tell you that Bitcoin will create a fairer world in the future by giving power back to the people. Nevertheless, you could argue that this is only my opinion and that it does not allow you to form a real opinion on Bitcoin and its future.
In a way, you’d be right. You must form your own opinion about Bitcoin in order to have a chance to become aware of its phenomenal potential for the future of humanity. Rather than asking you to take my word for it, I suggest you compare Bitcoin against gold and fiat currencies through 10 essential properties of money.
For each of the 10 properties of money that I will exhibit, I will give you my vision of the winner in this three-way fight. At the end of the article, you will be able to form a better opinion on the advantages and disadvantages of Bitcoin while having the opportunity to further your own research on one or the other of the different properties analyzed. | https://medium.com/swlh/bitcoin-facing-gold-and-fiat-currencies-on-10-essential-properties-of-money-441c26a8f51d | ['Sylvain Saurel'] | 2020-05-03 14:32:20.373000+00:00 | ['Cryptocurrency Investment', 'Money', 'Finance', 'Cryptocurrency', 'Bitcoin'] |
Object-Oriented Programming and Dynamic Binding in OOP. | Hello, Folks. In this article, I wanted to dig deeper into concepts of Object-oriented Programming (OOP) and dynamic binding for the purpose of in-depth study and teaching. I hope that you will learn new things or refresh your knowledge through my articles. Meanwhile, this article will talk about dynamic binding too. If you have no information about binding, I would recommend reading my article called variables.
Object-oriented programming (OOP) is a programming paradigm based on the concept of “objects”, which can contain data and code: data in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods).
Many of the languages that support OOP are high-level languages with a multi-paradigm concept. Examples of these languages are Java, Python, and C++. With the advent of OOP, programming has become easier, faster, more secure, and dynamic. This explains why the languages I have listed are on the list of most used languages today. OOP has four major building blocks, which are Abstraction, Encapsulation, Inheritance, and Polymorphism.
Abstraction — Objects reveal mechanisms that are relevant for the use of other objects, hiding unnecessary implementation code. This concept is thought of as an extension of encapsulation and helps developers more easily make changes and additions over time. We can guess what it does when you press any specific button on the phone, but its implementation is hidden from us. For example, when you press the home button on the iPhone, it can give different results in different places, when it is in the program, it returns to home, but we can not see how it is implemented.
Encapsulation — Having logically-different objects within the class, which can neither be accessed nor modified by other classes. The implementation and state of each object are private, as they are defined within the class. The object can manage its state through methods. With this data hiding feature, encapsulation provides high security and avoids unintended data corruption. Three main data modifiers are used during encapsulation. Java supports four modifiers for the visibility of classes, methods, and attributes.
Public — Classes, methods, and variables can be accessed by all the classes.
Private — Most restrictive modifier, which methods and variables can be accessed within the same class.
Protected — Methods and variables can be accessed from the package by classes and subclasses.
No Modifier — Or called package-private because when you don't use any modifiers, you can access it within classes from the package.
Inheritance — When objects are similar or share some common logic, but not entirely the same, the inheritance comes to help. We create a child class, subclass, from the parent class, superclass. Through this way, we form a hierarchy. Let’s say we have the person superclass, and teacher and student inheriting from that superclass.
Polymorphism — Polymorphism means existing in many forms. Let’s say we have a parent class and several child classes. We would like to use the method of the parent class with some modifications for each subclass and each subclass can keep its own version of these methods. We create three objects and treat them like the same type of object. After writing the appropriate methods for each, for example, If the object is triangle, the CalculateSurface() method for the triangle, and if it is circle, the method implemented for the surface will be called. This is called overriding, the ability to define a behavior that’s specific to the subclass type. So subclass can implement a parent class method based on its requirement.
Dynamic binding (dynamic dispatch/run-time binding/late binding) — the process of linking procedure call to a specific sequence of code (method) at run-time. So all calls to overridden methods are resolved at run-time. Let’s say we have a superclass Animal that has a move method, and subclasses of Cat and Fish (with the implementation of move methods in each class).
Animal myAnimal = new Fish();
myAnimal.move(); /*calls the method of Fish, not animal. */
Now let’s add Pirana that derives from Fish, which doesn’t override the move method, but has an extra method called bite.
Animal myAnimal = new Pirana();
myAnimal.move(); /*Pirana doesn't override move, look for one-level up. */
Now let’s try this:
Animal myAnimal = new Pirana();
myAnimal.bite(); /* this won't compile, because myAnimal is a
variable of type Animal and Animal doesn't have the method of bite*/ ((Pirana)myAnimal).bite(); /* We need to cast! or we can create an
object for pirana. */ Pirana p = (Pirana)myAnimal;
p.bite();
I hope this article has shed light on some dark parts of OOP in your brain. Stay tuned for new ones. Peace ✌🏼! | https://medium.com/@mrilyaskarimov/object-oriented-programming-and-dynamic-binding-in-oop-7b56401dd500 | ['Ilyas Karimov'] | 2020-12-11 12:15:57.583000+00:00 | ['Programming', 'Programming Paradigms', 'Object Oriented', 'Oop', 'Programming Languages'] |
The Importance of Nurturing Doubt in an Age of Righteousness | The Importance of Nurturing Doubt in an Age of Righteousness
Fiction’s gift to us is the ability to live in the “land of ambiguity.”
Stories allow us to live the questions.
I often remark that we’re living in “the Age of Certainty,” although perhaps the better moniker is “the Age of Righteousness.” The two go hand in hand.
People shout their truths on social media with such shrillness that life can feel like an ongoing screed. Our streams are rife with taunts and ripostes, demands and disses, rebukes and rebuttals. Rarely does anyone “lower” themselves to ask a question, listen to a response, allow another to explain. And then it’s even rarer to shift one’s own position. Being right is more important than creating an environment for an exchange of thoughts. (It’s good to remember that scolding isn’t an effective rhetorical tool.)
As I read people’s comments online, it’s as if I’m navigating a land of walls and fortresses, with arrows darting from towers on different sides. It’s difficult to speak unless you want to draw your own bow, so, unfortunately, many stay silent.
It’s easy to blame social media for such a state, but I think there’s something going in the world that’s beyond social media — or that social media only reveals: a mindset of righteousness that has infected the culture at large, no matter your political or religious persuasion. We feel threatened, so we’ve chosen sides. The Civil War has begun. We might not carry rifles (yet), but the bullets of Gettysburg have taken the form of tweets, memes, and scowling emojis.
The Salve of Doubt
The answer? I think we need to immerse ourselves in the healing powers of doubt. The kind of doubt that poses questions, sparks curiosity, invites scrutiny. The kind of doubt that entreats us to pause and listen, to surrender our egos, soften our stances, admit fallibility and weakness.
Certainty leads to arguments and wars. Doubt leads to exploration and dialogue. Certainty tends to close us. Doubt by definition opens us.
To say, “I’m not sure,” to hesitate, is to be weak in our culture.
America, however, in our bigness, our brashness, has never had much fondness for doubt. People who operate with doubt are often criticized as being indecisive — and indecision isn’t an admirable trait. Just consider the way leadership styles are esteemed. George W. Bush and Donald Trump both brandish their decisiveness as a badge of macho honor (Bush even proudly gave himself the nickname, “The Decider”), and people often view them as strong as a result. Barack Obama, however, was frequently pilloried for vacillating between options as he pondered decisions. To say, “I’m not sure,” to hesitate, is to be weak in our culture. Swift, sure, and strong decisions mark a good leader.
Bush and Trump each also prefer to make decisions from their guts, seemingly proud that their certainty doesn’t get muddled in the complications of the cognitive realms. On the other hand, Obama’s “wavering” was often due to the fact that he liked to probe his advisors’ thoughts, look at data, reflect on history — a process that took time because it didn’t come from the gut, but from consultation and consensus building (and, yes, the higher cognitive realms).
We’re a society that prefers “full speed ahead” over “I’ll mull it over.” In some ways, we’re still a nation of Wild West gunslingers. If you’re not fast on the draw, you’re dead. Americans cherish certainty, so we’ve defined good decision-making around quickness and surety, even if such a style does lead to long wars we should have never gotten involved in.
“The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people are so full of doubts,” said Bertrand Russell.
Our sound-bite world encourages us to live in the tiny boxes of our certainty.
It’s easy to make quick decisions when you’re an ideologue, a fanatic, or an authoritarian because you’re guided by righteousness. You don’t have or seek any questions. The wiser people, the doubters, speak with qualifications, with references, with a point-counterpoint style that doesn’t lend itself to pithy marketing phrases or the witty one-upmanship that is privileged in so much of our contemporary discourse.
The language of doubt also doesn’t translate effectively onto social media, for social media allows for little nuance. The text fields are just too small to afford a contour, a counterpoint, a tangent — too small to invite in the language of questions, of uncertainty. Our sound-bite world encourages us to live in the tiny boxes of our certainty. We follow our impulses, our gut instincts, like animals.
It takes courage and patience to operate with doubt because you’re essentially without arms, or you’re carrying a very different kind of arms at the very least.
Doubt and Creativity: the Benefits of Being Willing to Surrender
To nurture a mindset of doubt is to nurture a mindset of creativity — to love testing ideas, to revel in mystery and ambiguity, to seek answers in the shadows, to take comfort in the cloudy regions of thought. A mindset of doubt leads us beyond defensive postures because when evidence disproves or weakens our positions, we welcome that evidence and change our positions as a result. Doubt paves the roads of our search. It’s a “willingness to surrender,” as Walt Whitman calls it, which allows our thoughts, our dialogue, to move and shift.
“I like the scientific spirit — the holding off, the being sure but not too sure, the willingness to surrender ideas when the evidence is against them: this is ultimately fine — it always keeps the way beyond open — always gives life, thought, affection, the whole man, a chance to try over again after a mistake — after a wrong guess,” said Walt Whitman in his Camden Conversations.
The hero is the one who can live and even thrive in a teetering world
Doubt — and the questions it opens — is what has always drawn me to fiction, where a character’s uncertainty and quest guide the story. Instead of people putting on masks of invulnerability, as they tend to do in real life, characters revel in their vulnerability. Confusion mixes with needs and desires, causing characters to leap and lunge in good ways and bad ways in their search for satisfaction, comfort, and love. In a novel, the righteous, the know-it-alls, tend to get their comeuppance. The hero is the one who can live and sometimes even thrive in a teetering world, and it’s good for us to engage in such uncertainty.
“Fiction can allow us brief residence in the land of true ambiguity, where we really don’t know what the hell to think,” said George Saunders. “We can’t stay there very long. It’s not in our nature. You can be truly confused by something and then ten minutes later you’re grasping for your opinions like somebody going for a life jacket. But that brief exposure to the land of ambiguity is really, really good for us. To be genuinely confused about something for even a few seconds is good because it opens us up to the idea that what we know right now is not complete.”
Another word for the “land of ambiguity” is life. Imagine if people’s social media posts revealed their confusion, if we wore our uncertainty as a badge of honor.
Montaigne grounded his thought in doubt, even coining the word for his reflections “essais” — attempts. The main character of his essays is “Myself,” which he describes as “bashful, insolent; chaste, lustful; prating, silent; laborious, delicate; ingenious, heavy; melancholic, pleasant; lying, true; knowing, ignorant; liberal, covetous, and prodigal.” His self is full of such contradictory and competing traits because he lives in the “land of ambiguity,” a place that doesn’t allow for the dominance of any single trait that’s the right way to live or be.
Montaigne knew that humans are beasts full of contradictions, that logic contends with irrationality and virtue never truly wins over sin, so we have no business being righteous.
Doubt is the beginning of wisdom.
The poet Ranier Marie Rilke gave perhaps the best writing (and life) advice of all in his Letters to a Young Poet. “Be patient toward all that is unsolved in your heart and try to love the questions themselves, like locked rooms and like books that are now written in a very foreign tongue. Do not now seek the answers, which cannot be given you because you would not be able to live them. And the point is, to live everything. Live the questions now. Perhaps you will then gradually, without noticing it, live along some distant day into the answer.”
When you live the questions as a writer or reader, you’re plumbing your vulnerability, touching a deeper self, revealing all of the good and bad you’re capable of.
As the writer Chris Abani puts it: “The point is to dissolve oneself into the journey of the protagonist, to face the most terrifying thing in narrative, the thing that has been at its heart since the earliest campfire and story. To dare ourselves to imagine, to conjure and then face all of our darkness and all of our light simultaneously. To stand in that liminal moment when we have no solid ground beneath us, no clear firmament above, when the ambiguity of our nature reveals what we are capable of, on both sides.”
To conjure and then face all of our darkness and all of our light simultaneously.
Doubt is the beginning of wisdom. I wonder if we should spend an entire year of high school or college simply immersing ourselves in a curriculum focused only on doubt, exploring all aspects of it, celebrating it. Perhaps we should create a Church of Doubt and attend its services each Sunday morning as a way to prepare for the week ahead.
We’ve strayed so far from from Rene Descartes’ method of skepticism, which formed the foundation of Western thought. Descartes put all beliefs, ideas, thoughts, and matter under a microscope of intense scrutiny in his search for what he could truly know. To be “woke” for Descartes was a matter of living the questions, not the certitudes.
Too much doubt can lead to a paralysis of action, to excesses of conspiracy theories, to distrust of the world, but if we all honored and revered our doubt as a strength, not a weakness, we’d certainly be less likely to build a moat around a political party, a religion, or a school of thought.
We’d also be less likely to build a moat around ourselves. Because doubt spawns tolerance. Doubt spawns acceptance. Doubt spawns enlightenment.
Grant Faulkner is the author of Pep Talks for Writers: 52 Insights and Actions to Boost Your Creative Mojo and the co-host of the podcast Write-minded. His essays on creative writing have appeared in The New York Times, Poets & Writers, Writer’s Digest, and The Writer.
For more, go to grantfaulkner.com, or follow him on Twitter at @grantfaulkner. | https://grantfaulkner.medium.com/the-importance-of-nurturing-doubt-in-an-age-of-righteousness-ff0e650e21a6 | ['Grant Faulkner'] | 2019-02-13 18:34:23.603000+00:00 | ['Creative Writing', 'Doubt', 'Reading', 'Creativity', 'Fiction'] |
The Psychology Principles Every UI/UX Designer Needs to Know | Psychology plays a big part in a user’s experience with an application. By understanding how our designs are perceived, we can make adjustments so that the apps we create are more effective in achieving the goals of the user.
To help you understand the perception of the user, I will introduce some design principles which I think are the most important, and also provide common examples of these principles in practice. Let’s start with the Von Restorff effect:
Von Restorff effect
The Von Restorff effect (also known as the isolation effect) predicts that when multiple similar objects are present, the one that differs from the rest is most likely to be remembered!
Does this ring any bells?
This is the main reason why all call-to-actions (CTAs) look different from the rest of the action buttons on a site or application!
Von Restorff Effect Example
We want users to be able to differentiate between a simple action button and a CTA, in order for them to have a clear understanding what the CTA does, whilst also remembering it throughout their use of the application or site. | https://uxplanet.org/the-psychology-principles-every-ui-ux-designer-needs-to-know-24116fd65778 | ['Thanasis Rigopoulos'] | 2017-07-08 10:22:10.892000+00:00 | ['UX', 'UI', 'User Experience', 'Design', 'User Interface'] |
Forgot THE watershed year of 1980. | Forgot THE watershed year of 1980. American political economy changed forever. Reaganism begat neoliberalism begat financialization and globalization, which ate unions and labor rights, and saw before god that it was good, and begat the 0.1%.
Boomers were once groovy Woodstock generation and are now the investor generation. They used to want to drop acid, now they want to drop capital gains taxes.
It’s like that old TV show with Michael J. Fox, Family Ties, but where the parents do a Mr Spock-type mindmeld with their yuppy investor son and turn into Reagan Democrats: cool ex-hippies with a taste for Italian stuff and Third World ROI.
Thanks to the booming economy and relatively widespread prosperity they grew up in, Boomers got good free public education and then went to college on the cheap, where they could dance, love, smoke and protest for peanuts, compared to today. They grew up with a political economy still resounding from the New Deal and which Eisenhower (no raving Socialist) demanded have unions and organized labor as the life blood of Capitalism and vital to democracy itself. They probably sneered at unions as almost “anachronistic”, and definitely uncool, despite knowing deep down that the prosperity they enjoyed was made possible by bipartisan political economy fuelled by union wages, job security, investment in a more hopeful future, higher tax revenue for public investment, stronger local economies, and so on.
Despite their privileged world, and worldview (yes I’m generalizing in a colorless way), they’ve since gone hippy-rogue Republican. Their political consciousness today would make them solid centrist Republicans back in Eisenhower’s day, their formative years. I know far too many Boomers, my parents’ generation, who today are, politically, at best Reagan Democrats: they can talk the talk on “progressive politics”, probably voted for Obama (Who’d’ve ever thought a BLACK man could be President?!), but with an eye to their pensions, and those travels through Italy and afar, the walk they’re walking is towards a President who offers lower taxes and higher investment returns. A brotherhood of man be damned. | https://medium.com/@christophersean_61731/forgot-the-watershed-year-of-1980-a750e2559474 | ['Sean Christopher'] | 2020-02-15 10:36:09.242000+00:00 | ['America', 'Trump Administration', '2016 Election', 'Social Change', 'Economy'] |
A Year in Review: Data Science and Cybersecurity — The Data Standard | With 2020 quickly coming to an end, the field of cybersecurity is expanding more than ever and has proven to be essential in our technology-driven society. Research firm Frost & Sullivan estimates that by 2030, there will be 91 billion devices, with 10 connected devices per human. In addition to human devices like phones and computers, breaches of security of the finance and banking industry, federal government agencies (as recently experienced with concerns of foreign intervention in the US election), and other devices like self-driving cars and even doorbell cameras have been significant issues for companies and people, and have created a large demand for increased investment in cybersecurity.
Data scientists have been established as key players in cybersecurity through their ability to utilize AI and machine learning to identify, predict, and block threats. A study by MeriTalk found that 84% of respondents used data to block threats. Also, a study by Capgemini found that nearly two-thirds of survey respondents agreed that AI lowered the cost of detecting and responding to breaches by an average of 12%, prompting an increase in AI budgets by 29%. Looking forward, research firm Markets and Markets expects a sharp increase in AI investment over the coming years, up to $35 billion by 2025. Even with more investment in the cybersecurity field, a large barrier for the utilization of AI and big data in cybersecurity is a data science talent gap. The MeriTalk study demonstrates how the single largest issue federal government agencies have in fighting against security breaches is the lack of skilled personnel. Along with the investment in AI and machine learning, finding skilled and experienced data scientists will be essential for the future of the field, and 2020 has proven to be a great step in the right direction for cybersecurity.
Make sure to look out for The Data Standard’s full report on The State of Data Science 2020, coming January 6th! | https://medium.com/@kooshaj/a-year-in-review-data-science-and-cybersecurity-for-the-data-standard-1c2762945c82 | ['Koosha Jadbabaei'] | 2021-01-02 23:36:56.780000+00:00 | ['Data Engineering', 'Data Science', 'Artificial Intelligence', 'Cybersecurity', 'Machine Learning'] |
Focal Arche headphone DAC/amp review: It doesn’t get much better than this | Focal Arche headphone DAC/amp review: It doesn’t get much better than this Amanda Jan 10·10 min read
It’s been less than 10 years since Focal entered the headphone market, but in that short time, the company has established itself as one of the preeminent makers of ultra-high-fidelity cans. TechHive has reviewed four models so far—the Clear, Elegia, Radiance, and Stellia—and all were judged to be excellent, though they will set you back quite a pretty penny.
To round out its headphone-related portfolio, Focal recently introduced the Arche DAC/headphone amp. Does it occupy the same rarefied heights of performance as the company’s cans? Is it a match for the glorious Stellia (which I had on hand for this review)? The answer is a resounding yes!
Mentioned in this article Focal Clear Read TechHive's reviewMSRP $1,499.00See it If you can pull the trigger before the end of 2020, and you already own a Focal Clear, Stellia, or Utopia headphone, Focal will give you a $1,000 voucher that you can apply to your Arche purchase. You’ll find more details on that at the end of this review.
[ Further reading: The best headphones you can buy ]FeaturesThe Arche is a solid brick measuring 7.8 x 2.5 x 11.4 inches (WxHxD) and weighing a hefty 10.25 pounds—the build quality is obviously of the highest order. Inside, the electronics are no less impressive. The DAC (digital-to-analog converter) is an AK4490 from Asahi Kasei Microdevices that provides two channels of conversion for PCM up to 768kHz at 32 bits and DSD up to 11.2MHz (aka DSD256). The Arche’s inputs, however, have somewhat lower limits, which I’ll discuss shortly.
Focal The Focal Arche is a solid brick of high-end electronics.
One feature that’s missing is the ability to decode MQA (Master Quality Authenticated) files. MQA is a lossless encoding scheme developed by Meridian that reduces the size and bandwidth requirements of high-resolution audio files. MQA titles from a provider such as Tidal must be decoded before being sent to the Arche.
True to its audiophile aspirations, the amplifier section is a completely dual-mono, pure Class A, fully balanced design that provides up to 1 watt/channel at 1 kHz for headphones with an input impedance of less than 32 ohms. The amp can drive impedances from 16 to 600 ohms with a frequency response from 10Hz to 100kHz, THD less than 0.001%, and signal-to-noise ratio greater than 116dB at 32 ohms. Those are some seriously impressive specs!
Interestingly, the Arche offers several presets that tailor the amp in various ways. For example, there are presets that match the impedance of the amp to the impedance of five Focal high-end headphones—Clear, Elear, Elegia, Stellia, and Utopia. In addition, there are two additional presets: Voltage and Hybrid. As you might expect, the Voltage setting puts the amp in voltage mode, while the Hybrid setting is a combination of voltage- and current-mode amplification. According to the company, the Voltage setting is designed to sound tube-like, while Hybrid is supposed to provide more of a solid-state sound.
On the back panel are three digital-audio inputs—coax and optical Toslink S/PDIF and a USB-B port—along with a pair of unbalanced RCA analog-audio inputs. Also on the back are a pair of balanced XLR outputs and a pair of unbalanced RCA outputs, which let you use the Arche as a standalone DAC in a 2-channel audio system. Rounding out the back panel is a USB-A connector that is used to update the firmware, a power on/off switch, and an AC power-cord receptacle.
Focal The front panel (top) includes a balanced 4-pin headphone output and unbalanced 1/4-inch headphone out, display, and a multifunction knob for volume control and menu selection. The back panel holds (L-R): USB-A port for firmware updates, USB-B input for digital audio, coax and optical digital-audio inputs, RCA stereo analog-audio input, stereo XLR balanced outputs, and stereo unbalanced RCA outputs.
The coax and optical inputs are limited to PCM digital-audio resolutions up to 192kHz at 24 bits. I tried to find out the maximum PCM resolution of the USB input, but Focal did not respond to this question by the time this review was due. DSD can be accepted only by the USB input. In all cases, the digital-audio signal is converted to 384kHz/32-bit PCM internally.
Mentioned in this article Focal Elegia Read TechHive's reviewMSRP $899.00See it The front panel has a center-mounted electroluminescent (EL) display with a large multifunction knob to its right and two headphone outputs to its left. In its default mode, the display shows the volume setting and selected input, while the knob adjusts the volume. When you adjust the volume, the display also reveals the gain setting (low or high) and the PCM sample rate of the incoming signal. Press the knob twice to display the main menu, turn the knob to select the parameter you want to tweak, and press the knob to select that parameter.
Actually, there are two pages of parameters. The first page includes input selection, gain (low/high), phase (normal/reverse), and amplifier (which lets you select an amplifier preset). The second menu page lets you control the brightness of the display, enable or disable sleep mode after a period of inactivity, reset the unit to factory condition, and display the firmware version and serial number of the unit.
The two headphone outputs include a standard 1/4-inch unbalanced output and a 4-pin balanced XLR output to use with the corresponding cable included with Focal’s headphones. When I reviewed the Stellia, I assumed the connection to each earcup was not balanced because the connector to the earcup has only two conductors, which my contact confirmed. At that time, it didn’t really matter, since I was using the unbalanced cable anyway.
Focal The Arche comes with a solid-aluminum headphone stand that lets you hang your cans with the amp (the Stellia are featured here).
With the Arche, however, I would use the 4-pin balanced cable, so I wanted to verify that the headphone itself does not have balanced internal wiring. This time, the company said the internal wiring is, in fact, balanced. Wait, what? I finally got the story straight after talking with the Focal product manager.
With that 4-pin cable, two of the pins carry the positive and negative signals for the left channel and the other two pins carry the positive and negative signals for the right channel. The two conductors on the connectors for each earcup convey the positive and negative signals for that channel to opposite ends of the voice coil, which is the definition of a balanced configuration. By contrast, in an unbalanced connection, the voice coil is driven only by the positive signal; the negative ends of both voice coils are tied together and to a common ground.
One nice touch is the solid-aluminum headphone stand that comes with the Arche. You insert it into one of the slots on the top of the unit, and you can hang your headphones on it so they don’t get lost.
Connection, Settings, CablesI started by connecting the Stellia headphone to the Arche using the 4-pin XLR cable. Next, I connected my iPhone XS to the Arche’s USB input using a Lightning-to-USB camera adaptor and a USB-A-to-USB-B cable. When I turned on the Arche, the phone reported that the device requires too much power and wouldn’t connect. Why would the Arche require any power at all? It’s plugged into an AC wall socket.
When I asked Focal about this, they said this is a known issue with some DAC/amps, though they are not sure why it happens. They recommend using the Apple Lightning-to-USB 3 camera adaptor, which has a separate Lightning port that you can connect to power. This is pretty kludgy, and I doubt that many people will use the Arche with their iPhone anyway.
Mentioned in this article Focal Radiance Read TechHive's review$1,290.00MSRP $1,290.00See iton Headphones.com So, I connected the Arche to my iMac via USB and played tracks from the Tidal Master library using the Tidal app, which worked fine. During my initial listening, I tried different amp presets, but I heard virtually no difference at all. The Clear preset might have been just a tad brighter than the others, but the difference was so tiny that it could easily be dismissed. I suspect it would make a bigger difference with headphones that have a much higher impedance.
I also tried the Voltage and Hybrid settings. The Voltage setting was a bit louder and richer, and the sound was slightly more present. I ended up sticking with the Stellia preset for most of my listening, but I could definitely recognize how the Voltage setting might be appealing.
In addition, I tried the low and high gain settings. As expected, they sounded the same except for level; I could easily match the perceived level at both settings with the volume knob. I was happy to discover that the Arche comes out of sleep mode with the volume set to 20, no matter what the level was when it went to sleep, which is great to avoid unpleasant surprises.
My last comparison was between the balanced and unbalanced cables. Again, the difference was very minor. The balanced connection sounded a bit more open and present, but not by much. Still, I recommend using it with the Arche.
Focal The Focal Arche DAC/amp is an elegant bit of kit.
Music timeIt’s December as I write this, so I started with Jacob Collier’s new single, “The Christmas Song.” This is a rich, dense, a cappella arrangement that’s classic Collier with just a bit of synth bells and a melodica solo. It exhibits a wide pitch and dynamic range, and the Arche rendered everything beautifully. The lead vocal was entirely natural, and the backing vocals were perfectly balanced in a clear, open presentation.
I’m a big fan of Donald Fagen, co-founder of Steely Dan, so I cued up the title track from his 2006 solo album Morph the Cat. It starts with a low bass line, which sounded deep and rich from the Arche. The vocals, horns, guitar, electric piano, and drums were similarly exquisite—well-balanced with superb imaging.
Mentioned in this article Focal Stellia Read TechHive's reviewSee it For some relatively out-there jazz, I listened to “Autumn Pleiades” from Dimensional Stardust by Rob Mazurek and Exploding Star Orchestra. This piece is in the musical form of a canon played by a large jazz orchestra, slowly building by adding instruments and melodic variations over a repeating bass line and harmonic progression. All instruments were clearly delineated, yet they formed a cohesive whole in a clean, open sound stage.
Solo piano is always a challenge for any audio system, so I listened to “Over the Rainbow” from Dave Brubeck’s album Lullabies. The sound of the piano was rich, well-balanced, and open with no hint of congestion.
One of my favorite discoveries this year is “Lonely Alone” from the album Threads by Sheryl Crow. On many of the tracks, she’s joined by famous singers—in this case, Willie Nelson. It’s an amazing mix, deep and immersive, almost as if it’s in surround. The vocals by Crow and Nelson were entirely natural and right up front, while the rest of the instruments, including deep bass, guitar, brush drums, organ, and harmonica, were clearly delineated within a wonderfully cohesive whole. This is how to mix a country song!
Focal You can dock the included headphone rest to the top of the Focal Arche AC/amplifier.
And now for something completely different: “Scuba Scuba” from Underwater Sunlight by Tangerine Dream. This rhythmic ambient track is almost entirely electronic with a wide frequency range and lots of stereo effects. The Arche rendered it all beautifully: clean, clear, and open.
For some classical music, I listened to the first movement of Vivaldi’s Concerto for Two Mandolins in G Major as performed by Avi Avital, Alon Sariel, and the Venice Baroque Orchestra on Art of the Mandolin. As I had come to expect, the sound was clean and open; I could hear each mandolin clearly along with each section of the orchestra. Even the super-low notes from the theorbo came through beautifully.
Finally, I cued up “The Great Gate of Kiev” from Mussorgsky’s Pictures at an Exhibition as orchestrated by Ravel and performed by the Berliner Philharmoniker under the direction of Simon Rattle. This piece has wide dynamic range from super-quiet passages to a crashing finale, and I could hear each section and solo instrument clearly. I was surprised, however, that the overall sound was a bit restrained—not veiled or congested, just not as present as I had heard on other tracks.
I wondered if it was the recording, so I played another big orchestral favorite, “Pines of the Appian Way” from Respighi’s Pines of Rome as performed by Filharmonica della Scala under Riccardo Chailly. Much better! The overall sound was more present and unrestrained, and the almost subterranean bass drum came through beautifully.
Bottom lineThe Arche is a worthy companion for any of Focal’s high-end headphones as well as just about any headphones you care to use with it. Its sound quality is impeccable: clean, clear, open, and utterly neutral. Every track I played sounded completely natural with no congestion, wide dynamic range, and effortless reproduction throughout the entire audible frequency spectrum.
The feature set is equally impressive. It offers impedance-matching presets for Focal’s headphones along with other settings to optimize the output for a wide range of cans and a variety of inputs and outputs. It can even act as a standalone, fully balanced DAC for speaker-based 2-channel audio systems. The only thing missing is MQA decoding.
As you might expect, all that capability and performance doesn’t come cheap: The Arche’s list price is a whopping $2,490. But if you’re thinking about investing in one of Focal’s high-end headphones and a comparable headphone amp, the company is offering some great package deals on Amazon through the end of the year. You can get the Arche and Clear for $3000 (a savings of $980 off the separate prices), the Arche and Stellia for $4,000 (a savings of $1,480), or the Arche and Utopia for $5,000 (a savings of $1,480).
And if you already own one of those headphones, Focal is offering a $1,000 voucher toward the purchase of an Arche through the end of 2020; click here for details.
If you’re a headphone enthusiast with very deep pockets, the Focal Arche is a worthwhile investment in your listening pleasure. And if you also have a high-end 2-channel audio rig, the Arche can serve double duty as an outstanding DAC with a fully balanced output. That’s two components for the price of one, which makes it a smart investment in my book.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details. | https://medium.com/@amanda97945983/focal-arche-headphone-dac-amp-review-it-doesnt-get-much-better-than-this-7815f62abbbe | [] | 2021-01-10 08:49:14.583000+00:00 | ['Mobile', 'Connected Home', 'Electronics', 'Security Cameras'] |
Culpable | Culpable
Photo by Jr Korpa on Unsplash
I know you did it
switched out a few
lines
spun some virtue
into my web
when I was distracted
by vice
The sound of you
comes pressing through
this vision
I had for me
I watch the messages
line themselves up
take votes
surrender ante’s
fold
and then realign
to tell me
about the night
before last
Because if it wasn’t you
creeping through
my window
whispering down the dog
and curling back
the covers to join me
I don’t want to know | https://medium.com/the-bazaar-of-the-bizarre/culpable-f11b8a462f0c | ['J.D. Harms'] | 2020-12-17 06:37:16.151000+00:00 | ['Musing', 'Imaginings', 'Image', 'Poetry', 'Desire'] |
“SCREW HUMBOLDT ” | “SCREW HUMBOLDT ”
“Screw Humboldt. How could the Prussian claim any authority on geo-distribution on the Chimborazo if Humboldt half-climbed once the volcano for a few hours and then left? Humboldt’s maps are cute but wrong. In fact, empirically they are crap. Let me tell you how to really do and map biodistribution.” Jorge Cañizares-Esguerra Sep 23, 2019·5 min read
For the lovers of Humboldtiana, I am appending a few images. These images were produced at about the same time (ca 1803–04) by Alexander von Humboldt (image 1)
Humboldt. Boceto nivelacion plantas Chimborazo. 1803. Museo Nacional Colombia. From Mauricio Nieto Olarte’ La obra cartografica de Francisco Jose de Caldas (2006)
and the son of Popayan, Francisco Jose de Caldas (the rest), when both lived in Carlos Montufar’s hacienda near Quito for 8 months.
Caldas-Plano de nivelacion de algunas plantas-1803-From Mauricio Nieto Olarte’ La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas. From Mauricio Nieto Olarte’ La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas..From Mauricio Nieto Olarte’ La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas-From Mauricio Nieto Olarte’ La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas- Museo Historia Natural Madrid. From Mauricio Nieto Olarte’ La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas. Museo Historia Natural Madrid. From Mauricio Nieto Olarte’ La obra cartografica de Francisco Jose de Caldas (2006)
When Humboldt left Quito for Lima-Mexico-Cuba-Philadelphia-Paris, he took the Marquess Carlos Montufar, not Caldas. Caldas was no Marquess. He was a struggling letrado (intellectual) of very modest means, completely dependent on the charity of well-off friends to get books: the scion of a family constantly struggling with illegitimacy as his sisters had shacked up with priests.
Caldas was furious.
Caldas then wrote to Celestino Mutis a series of letters: In these letters and reports, Caldas summarized his findings and research program. To paraphrase: “Screw Humboldt. How could the Prussian claim any authority on geo-distribution on the Chimborazo if Humboldt half-climbed once the Chimborazo for a few hours and then left? Humboldt’s maps are cute but wrong. In fact, empirically they are crap. Let me tell you how to really do and map biodistribution. Here are some maps of the biodistribution for several plants (in different colors) in the northern Andes. They include several parallel measurements of temperature, barometric pressure, heights, and other variables. Bye. Don’t let bed-bugs bite you, great man!”
Caldas-Plano de nivelacion de algunas plantas, planos 1–2. From Mauricio Nieto Olarte’s La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas, planos 3–4. From Mauricio Nieto Olarte’s La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas, planos 5–6. From Mauricio Nieto Olarte’s La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas, planos 7–8. From Mauricio Nieto Olarte’s La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Plano de nivelacion de algunas plantas, plano 10. From Mauricio Nieto Olarte’s La obra cartografica de Francisco Jose de Caldas (2006)
Caldas-Nivelacion barometrica de Santa Fe. From Mauricio Nieto Olarte’s La obra cartografica de Francisco Jose de Caldas (2006)
Humboldt KNEW about all these maps and of Caldas’s innovations. Humboldt was pissed by Caldas’s much better command of accuracy in measurement and empirical mastery (Caldas irritated Humboldt by recalibrating the German’s instruments and pointing out his inaccuracies — Warning: Don’t try this on a Prussian).
What did the great Prussian, anticolonialist, lover of birds do? Humboldt did not cite Caldas in his “groundbreaking” 1806 Geography of Plants.
It took Humboldt 23 years to introduce the name of Caldas once, in passing. Caldas appears buried in a footnote in the 1826 edition of Geography of Plants, along with a list of some 20 other scholars building on Humboldt’s “original” insights. Humboldt buried Caldas in neglect. Humboldt was unable to concede Caldas any epistemological authority in 1803 and less so after 1809, the year Caldas took Humboldt publicly to task in the pages of Semanario del Nuevo Reino de Granada.
In 1809, Caldas published an annotated translation of Humboldt’s original manuscript of Geography of Plants that the Prussian left as a gift to Mutis. It was not nice. Caldas turned his edition into a vehicle to criticize the Prussian sage: Humboldt was a bird of passage whose many empirical flaws stemmed from lack of acquaintance with local realities.
In 1809, Caldas produced an amazing, annotated translation of Humboldt’s Geography of Plants in the Semanario del Nuevo Reino de Granada (1809 #16–26).
Caldas shreds the original to pieces, relentlessly showing every empirical mistake. A typical note by Caldas would read “Baron von Humboldt visited Popayan at the time of storms. He stayed for 20 days. He left/vanished (desaparecio) holding ideas about [Popayan’s] atmosphere that are very different from those held by those who have [actually] grown under the influence of this sky” (note 21). In another passage on Humboldt’s generalizations about herds of sheep and goats in the Venezuelan frontiers, Caldas would forcefully call the Prussian out: “I believe Humboldt is wrong; there are no herds of goats in the countries where the boundaries of agriculture end. Actually, goat herds inhabit temperate countries and tropical valleys.” (note 25)
Do Motion Capture and Posthumous Performances Have Rights? | 20th Century Fox
Leave Synthespians Alone! Motion Capture and Posthumous Performances Are People Too!
“Apes together strong.”
This quote, from War for Planet of the Apes (one of the best films of 2017), was uttered by the lead character in the concluding chapter of Matt Reeve’s prequel trilogy. Caesar, played by Andy Serkis, is the central character and emotional backbone of all three films.
He has more screen time than any other character in the films, but there is no point at which the audience watching the film actually sees Andy Serkis. His entire performance, as well as the performances of a majority of the cast, was captured with motion capture technology.
Questions about the status of screen performances have entered more conversations since The Lord of the Rings trilogy, where Serkis played the character Gollum. The Gollum Problem, as Phillip Auslander refers to it, explores the relationship between performance and intellectual property law.
This looks into both motion capture and posthumous performances in relation to intellectual property.
Motion capture refers to “the recording of human body movement for immediate or delayed analysis and playback.” It maps “human motion onto the motion of a computer character.”
Technology of this type dates back to 1915 when a process known as rotoscoping became a technique used by animators and filmmakers. To rotoscope is to trace animation over existing film footage featuring live actors playing out entire scenes.
According to Lisa Bode, Serkis’s performance as Gollum…
…was a seamless synthesis of acting and animation, which, as a result of problems with its categorization and appraisal, was overlooked by the Oscars.
Visual effects (VFX) departments have grown larger as technology has allowed artists and storytellers to create computer-generated visuals. A VFX supervisor works closely with the director, often side by side, and helps design the look of the film.
The relationship between the visual effects supervisor and the director is changing quickly as the interest and capabilities of visual effects increase.
For instance, the 1999 action film The Mummy used just over 200 shots with visual effects components. In contrast, just over ten years later, the 2011 comedy Bridesmaids used 225 shots with visual effects components.
Movies do not have to be big-budget fantasy films to incorporate extensive visual effects. Digital tools have quickly changed the landscape of visual storytelling and have become a fundamental part of the film making process.
Performance capture takes the scene out of mise-en-scene and relocates it to the computer. — Lori Landay
Photo by Ryan Garry on Unsplash
Filmmakers such as Peter Jackson and James Cameron use extensive visual effects that become necessary in order to tell the story. Cameron refers to motion capture as performance capture. His films involve cameras being mounted to the heads of the actors, recording their eye movements and facial expressions.
This breakthrough in technology ushered in a new style of performance and acting. According to Lori Landay, “footage of action sequences being filmed this way resembles an abstracted performance of movement more than the narrative film as we have come to know it.”
So how does the spectator differentiate between what is real and what isn’t?
And what role does CGI play in this cultural shift in storytelling when performances are created where the audience cannot tell what is real and what is CGI?
And how would that affect performers like Andy Serkis?
What new relationships between performance and spectatorship, between the visible and invisible, arise from bodies that are not actually there? Spectators are aware of the loss of indexicality (whether they know that term or not) due to the massive publicity around the digital production process; they know that what they are seeing was never actually there. — Lori Landay
Dan Stevens was the central romantic lead in the live-action adaptation of Beauty and the Beast from 2017. There are most likely plenty of people that watched that entire film without realizing that Matthew from Downton Abbey was playing the Beast. He is visually and vocally unrecognizable.
And yet, Dan Stevens created a complex character and acted opposite a live actress on real sets while filming the movie. He wore a grey lycra suit each day on set. It was covered in sensors and his movements were carefully monitored and detected by computers.
He performed each scene opposite Emma Watson, was present for her to act against, and created the performance that the animators would use to craft the final product. At the end of each filming day, he would go into a small room where every pore of his face was monitored, and he would do all the scenes they had filmed that day again.
Stevens commented that Watson would join him and they would do their scenes again, just to capture his facial expressions. He was still acting opposite Watson and still performing the scenes.
Like Serkis, he performed every aspect of the character as if he was wearing a suit that looked exactly like the final renderings of the Beast character. How would this performance compare to the actors who perform the Beast character on a stage? Just a thought.
According to Phillip Auslander, a leader in performance and technology research in the field of theatre studies, “Live performance is excluded from copyright protection because of the belief that, as an unfixed mode of cultural production, it cannot be copied and therefore lies outside the economy of reproduction.”
This brushes against motion-capture performance and the question of whose performance it actually is. “From a legal perspective,” Auslander states, “the Gollum problem is one of a series of interrelated scenarios in which digital information derived from a performer is used to create performances, and often performer, with varying degrees of independence from the source.”
Once a digital performance is generated, the digital clone of that performer can “undertake an infinite variety of performances the actual performer never executed.”
This leads to a conversation about the point at which the actor no longer counts as the creator.
Does that actor, who provided the movements and persona for the collected data, remain involved in the future manipulation of that image?
The film Rogue One featured Peter Cushing as Grand Moff Tarkin. Cushing played this character decades earlier in the original Star Wars trilogy. Rogue One, whose timeline lands in-between episodes 3 and 4, featured scenes of Grand Moff Tarkin speaking opposite live actors playing new characters.
Now what makes this noteworthy is the fact that Peter Cushing passed away in 1994, 22 years before he appeared in Rogue One.
Industrial Light & Magic/Lucasfilm
Check out this LA Times article about Rogue One!
His image was re-created and regenerated using archival footage and computer technology to make it appear as if the actor was available for shooting.
Also in the film is a brief cameo of Princess Leia. Ingvild Deila is the talent credited for Princess Leia, but the image on the screen is identical to the Leia we are familiar with in A New Hope.
Another example of a posthumous performance is Marlon Brando in Superman Returns. Using footage Richard Donner shot for Superman (1978), the filmmakers placed Brando's face in the Fortress of Solitude scene with Lex Luthor, where he reprised his role as Jor-El.
Within this performance, “there is recent evidence of a similar renegotiation of the aesthetic, ethical, ontological, and epistemological problems of posthumous performance, as well as an engagement with a discourse on-screen acting and technology.”
In this particular case, the scene is constructed in a way in which the filmmakers took “pains to show that the posthumous performance is beholden to and constructed with a kind of reverence around performance details originally recorded by the actor.”
The use of a digital clone of an actor is subject to a rather clear legal analysis. Using the image of a famous person without their permission “would run afoul of both the right of publicity and the Lanham Act,” says Auslander. A production company can reuse digital characters in contexts that go beyond the initial project the image was captured for.
But who owns and controls a synthespian? Who is the author? The performer or the animators?
One thing that is troubling, then, about posthumous performance is the way it challenges or disorientates familiar, taken-for-granted ideas about screen acting as an effect produced by an intentional, present human being” — Lisa Bode
Derek Burrill suggests that “traditional film acting, often characterized heretofore as lacking indirect presence when compared with stage acting, seems ‘live’ when contrasted with the methods used to create digitally enhanced characters.” This supports Auslander's thesis that “liveness is defined historically and is always open to redefinition with the appearance of new technologies.”
Photo by Ryan Garry on Unsplash
Lisa Purse states that there is a “long-standing tendency both within film and broader culture to bracket acting as somehow the least intrinsically cinematic aspect of film.” Purse suggests, “rather than splitting “acting” from digital animation or technology, we might instead imagine a screen performance continuum encompassing all the modes of technological mediation and augmentation of performance.”
So, is a motion-capture performance subject to copyright? If so, how does this enter the conversation about screen acting and live performance?
Do motion capture and posthumous performances have rights? It’s complicated.
One thing is certain though. Motion capture performances should be viewed as a distinct style of performance and should be respected by audiences as such. They are valid and impressive modes of performance, and the lack of recognition by the media, the critics, and the fans is a shame.
To close with the same quote from the start, “Apes together strong,” can’t the synthetic and the thespian be strong together? I think so.
Interested in further reading about liveness, motion capture performance, and cinema studies? Here are a few interesting pieces I read in my research-
Auslander, Phillip. Liveness: Performance in a Mediatized Culture. 2nd ed. New York: Routledge, 2008.
Birringer, Johannes. "Contemporary Performance/Technology." Theatre Journal 51.4 (1999): 361–368.
Bode, Lisa. "No Longer Themselves? Framing Enabled Posthumous "Performance"." Society for Cinema & Media Studies 49.4 (2010): 46–70.
Cram, Christopher. "Digital Cinema: The Role of the Visual Effects Supervisor." Film History 24.2 (2012): 169–186.
Fleming, David H. "Digitalising Deleuze: The Curious Case of the Digital Human Assemblage, or What Can a Digital Body Do?" Deleuze and Film. Edinburgh: Edinburgh University Press, 2012.
Kaye, Nick, and Gabriella Giannachi. "Acts of Presence: Performance, Mediation, Virtual Reality." TDR 55.4 (2011): 88–95.
Landay, Lori. "The Mirror of Performance: Kinaesthetics, Subjectivity, and the Body in Film, Television, and Virtual Worlds." Cinema Journal 51.3 (2012): 129–136.
Manovich, Lev. "Kino-Eye in Reverse: Visualizing Cinema." Cinematicity in Media History. Edinburgh University Press, 2013.
Popat, Sita. "Missing in Action: Embodied Experience and Virtual Reality." Theatre Journal 68.3 (2016): 357–378.
Purse, Lisa. "Digital Imaging and the Body." Digital Imaging in Popular Cinema. Edinburgh: Edinburgh University Press, 2013. 53–76.
Introducing Yclub Finance (YCLUB) Finance | Yclub Finance is an upgrade of the old Yearn making some important adjustments in its first layer protocols. As an Automated Market Maker (AMM), Yclub Finance acts a DeFi yield aggregator for the lending platforms that rebalances for the maximum yield during contract engagement. For lending providers, Yclub Finance brings benefit to swapping, shifting the funds autonomously and seamlessly between dYdX, Aave, and Compound.
Features
yFarm
yFarm is a loan aggregator that aims at all times to achieve the highest yield for the supported coins (DAI, USDC, USDT, TUSD, sUSD, or wBTC). It does this by programmatically transferring these coins between multiple lending protocols running on the Ethereum blockchain (e.g. AAVE, dYdX, and Compound).
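To make the profit-switching idea concrete, here is a minimal sketch of the selection logic; the protocols are those named above, but the rates, threshold, and function names are illustrative assumptions, not Yclub's contract code:

```python
# Minimal sketch of yield-aggregator profit switching (illustrative only).
# The APY figures below are made-up placeholders, not live rates.
lending_rates = {
    "aave": 0.041,      # hypothetical supply APY for DAI on AAVE
    "dydx": 0.037,      # hypothetical supply APY for DAI on dYdX
    "compound": 0.052,  # hypothetical supply APY for DAI on Compound
}

def best_protocol(rates: dict) -> str:
    """Return the lending protocol currently paying the highest yield."""
    return max(rates, key=rates.get)

def should_rebalance(current: str, rates: dict, threshold: float = 0.002) -> bool:
    """Move funds only if the best rate beats the current one by a margin
    large enough to justify gas costs (the threshold is an assumption)."""
    best = best_protocol(rates)
    return best != current and rates[best] - rates[current] > threshold

print(best_protocol(lending_rates))             # -> "compound"
print(should_rebalance("aave", lending_rates))  # -> True
```

In practice the rates would be read on-chain and the move executed by the contract, but the core decision is this simple comparison.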
yCover
yCover, underwritten by Nexus Mutual, is non-KYC pooled insurance coverage. It consists of three components: the Insurer Vaults, the Insured Vault, and Claims Governance. The Insurer Vaults contain the assets used to insure claimants, the Insured Vault holds the assets of claimants who wish to be insured, and the insurance arbitration process is handled through Claims Governance.
YCLUB Engine Interface
Earn performs profit switching for lending providers, moving funds between dYdX, Aave, and Compound autonomously and seamlessly.
yVault
Each vault follows a unique strategy that is designed to maximize the yield of the deposited asset. Using a YCLUB Vault is like having access to the most advanced money manager in the world.
YCLUB holders are staking their tokens in the Governance contract in order to claim profits. Profits are frequently shipped out from the yVault to this contract, which briefly holds profits for owners until they are dispersed. After yVault has accrued a $400,000 fund, proceeds are returned to the Governance contract. This surplus is used to pay for various operating costs, including developer incentives and community funding.
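A rough sketch of that buffer logic, with the $400,000 target taken from the text and everything else assumed for illustration (the real accounting lives in the contracts):

```python
# Illustrative sketch of the vault profit buffer described above.
# Variable names and flow are assumptions for clarity, not contract code.
BUFFER_TARGET = 400_000  # USD buffer the vault accrues before paying out

def route_profit(buffer_balance: float, new_profit: float):
    """Fill the buffer first; anything beyond the target is surplus
    that goes to the Governance contract for distribution."""
    room = max(0.0, BUFFER_TARGET - buffer_balance)
    to_buffer = min(new_profit, room)
    surplus = new_profit - to_buffer
    return buffer_balance + to_buffer, surplus

balance, surplus = route_profit(buffer_balance=395_000, new_profit=20_000)
print(balance, surplus)  # -> 400000.0 15000.0
```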
yGovernance
The governance model is the first of its kind and allows for free-of-cost and publicly verifiable votes by YCLUB holders. The model incentivizes YCLUB holders with yETH and yUSD to cover gas costs.
yZap Fees
Users can exchange different assets bi-directionally into pooled interest-bearing tokens using the yZap function on Yclub.finance. The goal of yZap is to encourage a more seamless and frictionless swap between different coins.
About Us: | https://medium.com/@yclubfinance/introducing-yclub-finance-yclub-finance-b91477fd28fe | ['Yclub Finance'] | 2020-12-18 17:54:38.396000+00:00 | ['Staking', 'Farming', 'Yfi', 'Yclub'] |
Going from screen to VR: trading resolution for immersion | Should I get a TrackIR or VR? The question, made by someone, recently, at the Orbx forums, does not have a “one size fits all” answer. But let me say that once I tried Virtual Reality in a flight sim, there is no way I will use a monitor again.
Is Virtual Reality good for flight simulation? I would say a clear YES, but I believe other people will have different opinions (read the forums at Orbx for the whole discussion), and to a certain extent they may be right. If you do not have a powerful computer, you had better stay with a flat screen with TrackIR or any similar solution available. Also, if you like to take lots of notes, read maps and do all the other things associated with planned flights, you might prefer not to use a VR headset.
On the other hand, if you fly mostly GA aircraft, and use the simulation to have the sensation of flying over a real landscape, nothing beats the experience of VR. That applies to the available flight simulations today, from Prepar3D, to X-Plane 11 or DCS, to mention the most popular and those I use regularly. The implementation of VR in Prepar3D is not the best, I must admit, and that has kept me from using it, despite the amount of money I’ve invested in Orbx sceneries. X-Plane 11’s VR is better, but you really need a powerful computer to run it. DCS is even better, but everybody is expecting that the implementation of Vulkan — both in XP and DCS — makes the simulations run faster than with OpenGL. But don’t expect miracles.
Track IR, DelanClip and others
In fact, no computer available today can run these simulations at 100fps in VR, as some seem to claim. And it will probably be a long time before any man-made machine grants us that vision. Anyway, unless you're one of those people who are too sensitive to low frame rates, 30fps will make your experience in X-Plane 11 quite smooth. If you have the machine to run it, that is. But let's get back to the question of VR.
There is, in fact, no simple answer when it comes to choosing between TrackIR and VR. If you get motion sickness, then VR is not for you. Add this to the other reasons mentioned earlier, and you might have the answer you need: TrackIR is the way to go. Believe me, TrackIR is great; I used it for years, until my older son, Miguel, stole mine to use in Elite Dangerous. I then acquired another system, DelanClip, which works similarly, and there are others, like Track Hat Clip, offering the same kind of experience. They are great… until you try a Virtual Reality headset.
So, that’s how I got into VR. From screen to TrackIR and DelanClip and finally Oculus Rift. Since I tried the Rift, there are days when I wish I could go back to the clear, detailed image I have on the screen, a 32-inch 2560x1440 monitor. It’s much better, so much better, that I understand if you want to keep using it. But believe me, the immersion provided by VR is mind-boggling. And as I do not get motion sickness, I can fly for hours without any problem.
Connecting With Your Child Through Color | A good way to connect and learn more about the children in our lives is through the language of color. Every color has its own unique frequency and when you begin to work with color with your child, they will show you what they need.
I was working with a mom recently who asked how she could best support her 7-year-old son. When I connected with her son at a soul level, he gave me some information on how she could support him.
What he showed me was a poster board with his picture placed in the lower center portion of the board. Around the top of the board were placed round circles of the 24 colors that his mom had started working with after taking the Inner Diamond meditation class.
He wanted his mom to create this board and then, each day, ask him what color or colors he needed for that day. He would then choose some colors and place them around his picture. Based on the colors he would choose, his mother could determine what he needed.
For instance, if he chose the color royal blue, he needed some energetic protection. If he chose the color peach, he needed some joy in his day. Choosing emerald green showed his mom he needed some healing.
Especially when a child is young, using color is a great way for parents to know what they need and to help them. In my next blog, I will be sharing a story of how color helped a mom and her 3-year-old at her daycare. | https://medium.com/@annetterugolo1/connecting-with-your-child-through-color-962ddcdc9cfd | ['Annette Rugolo'] | 2021-02-02 08:37:06.573000+00:00 | ['Dowsing', 'Meditation', 'Parenting Advice', 'Color Theory', 'Color Therapy'] |
PXUC (PIXIU coin): The world’s only gold cryptocurrency backed by real gold mining areas!! | Pixiu Mining Corporation(PMC) utilize this meaning to issue PIXIU coin in the form of cryptocurrency, so that more people can make profits by holding stable gold assets. We have the faith that PIXIU coin will become the highest-valued gold cryptocurrency around the world within three years.
🔎Forty mining areas in Northern California→ Located approximately 5 miles and 11 miles west of the city of Quincy in Plumas County. There are forty mining areas in total, divided into large and small parts with a total coverage of 769 acres, approximately 3.11 million square meters.
🔎Gold reserves assessment→ Stickel & Associates, an independent geological consulting company, visited the mining areas and produced a resource evaluation report based on government documents and the reports of other geological experts. The test results also showed the presence of platinum metal.
🔎Mine asset assessment→ KPMG was engaged to analyze the value of the mining assets. KPMG LLP issued an analysis report; after deducting the total mining cost of $3.335 billion, the estimated asset value is $5.447 billion. The detailed appraisal report is shown in the appendix.
Look here👉 https://pixiumining.gold/dat/whitepaper_en.pdf
✔️Website👉 https://pixiumining.gold/en/
✔️Twitter👉https://twitter.com/Pixiu09958435
✔️Telegram👉 https://t.me/pxucofficalgroup
WELCOME to QUINCY | https://medium.com/@pixiumining/pxuc-pixiu-coin-the-worlds-only-gold-cryptocurrency-backed-by-real-gold-mining-areas-f553e6ec244b | ['Pixiu Mining Corporation'] | 2020-04-24 10:41:51.953000+00:00 | ['Gold', 'Coins', 'Cryptocurrency', 'Mining'] |
waiting for SDGs | In a fine 3rd Feb 2020 morning, I attended a lecture by Prof. Jeffry Sachs on Sustainable Development Goals (SDG) and his pessimistic view on the progress made by countries and challenges which SDGs are facing given the increasing hostility towards multilateralism and the tilting of powerful states like USA and UK towards popularism and protectionism and bla bla bla.
By some strange coincidence, I was watching a video lecture on YouTube that same afternoon by Nick Mount on Samuel Beckett's "Waiting for Godot", and bingo! I immediately connected the dots and started to write this parody of Godot, and here you have "Waiting for SDGs".
If you have not read that play, and without risking spoiling the plot, "Waiting for Godot" is all about the absurdity of a life with no purpose, no reckoning of time and an absolutely value-less system. Similarly, the SDGs are increasingly becoming like orphans with no guardian. Looking at the way the world is progressing with the SDGs and the promise of 2030, one can, without doubt, see the absurd amnesia the world is suffering from, and countries waiting for the SDGs look much like the two poor gentlemen in Beckett's play waiting for Godot.
Every day we are going back instead of going forward waiting for some progress on the SDGs. During the MDG era, in the early 2000s, there was much more optimism: the UN played a great role in bringing people around the table, and countries poured in money and created success stories. For example, Jeff talked about the fund to fight malaria in Africa and how it saved an estimated 40 million people.
Progress on the SDGs needs rich countries and individuals giving grants rather than loans in social sectors (particularly education and health), but the libertarian elders in Washington and New York believe in the hardcore protestant ideology of market forces. Their "white man's burden" ideology will keep the least developed countries in perpetual slavery (think of the (un)Lucky slave and his master Pozzo in the play).
Rich countries love to use nice photos of poor boys and girls from poor countries — better if they are smiling — and brag in the UN about how they are making progress here and there, but none feel comfortable getting serious — beyond these casual gestures — and committing to solid goals with accountable targets based on informed decisions. At the end of the day, the poor will remain an object of amusement and fun, and the rich will find them good news coverage items. These rich countries have lost memory and any tracking mechanism of time and have probably turned blind as well (think of Pozzo again). They do not like data or statistics; they just want some media coverage and photoshoots with smiling poor girls to win the hearts of voters and create an alternative truth around a populist fan base (yes, I am talking about Trump and Boris Johnson).
What is the solution? Will we ever meet Godot? Will humanity triumph on 1st Jan. 2030 and celebrate the SDGs, or will the drama continue with some new set of goals, as we saw at the time of transition between the MDGs and SDGs?
I think we can eventually meet Godot and get out of our current absurd world if we first succeed in identifying the root cause of the problem.
In my view, the problem lies in the absurd modern world that we are living in today, where there is no reference to a value system. In other words, and here I agree with Jeffry Sachs, the elders who are driving the SDG agenda are libertarians who have no respect for human values, culture, religion or history. The solution, then, is to return to a value system that has respect for local culture and religion. Attention needs to shift from placing money at the center of the development agenda to placing the poor human being at the center of the development agenda. The White Anglo-Saxon Protestant (WASP) elders need to be humble, respect other cultures and stop being imperial forces who assume the burden to civilize other nations.
100 Day Weight Loss Challenge | Best Weight Loss Transformation in 2022 | If your goal is to drop 40 pounds or lose weight in general, it’s important to think big picture while implementing daily changes to make it a reality. One of the best ways to do so is by creating a 100-day transformation plan.
Important Tips:
While it’s possible to lose 40 pounds in 100 days, it may be unhealthy to try to lose such a large amount of weight in just three months’ time.
Healthy Weight Loss Goals:
While there are dozens of diets and weight loss regimens on the web, it’s important to remember that all bodies are different and, as such, everyone requires different food, in different amounts, consumed at different times, as well as different levels of exercise, to hit their target weight.
While no two bodies lose or gain weight in identical ways, there are some things everyone can do to promote weight loss.
In addition to maintaining a healthy caloric input from whole unprocessed foods, whole grains, healthy fats, and endless fruits and veggies, The Physical Activity Guidelines for Americans state that people who want to lose more than 5 percent of their body weight (or keep a significant amount of weight off) need to exercise a minimum of 300 minutes of moderate exercise a week — this includes brisk walks, strength training, cycling, and more.
Whatever you do, try to avoid falling for a fad diet that tries to tell you that you need to consume less than 1,200 calories a day. Doing so may trick your body into starvation mode, and inhibit weight loss while causing endless frustrations. Additionally, remember that a healthy diet paired with physical activity is your best bet, as opposed to relying on just one or the other.
Tracking Meals Promotes Weight Loss:
In addition to working out and eating right, it helps to keep a record of doing so. While it may seem tedious to write everything down, research has shown that taking the time to record meals and workouts leads to better weight loss results overall. According to Harvard Health Publishing, when we take the time to self-monitor our health choices, we’re less inclined to overeat and more inclined to make time for movement. It’s no secret that this can be an eye-opener, as it makes people hyperaware of how they prioritize their health goals.
This idea was built upon in a February 2019 study published in Obesity: A Research Journal, where researchers found that participants who successfully lost 5 to 10 percent of their body weight were the ones who logged into a web portal most frequently to self-monitor their eating. What’s more, they found that the longer you self-monitor, the less time it takes to actually focus on doing so to maintain a successful rate of weight loss.
100-Day Transformation Plan:
Since trying to lose 40 pounds in 100 days may be unrealistic without crash-dieting and potentially putting yourself in harm’s way, it’s important to create a more attainable goal that benefits your health. In fact, according to the Mayo Clinic, one of the six strategies for successful weight loss is to set realistic goals, aiming to lose no more than 1–2 pounds per week — or, in this case, 14 to 29 pounds in 100 days.
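As a quick sanity check, the range quoted above follows directly from the weekly rate; here is a minimal sketch of the arithmetic:

```python
# Quick check of the healthy-range arithmetic above.
weeks = 100 / 7                      # 100 days expressed in weeks
low, high = 1 * weeks, 2 * weeks     # 1-2 pounds per week
print(f"{low:.0f} to {high:.0f} pounds")  # -> 14 to 29 pounds
```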
The best way to do this is to consult with your doctor. However, if you don’t have the means to do that now, you can click over to the National Institutes of Health’s Body Weight Planner. There, you’ll be able to enter your weight, sex, age, height, physical activity level, and an intention to improve that activity level by a specific percentage, to determine just how many calories you should be eating in a day. It will also tell you how many calories to eat to maintain your goal weight once you’ve reached it.
What’s more, using research-backed evidence, it will provide you with tips on how to eat healthy and stay active, while providing additional resources to aide in that process (like ChooseMyPlate).
Click here to get to the official website and lose weight in 100 days
Must Have Equipment for your Summer Baking Classes | We all have had quite some time on our hands to think and discover things we like doing. It’s already summer time, which means most of us are looking for any good reason to stay indoors. Be it our classrooms, restaurants, cafes, theatres or the comfort of our homes. This also happens to be the perfect time to get ourselves busy with a new hobby, there are definitely a few things that you might have been putting off for quite some time. While we are at it, allow us to suggest an exciting course for this summer: Baking!
Yes, you heard it right! Baking is one of the most sought-after areas, and baking courses can be taken up as a hobby or to improve your baking skills and become a professional baker. This is the perfect time to explore baking as our summer break is approaching and you’ll have some extra time to experience new things! Hamstech is offering you a short and informative Baking course that lasts three months and is full of exciting recipes and interactive sessions.
While you pick a date to enroll yourself in one of these courses, we thought we would help you get prepared for your summer baking. To know some basic things about baking, you can start with the necessary equipment and maybe even try one or two simple recipes at home. Baking is all about the proportions and tiny details, so the equipment does play a very important role. Let’s introduce you to some of the must-have baking equipment before you dive into the world of batter!
Weighing Scales
The quantity of ingredients used is also one of the important factors in baking. All baking recipes come with specific weights of ingredients to reduce the chances of a bad bake! Kitchen weighing scales are available in most general stores and can even be bought online. These will help you measure out the ingredients before you add them to the mixture. These weighing scales are super handy and can be easy to carry around.
Measuring Cups and Spoons
Baking is all about proportions and precision, so it’s really important to get your measurements right. There are different sets of measuring cups and spoons available for both dry and wet ingredients. These can be bought at a hypermarket or from a store that sells baking items exclusively. The measuring spoons come in variations like ½ tablespoon, 1 teaspoon and so on, while the measuring cup sets are available in sizes of ½ cup, 1 cup, ¼ cup and so on. Liquids can be measured in cups that come in 1-cup, 2-cup and 4-cup sizes!
Whisk
One can’t think of baking without thinking about eggs! While we do have a lot of eggless and vegan recipes out there, eggs come with their own charm. When there are eggs, there is whisking. Whisking is beating an egg or cream with a continuous rhythm for a required amount of time. Whisks are available in different sizes, and it’s always good to have a small one and a medium-sized one. It’s easy to whisk manually when the quantities are small, but if you are baking for a large number of people, it is better to opt for an electric whisk!
Baking Trays
You’ll definitely need baking trays, if you are thinking of baking! These are also available in a variety of shapes and sizes. The most commonly used baking trays are shaped round, rectangular and square. As you are just starting out, you could start by buying a rectangular tray. These will come in handy while baking brownies, cakes, sheet cakes, breads and a lot of other things!
Spatula
A spatula is very useful for scraping off all the leftover batter from your pan, or, post baking, the little bits of cake remaining on the baking tray. A spatula is also used for folding, which is a type of mixing in baking where you fold the batter in so as not to let any air bubbles stay around. You can scrape your pots clean with the help of a rubber or silicone spatula!
Hand Mixer
A hand mixer might seem like an expensive investment in the beginning, but you will thank us later for having this one ready. Combining your dry and wet mixes can be a pain sometimes, especially if the quantities are large. A hand mixer makes it easier and quicker: you just have to put all your ingredients in, turn it on and wait for it to get your dough ready. As baking already has so many steps, you can get your other components ready while the dough is being kneaded.
Baking classes can be your pick for this summer! Hamstech offers a Baking course that will equip you with all the information, guidance and training that you need to start baking on your own. Why wait when you can get baking already? Visit Hamstech’s website and enroll now!
Start-up Story: BabyChakra | In our hour-long discussion with Naiyya, we deep-dived into the problem she was addressing, the processes she followed when she began the business, and what lay ahead for BabyChakra and the Indian parenting sector.
Sector outlook:
According to industry analysts and insiders, the Indian parenting sector isn’t an easy market to crack. Around 90% of the market is locked up with old-world offline baby product retailers.
However, it has seen massive growth in the last couple of years. Parents no longer want to lean only on the circle of their parents, in-laws and local paediatricians for advice on child rearing. Joint families are giving way to nuclear ones and the rise in disposable incomes, as well as double income households, have led to parents devoting more money to child-rearing. The growing internet penetration across the country has led to awareness among parents (even in remote areas) about proper child-care. More parents are now willing to use premium quality products, services and specialists for their babies. In a market that was reliant on old networks, this shift is clearly visible from the growing number of users on BabyChakra platform.
But this was not the case when Naiyya began her journey. “Back in the day,” Naiyya recalls, “it was a sunrise sector, hardly any brands provided premium baby products. Experts were geographically concentrated and unavailable to parents from rural India. Many parents from the cities too were unaware of a lot of things, for instance, there is something like a lactation specialist.”
Naiyya, the Founder of BabyChakra at an event
A former rainmaker with McKinsey, Naiyya used her learnings and business acumen in the field of maternal and child health to start her venture. While she had the business skills, she lacked user-research and hence, wanted to first understand the user problems and pain points that she could address with her product. For her research, she met around 600 mothers over coffee. She learnt about the issues they faced and the products they used for themselves or their baby. Her research culminated in the launch of a personalized, mobile first app and web platform for mothers and fathers.
Naiyya launched the BabyChakra platform in February 2015 and then there was no looking back. The start-up has raised three rounds of funding in total with the latest one from a group of senior corporate leaders including Equanimity Ventures Fund, backed by Mark Mobius and Rajesh Sehgal; Facebook director Anand Chandrasekaran; OYO chief strategy officer and ex-Lightspeed Ventures investor Maninder Gulati; and Gideon Marks, a Silicon Valley tech investor credited with three NASDAQ public offerings. Existing investors Arihant Patni, Artha India Ventures, and Bharat Rawla were also a part of this funding round.
Investors are keen on investing in the parenting sector. With more than 127 million children under the age of 4 years, nearly 27 million annual births and a fertility rate of 2.72 per woman, India makes for an attractive market. The Indian baby care market is expected to grow annually by 17% in revenues from $14 billion to $31 billion in the 2014–2019 period. Multiple mother-child care portals have recently launched, and offline retailers are slowly disappearing since they pay high rentals, earn slim margins and deal with employee churn. Whereas, online companies build a large inventory and a strong supply chain backbone to deal with the wide array of orders. BabyChakra has been at the forefront in this segment.
Currently, BabyChakra records 5,00,000 users who log in every month. And the team has set ambitious goals for the new year. They want to deploy their funds to expand the platform to more regions across India and increase the number of users and their stickiness within the platform. With the implementation of 10 regional languages, BabyChakra expects its user base to grow about 5–10 times and touch 15 million by the end of FY19. They want to help native and rural users easily understand and communicate on the platform by incorporating Hindi and other regional languages. Naiyya said, “Today, we serve a pressing need for India. More than 70% of our users are families in tier 2 and tier 3 towns. They use regional languages, video and voice to make critical health and wellness decisions for mothers and children.”
by Martino Pietropoli | First thing in the morning: a glass of water and a cartoon by The Fluxus.
Predictions For 2021 In The AML Sector | Written by: Ehi Eric Esoimeme
The year 2020 witnessed the introduction of new rules and regulations by the United Kingdom for virtual asset service providers. For the first time, a Bitcoin “mixer” was penalized by the United States Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) for violating anti-money laundering laws. FinCEN had assessed a $60 million fine against Larry Dean Harmon, the founder, administrator, and chief operator of Helix and Coin Ninja, convertible virtual currency “mixers,” or “tumblers,” for violations of the Bank Secrecy Act (BSA). The anti-money laundering landscape also witnessed a shift from documentation-based procedures to electronic evidence. This was a result of the COVID-19 pandemic.
The year 2021 will likely continue where 2020 left off, with more regulations and enforcement actions. Below are my predictions for 2021 in the AML sector.
Tougher Regulations For Virtual Asset Service Providers
The year 2021 will likely witness the introduction of more rules and regulations for virtual asset service providers, or VASPs, by countries around the world. The United Kingdom and the Cayman Islands have already taken steps in 2020 to regulate and attract persons and entities that deal with virtual assets as a business, and we expect more countries to follow in this direction in 2021. The United Kingdom, for example, issued new rules on 10 January 2020 mandating existing businesses (operating before 10 January 2020) carrying on cryptoasset activity in the United Kingdom to register with the Financial Conduct Authority (FCA) before they begin conducting business. If cryptoasset businesses are not registered with the FCA by 10 January 2021, they will have to cease trading.
Cryptoasset companies regulated by the FCA following the Money Laundering, Terrorist Financing and Transfer of Funds Regulations 2017, as amended (MLRs), and any person who is an officer, supervisor, or beneficial owner in the business, will be subject to the fit and proper requirements under Regulation 58A. On October 31, 2020, the Ministry of Financial Services of the Cayman Islands Government announced that it had commenced a regulatory framework for virtual asset service providers, or VASPs. The VASPs legislation supports the supervision of persons and entities involved in providing business services that use or rely on virtual assets on behalf of another person or entity.
Enhanced Supervision Of Dealers In Precious Metals, Precious Stones Or Jewels
The year 2021 may witness the adoption of an appropriate mix of on-site and off-site supervision to assess the effectiveness of the anti-money laundering (AML) compliance programs of dealers in precious metals, precious stones or jewels, including a review of their risk management practices. The frequency and intensity of the on-site and off-site AML supervision will depend not only on the intrinsic risk associated with the activity undertaken by dealers in precious metals, precious stones or jewels, but also on the quality and effectiveness of the risk management systems put in place to address such risks. These reforms would come at a time when a year-long investigation into the FinCEN Files established that gold companies are being used by criminal enterprises for wealth movement, storage and preservation.
Where any weaknesses or deficiencies are identified during the review of the institution’s AML compliance program, supervisory authorities may require a dealer in precious metals, precious stones or jewels to take timely corrective action, or they may impose a range of sanctions. In practice, a range of sanctions is applied in accordance with the gravity of the situation.
Electronic Verification
The COVID-19 pandemic has forced many reporting entities, including financial institutions and virtual asset service providers (VASPs), to amend their anti-money laundering and counter-terrorism financing (AML/CTF) programs by implementing alternative processes to verify customers’ identities. Reporting entities that relied only on documentation-based procedures now use electronic/digital identity checks, either on their own or in conjunction with documentary evidence. It is expected that many of these reporting entities will maintain these new measures even after the COVID-19 pandemic, and upgrade their anti-money laundering systems and controls to accommodate application programming interfaces (APIs) during the customer onboarding process.
Traditional rule-based Know Your Customer (KYC) technology necessitates significant dependence on manual efforts, particularly in the alert investigation stage, which can be time-consuming, labor-intensive, costly, and error-prone. In order to overcome these considerable and lingering challenges, it has now become imperative that reporting entities leverage new-age smart technology solutions.
A KYC API offers a singular source for data and documentation to support due diligence and help financial institutions focus on decision-making rather than time-consuming and repetitive standard research activities. With a KYC API, companies can obtain information from a wide swath of government, commercial, and individual records. These include phone records, credit bureaus, DMV information, arrest records, utilities, court records, and business data, all of which can be accessed via APIs during the customer onboarding process.
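To illustrate how such API-driven checks can plug into onboarding, here is a minimal sketch; the endpoint, payload fields, and response shape are hypothetical assumptions, not any particular vendor's API:

```python
# Minimal sketch of calling an electronic identity-verification service
# during onboarding. The URL, payload fields, and response format are
# hypothetical; substitute your provider's documented API.
import requests

VERIFY_URL = "https://api.example-kyc-provider.com/v1/verify"  # hypothetical

def verify_identity(full_name: str, date_of_birth: str, document_id: str) -> bool:
    """Submit customer details and return True when the provider reports
    a sufficient level of assurance that the person is who they claim."""
    response = requests.post(
        VERIFY_URL,
        json={
            "full_name": full_name,
            "date_of_birth": date_of_birth,
            "document_id": document_id,
        },
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()
    # Assumed response shape: {"status": "match", "assurance": 0.97}
    return result.get("status") == "match" and result.get("assurance", 0) >= 0.9

if __name__ == "__main__":
    print(verify_identity("Jane Doe", "1985-04-12", "A1234567"))
```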
Before using the KYC API of an organisation for electronic verification, reporting entities should be satisfied that information supplied by the provider is considered to be sufficiently extensive, reliable, accurate, independent of the customer, and capable of providing an appropriate level of assurance that the person claiming a particular identity is in fact that person.
Enhanced Surveillance System
Regulatory actions taken against financial institutions by the Financial Crimes Enforcement Network (FinCEN) in the last three years have shown how a number of financial institutions failed to adequately monitor foreign currency-denominated wire transfers conducted through commodities accounts and retail brokerage accounts due to weaknesses in monitoring that made it possible for an anonymous third party residing in a country known for money-laundering risk to transfer foreign currency into a customer’s commodities account, and for that customer to then transfer these funds to another party in a country known for money-laundering risk, without the financial institutions’ surveillance system reviewing these transactions.
2021 will likely see a large number of financial institutions globally upgrading their anti-money laundering (AML) surveillance monitoring to modern systems that can detect highly suspicious transaction patterns, including possible layering schemes, transactions not commensurate with the business’s purpose, and commingling of funds between two independent check-cashing entities.
For the new monitoring system to be effective, financial institutions must incorporate customer files into their transaction monitoring processes. The type of transaction used by the account is an essential factor in identifying an expected volume of customer activity. This is important because such information is necessary to identify baselines with which to compare actual activity for transaction monitoring.
When a financial institution detects a deviation in a customer’s activity from anticipated activity identified at account opening, the financial institution should not change the anticipated activity in the account but rather it should change the customer’s risk rating, when the customer has been identified as high risk. Changing the customer’s anticipated activity will undermine the purpose of conducting risk ratings and cause the financial institution to apply insufficient transaction monitoring to accounts it should have identified as high-risk and limit the financial institution’s ability to detect red flags of suspicious activity.
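A toy sketch of that principle follows; the deviation threshold and rating labels are illustrative assumptions, not regulatory guidance:

```python
# Toy sketch of the monitoring principle above: when actual activity
# deviates from the baseline set at onboarding, raise the risk rating;
# never quietly rewrite the anticipated-activity baseline itself.
# The 50% deviation threshold is an illustrative assumption.
def review_activity(anticipated_monthly: float, actual_monthly: float,
                    risk_rating: str) -> str:
    deviation = abs(actual_monthly - anticipated_monthly) / anticipated_monthly
    if deviation > 0.5:
        return "high"          # escalate the customer risk rating
    return risk_rating         # baseline stays as recorded at onboarding

print(review_activity(anticipated_monthly=10_000,
                      actual_monthly=65_000,
                      risk_rating="medium"))  # -> "high"
```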
Corporate Transparency And Register Reform
Lack of transparency in shell companies’ formation and operation could be a desired characteristic for particular legitimate business activity. Still, it is also a vulnerability that allows anonymity or opaqueness of customer, ownership, or beneficiary structures. This may give rise to significant misuse of the corporate vehicle for illicit purposes. The Government of the United Kingdom has already taken steps in the year 2020 to reform the corporate sector by introducing identity verification into the incorporation and filing processes run by Companies House. Identity verification will apply to company directors, people with significant control of a company, and individuals filing information. The latest reforms will also require companies to provide full names for shareholders.
At the same time, Companies House will improve the information format by allowing users to view and inspect a full list of shareholders easily. This list will be updated annually at a company’s confirmation date. This will be a marked improvement on the present situation where forming a comprehensive picture of current share information requires research through historic filings. The introduction of this facility may require every organization to file a full, one-off shareholder list. Where a person fails to verify, the usual outcome will be that the intended action cannot proceed: i.e. a director’s appointment will not proceed, a presenter will not be able to file information. Where the failure is suspicious, the information will be shared with the appropriate bodies.
The United Kingdom corporate transparency and register reforms came two days before the International Consortium of Investigative Journalists (ICIJ) and 109 media partners published the results of the 16-month international investigation known as the FinCEN Files which identified thousands of U.K. shell companies linked to suspicious transactions. We are expecting more countries to introduce corporate transparency and register reforms in 2021. Identity verification checks will improve the reliability of information filed with Companies House, adding confidence that only verified individuals can be listed as directors of a company and dissuade misuse of companies and other legal entities. | https://medium.com/sanction-scanner/predictions-for-2021-in-the-aml-sector-8c08d6f719b2 | ['Saliha Ayaz'] | 2020-12-14 06:14:05.535000+00:00 | ['Predictions', 'Money', '2021', 'Money Laundering', 'Compliance'] |
Upcoming Computer Hardware and Tech for 2021 | A year ago, no-one would have been able to foresee how 2020 would leave an indelible mark on our collective memory. The COVID-19 pandemic has had a profound impact on all areas of our lives, from how we work to how we communicate with each other. Throughout this experience, technology has played a big part in helping us cope and adapt to the new challenges.
We’re now approaching the end of the year, and alongside the hopes for a vaccine that can finally ease restrictions to movement, we’re looking at what 2021 will bring for the computing tech sector.
Several exciting announcements have been made for products that promise to revolutionize the computer industry. Below are some of our predictions based on the technologies we look the most forward to.
Exascale Computers
Computer power had grown at an exponential rate for many years. However, during the second half of the 2010s, we've seen a slowdown in the rate of progress.
In 2018, IBM unveiled the world's fastest supercomputer, Summit, which retained the title through 2019 before being overtaken by Japan's Fugaku in 2020. Several challengers are, however, being developed by China, the US, the European Union, Japan, India, and Taiwan. These are all expected to be deployed during the early and mid-2020s. In 2021, China will design and manufacture processors locally (for example, for the Tianhe-3), which could reach a sustained exaFLOP performance. An exaFLOP is 1,000,000,000,000,000,000 floating-point operations per second.
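To put that figure in perspective, here is a quick back-of-the-envelope calculation; the workload size is an illustrative assumption:

```python
# Back-of-the-envelope: what a sustained exaFLOP machine means in practice.
# The workload size below is an illustrative assumption.
EXAFLOP = 10**18                 # floating-point operations per second
workload_ops = 3.6 * 10**21      # a hypothetical simulation needing 3.6e21 ops

seconds = workload_ops / EXAFLOP
print(f"{seconds:.0f} s (~{seconds / 3600:.0f} hour)")      # -> 3600 s (~1 hour)

# The same job on a 200-petaFLOP machine (roughly Summit's class):
print(f"{workload_ops / (200 * 10**15) / 3600:.0f} hours")  # -> 5 hours
```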
Exascale computing could lead to revolutionary advances in a variety of fields thanks to larger-scale simulations. For example, neuroscientists might finally be able to simulate the entire human brain in real time.
Intel’s Rocket Lake Chip
The first quarter of 2021 could see the launch of Intel's 11th-generation Rocket Lake processors, the company's first new microarchitecture for PCs in five years. The announcement was made just a few days ago, on the cusp of AMD's hotly anticipated Zen 3 processor announcement, which followed the next day. The AMD processors are expected to be branded Ryzen 4000 or 5000 and launch later this year.
The details of the new Rocket Lake are not particularly well kept. We know that the processors will support PCIe 4.0 for the first time and are largely thought to come with a new CPU microarchitecture.
The chip’s clock speeds could meet or exceed the company’s current 5.4 GHz peak boost, setting the stage for intense competition between AMD and Intel in 2021. Additionally, MSI has already announced that all of its B450/X470 motherboards will support the new Ryzen CPUs.
Nvidia and AMD Graphics Cards
In September, Nvidia announced the launch of its new RTX 3080. The graphics card, built with enhanced RT Cores and Tensor Cores, is so popular that there are currently waiting lists to get one.
Nvidia is reportedly now readying its new 5nm graphics cards for 2021. Apparently, the company has already booked capacity with chipmaker TSMC. According to DigiTimes, Nvidia and TSMC will produce next-gen graphics products based on the Hopper architecture. Hopper is, according to rumors, the next step after Ampere and is named after American computer scientist Grace Hopper. As well as using TSMC for its 5nm products, Nvidia is reportedly also going to work with Samsung for a smaller amount of orders.
CES 2021, which will take place in January 2021, might see the launch of AMD’s long-awaited RDNA 2-based Big Navi graphics cards. It seems that a next-gen 7nm GPU war between AMD and Nvidia is ensured for 2021, with Nvidia (currently dominating the discrete PC graphics cards arena) executing the first preemptive move.
Artificial Intelligence, Superfast Gaming, and More
Artificial intelligence (AI) is, without a doubt, one of the biggest tech trends now — and it’s definitely going to stay that way for 2021.
During the COVID-19 pandemic, we’ve seen an urgent need to quickly analyze and interpret data about the virus. Governments, academic research centers, and global health bodies came together to develop new ways to gather and work with information. In the coming years, we should expect machine learning algorithms to become increasingly sophisticated and better informed, uncovering a whole new range of solutions. Computer vision, in particular, will continue to play a crucial role in gaining high-level understanding from digital images and videos.
With the new and upcoming range of GPUs (which, at present, already double the performance of their predecessors), 4K gaming could become more affordable in the PC world. Even 8K is a possibility with the next line of cards, like Radeon’s RDNA 2 architecture, which includes hardware-accelerated ray tracing and variable-rate shading.
It’s been reported that several companies are also working on DDR5 RAM for PCs (LPDDR5 can already be found in the Samsung Galaxy S20 smartphone), a hint that the technology is waiting for more compatible CPUs to officially launch at a massive scale.
Overall, 2021 looks like a very promising year for computer tech. With brand new hardware and more intelligent software, we will definitely see an industry boom in the months to come. | https://medium.com/@yisela/upcoming-computer-hardware-and-tech-for-2021-f9980e3f142d | ['Yisela Alvarez Trentini'] | 2020-11-16 13:29:53.436000+00:00 | ['Gpu', 'Gaming', 'Technology', 'Computers', 'Hardware'] |
How To Join Etherconnect (ECC) and How To Earn!
As everyone knows, the market crashed due to COVID and millions of people lost their jobs. Employers are disappearing or not paying enough, and everyone is forced to lower their expenses or work multiple jobs just to keep up. Those times are over with Etherconnect.co, a new-age decentralized finance platform.
Etherconnect helps users and investors earn more profit by staking the ECC coin. All you have to do is invest your Ethereum (ETH) or BNB coin and watch your profits start rising toward your future goals.
The question most users and investors ask: is this platform safe?
There are multiple websites out there running scams on copy-pasted platforms in the name of profits. To reassure its users and investors, Etherconnect has published complete transparency information on its website. Not only is the platform publicly live, but the company is also registered as a real firm in the United Kingdom and Estonia.
The certificates can be cross-checked on the registrars' websites to be sure they are legit. Anyone can do their own research on the platform before investing.
Investments:
There are different options through which you can invest on Etherconnect. The more you invest, the more you will earn as profit. The profit schemes they follow are:
Personally Stake ECC Coin
Once you have joined Etherconnect, you always have the option to withdraw your profit and your investment. You can leave the investment in place until it has completed the staking time cycle, but most people keep withdrawing their profits.
The personal hold bonus is applied to people who don't withdraw their bonus for 24 hours. It increases their profit and is added directly to their earned profit.
Not only does it help you make more money, it does so very easily. The interest keeps accruing every day, and the longer you wait to withdraw your funds, the sooner you complete the whole time cycle.
Etherconnect Staking protocol:
The total bonus is what you earn for as long as you hold Etherconnect in the staking protocol. For example, if you stake 10 ETH on the platform, you will receive 10% monthly — about 1 ETH per month, or roughly 0.033 ETH per day — paid out daily to your wallet.
This is the most basic way to make money. All you have to do is keep your money in the Etherconnect staking protocol on Etherconnect.co, and 10% profit will be added to it automatically. You can watch your profits increase in real time throughout the day.
All of these factors help you reach your future goals quicker so you can keep making money.
Etherconnect Affiliate Programs:
Different affiliate programs have been launched to help you keep making money without investing much yourself. Every time you invite someone to join Etherconnect.co using your affiliate URL, you start earning: 8% at the first level, down to as little as 3% at the second level of affiliates, and so on. The program runs from first-level affiliates all the way down to level 10.
Every time the person you invited brings in someone else using their own referral URL, their link stays tied to yours, and you earn a share of the new member's level earnings as well. This is known as the second affiliation. The second person also receives their 3% on the total deposit amount, while the person at the end of the chain sees no deduction from their own profit.
The last affiliation type is the third affiliation. This is when a friend of your friend invites someone else to join the platform using their link, which is tied to your friend's link, which in turn is tied to yours.
It sounds a little complicated, but it really isn't. You will make a good percentage of their total deposited amount, while the people involved in the first and second affiliations still get their share.
Is there an easier way to make money than this? You already know the answer.
Steps involved:
There are three easy steps involved in earning your money.
First, make the investment: create your account with Etherconnect.co, verify it, and connect your wallet to make a payment in ETH or BNB and deposit it on the platform. Make sure the deposit goes through Binance Exchange, Trust Wallet, MetaMask, Coinbase Wallet, or another supported wallet, which will credit your deposit instantly. (Do not use other exchanges to make the deposit, because it will take 72 hours for the payment to show up in your Etherconnect deposit balance.)
The second step involves keeping an eye on your account. You will be able to see all the earnings you make throughout the day, and you can withdraw them at any time.
The last step is withdrawing the money. It is a one-click process, and you will receive all your profit in your connected wallet right away.
Happy earning!
Contact:
There are multiple ways to contact Etherconnect.co. With official Telegram support, an admin email, social networks, and Telegram groups, you can always be in touch with them. Visit the website for more information: www.Etherconnect.co
Press Contact Email Address
[email protected]
Supporting Link
https://etherconnect.co
Join our Telegram Channel For more Details-
✅Telegram Group: https://t.me/etherconnectcommunity
✅Telegram Channel: https://t.me/etherconnectofficial
✅Twitter: https://twitter.com/Etherconnt
✅Facebook: https://www.facebook.com/Etherconnt | https://medium.com/@etherconnect/how-to-join-etherconnect-ecc-and-how-to-earn-346d47b6ebe7 | [] | 2021-03-22 06:58:58.828000+00:00 | ['Ethereum Blockchain', 'Ethereum', 'Staking', 'Income', 'Decentralization'] |
Best of both worlds: using a secure chip with open source firmware
Within the hardware wallet industry, there are two contrasting approaches to security design: relying on a secure chip (SC) as a black box, or using open-source firmware on a general purpose microcontroller unit (MCU). With the BitBox02, we found a security architecture that allows us to combine the advantages of both approaches.
Why using a secure chip is important
Hardware wallets are designed to keep your private keys secure. Most do a good job against remote attackers, shielding all sensitive information from your regular computer. This makes it hard for malicious wallet applications, malware or targeted remote attacks to steal your coins. Physical security is even harder. Specialized equipment allows an attacker to get access to information on the MCU, for example by reading information directly from inside a chip. Some ways to do this include decapsulating the chip with a laser or using acid and then reading out all the data. There can be some expensive equipment involved, but an attacker does not necessarily require a lab: services like this are available online, in countries where reverse engineering is legal.
General purpose MCUs are designed to focus on performance, functionality and cost above other considerations. They are not built to withstand such physical attacks. Secure chips are designed first and foremost with these attacks in mind, and are made to resist decapsulating, probing, fault injection and voltage glitching, buffer overflows or side-channel attacks.
We think it’s safe to say that physically protecting the data in a hardware wallet without using a secure chip is a lost cause.
The closed-source drawback
Secure chips are not even that expensive, so why doesn't every hardware wallet use them? The main drawback is that secure chips are closed source. Firmware running on a secure chip cannot be released as open source due to enforced non-disclosure agreements.
When it comes to firmware securing your bitcoin, creating random seeds and signing transactions, trusting closed source software that cannot be independently audited is just not good enough. In our opinion, you should not need to trust the manufacturer of your hardware wallet (and all its individual employees) to belong to the “good guys”, diligently finding their own bugs without independent reviews and then actually fixing them.
Best of both worlds
Still, general purpose MCUs are simply not up to the task of keeping a digital secret. In the best of all worlds, we would be able to run open-source firmware on an open-source secure chip. There are projects that aim to create such a chip, like TropicSquare, but no open-source chip is commercially available today. The next best option is to use the advantages of both open-source firmware and secure chip by combining them in a way that
the hardware wallet only runs open-source firmware,
the device is hardened against physical attacks using a secure chip, and
the secure chip does not need to be trusted, as it cannot learn any of the secrets.
The BitBox02 security architecture is designed towards these goals. We use two chips, a general purpose MCU and a secure chip in parallel, both with their unique strengths. Instead of running Bitcoin firmware directly on the secure chip, we run it on the MCU, meaning the code is fully open-source and auditable by anyone. Secrets are also stored on the MCU, but encrypted using multiple keys, including a key stored on the secure chip that can only be accessed using dedicated key derivation functions (KDF).
Compared to only using an MCU, this setup provides additional security features:
reading the encrypted data directly from the MCU is useless in itself
enforcing a delay during each unlock attempt to slow down brute-force attacks
limiting the maximum number of unlock attempts over the whole lifespan
a true random number generator (RNG), without the need to trust it
secure storage of a unique attestation key to ensure only officially signed firmware can be used
Again, we don’t want to trust the secure chip. This is why our security architecture makes sure that the secure chip can never learn any cryptocurrency-related secrets. In the unlikely case that the secure chip is compromised and behaves maliciously, the overall security degrades to the security level of not using a secure chip in the first place, still securing your secrets using the user password and the MCU key.
How does it actually work?
Securing the seed
The master secret, also known as the wallet seed from which all private keys are derived, is encrypted and stored on the flash of the MCU. To gain access to the seed, three individual secrets are necessary, as described in the following illustration.
Of course, if an attacker is allowed to try out millions of device passwords per minute, such an architecture could easily be brute-forced. To limit these unlock attempts, two counters are used:
On the MCU, a counter limits the number of unlocks to 10 consecutive unsuccessful tries before resetting the device. As a second line of defense, the lifetime counter on the secure chip renders the device unusable after ~730’000 unlocks.
To slow down brute-force attempts, the key stretching to get the secret (K) from the secure chip needs to be run 3 times by using the KDF slot, causing a delay.
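To make the flow concrete, here is a minimal Python sketch of the key derivation. This is illustrative only: the function names, the PBKDF2 parameters and the stand-in chip function are assumptions for this post — the real logic lives in the open-source firmware and is written against the actual chip.

import hashlib, hmac

def derive_seed_key(password: bytes, mcu_key: bytes, securechip_kdf) -> bytes:
    # Stretch the device password together with the key stored on the MCU...
    k = hashlib.pbkdf2_hmac('sha256', password, mcu_key, 100_000)
    # ...then run it through the secure chip's KDF slot 3 times. On the real
    # device this is keyed with the chip secret (K), which never leaves the
    # chip, and each pass is deliberately slow — that is what throttles
    # brute-force attempts, on top of the two unlock counters above.
    for _ in range(3):
        k = securechip_kdf(k)
    return k

# Stand-in for the secure chip: an HMAC keyed with a secret only it knows.
CHIP_SECRET = b'never-leaves-the-chip'
chip_kdf = lambda data: hmac.new(CHIP_SECRET, data, hashlib.sha256).digest()

seed_key = derive_seed_key(b'device password', b'key stored on MCU', chip_kdf)
# seed_key is then used to decrypt the encrypted seed stored in MCU flash;
# if any of the three secrets is wrong or missing, decryption simply fails.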
What does that mean for a potential thief? If you used an alphanumeric 8-character password, the thief would need on average over 1000 years to brute force the password, although the counter on the secure chip would brick the device already after about one day of trying. This scenario assumes that the counter on the MCU could be bypassed such that the device was not reset after only the first 10 unlock attempts.
Unlocking the seed
The seed becomes available to the open source firmware once the device is unlocked successfully. If even one of the three required secrets is missing, the encrypted seed stays inaccessible.
Don’t trust the secure chip
In this architecture, trusting the secure chip can be avoided. The chip is not able to learn any cryptocurrency-related secrets or degrade randomness used to create the wallet seed.
Secure device attestation
It is important that you know that your BitBox02 is a genuine device, and not a malicious clone. This is why the secure chip creates a unique attestation key that is then signed by Shift during the BitBox02 factory setup.
The secure chip creates
a private key that cannot be exported from the secure chip, and
the corresponding public key, which is signed by a Shift root key.
When you connect the BitBox02 to your computer, the BitBoxApp sends a challenge (a random number) to the device, which is signed and sent back, together with the corresponding public key previously signed by Shift.
The app is able to verify the whole certification path and shows a warning if this check fails.
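In rough pseudocode, the check the BitBoxApp performs looks like this (a Python sketch using the cryptography package; the device API, message fields and curve are assumptions for illustration, not our actual wire format):

import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

def device_is_genuine(device, shift_root_pubkey) -> bool:
    challenge = os.urandom(32)            # random challenge from the app
    att = device.attest(challenge)        # hypothetical device call
    try:
        # 1. Was the device's attestation pubkey signed by the Shift root key?
        shift_root_pubkey.verify(att.cert_signature, att.device_pubkey_bytes,
                                 ec.ECDSA(hashes.SHA256()))
        # 2. Does the device hold the matching private key?
        att.device_pubkey.verify(att.challenge_signature, challenge,
                                 ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False                      # the app shows a warning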
By using this security architecture, we add and use a secure chip to significantly improve physical security without compromising our open-source values or potentially damaging overall security, even taking worst-case scenarios into account.
Interested in more security transparency regarding the BitBox02? Have a look at our extensive threat model! | https://medium.com/shiftcrypto/best-of-both-worlds-using-a-secure-chip-with-open-source-firmware-c3d008378602 | [] | 2020-10-23 06:25:18.024000+00:00 | ['Cryptocurrency', 'Security', 'Wallet', 'Bitcoin'] |
#UpNext: VR Headsets | Welcome to Virtual Reality.
Right now VR headsets are super early for consumers, but the opportunity seems equivalent to the 1st Generation iPod when it was the size of a brick and people couldn’t initially understand the impact it would have on music and culture. Similar to the way iPod redefined audio, VR headsets will certainly change the way we watch movies, schedule meetings, view museums or pictures, and experience life. | https://medium.com/chris-lyons/upnext-vr-headsets-8870fe4169f1 | [] | 2015-03-02 19:00:38.262000+00:00 | ['VR', 'Virtual Reality'] |
NUTONOMY: A CASE STUDY | This is a case study of nuTonomy, one of the recently launched vehicle-automation companies.
nuTonomy, founded by two MIT researchers, develops state-of-the-art software for self-driving vehicles; it first launched in Singapore in 2016.
nuTonomy's technology, nuCore, allows for flexible and human-like vehicle handling. nuCore's powerful planning engine enables human-like maneuvering performance, and nuCore's patented approach to decision making allows autonomous vehicles to handle even the most complex traffic scenarios.
That was a quick overview of nuTonomy, but let's start with the basics. nuTonomy is an example of vehicle automation using AI. Automation in vehicles has been a hot topic for a long time, because whenever there is talk of the future, self-driving vehicles are the first scenario that comes to mind. When it comes to autonomy, it might come as a surprise that there are degrees to which a car is autonomous. SAE International (the Society of Automotive Engineers) has defined six levels of autonomy for self-driving cars.
Level 0 — No Automation
There is zero automation at this level. The majority of vehicles on the road today are manually controlled. There may be systems in place to help the driver, such as an emergency braking system, but since they don't technically "drive" the car, they don't qualify as automation.
Level 1 — Driver Assistance
These vehicles provide a single automated system for driver assistance, such as adaptive cruise control. With adaptive cruise control, for example, the system manages speed and following distance while the human driver handles the steering and monitors everything else.
Level 2 — Partial Driving Automation
This means the vehicle can control both steering and accelerating/decelerating. The human driver can take control of the car at any time.
Level 3 — Conditional Automation
This is where environmental detection capabilities come into play. The car can make informed decisions, such as overtaking a slow-moving vehicle. However, the driver must remain ready to take over if the system is unable to execute the task.
Level 4 — High Automation
These cars are fully automated under most circumstances. However, a human driver can still manually override the system. nuTonomy falls into this category, as most of its vehicles are taxis.
Level 5 — Full Automation
These cars do not require a human driver and don’t even have steering wheels or acceleration/braking pedals.
HOW nuCore works
AI use case:
To better understand the role of AI in self-driving cars, let's first take a look at the human perspective. A human driver uses sensory functions, such as vision and hearing, to watch the road, road signs, and other vehicles. Years of driving experience help the driver develop a habit of looking out for the little things encountered on the road — a big bump, say, or a pedestrian crossing.
The goal of the automotive industry is to build Level 5 autonomous cars that can drive like experienced human drivers. This means the vehicles need to have sensory functions, cognitive functions and executing capabilities. The process used to achieve this can be divided into three parts.
Part 1: Collection of data from the vehicles
Multiple sensors, cameras, and communication systems are fitted on the autonomous vehicles to generate data of the surrounding environment. This system collects information about everything the autonomous vehicle sees and hears on the road, such as other vehicles, the speed at which every vehicle around it is travelling, road infrastructure, and every object on/near the road. This data is then processed and used to communicate meaningful information (input) to the AI programs.
Part 2: AI Mastermind
The AI programs are stored in the cloud. When the cloud receives the processed data, it starts applying AI algorithms to add meaning to this enormous amount of data. The AI engine is the brain of this entire system and helps in making the right decisions. It is connected to a database that acts as a memory that stores previous driving experiences.
To handle complex situations, nuTonomy uses formal logic, which is based on a hierarchy of rules similar to Asimov's famous Three Laws of Robotics. Priority is given to rules like "don't hit pedestrians," followed by "don't hit other vehicles," and "don't hit objects." Less weight is assigned to rules like "maintain speed when safe" and "don't cross the centerline," and less still to rules like "give a comfortable ride." The nuTonomy car tries to follow all of the rules all the time, but it breaks the less important ones first: if there's a car idling at the side of the road and partially blocking the lane, nuTonomy's car can break the centerline rule in order to maintain its speed, swerving around the stopped car just as any driver would. The car uses a planning algorithm called RRT* — pronounced "r-r-t-star" — to evaluate many potential paths based on data from the cameras and other sensors. (The algorithm is a variant of RRT, or rapidly exploring random tree.) A single piece of decision-making software evaluates each of those paths and selects the path that best conforms to the rule hierarchy.
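To make the idea concrete, here is an illustrative Python sketch. The rule names come from the paragraph above; the weights and the violates() interface are invented for illustration — nuTonomy's actual planner is far more sophisticated.

# Rules ordered from most to least important, with steeply decreasing weights.
RULES = [
    ('dont_hit_pedestrians', 10**9),
    ('dont_hit_vehicles',    10**6),
    ('dont_hit_objects',     10**4),
    ('maintain_speed',       10**2),
    ('keep_off_centerline',  10**1),
    ('comfortable_ride',     10**0),
]

def path_cost(path, violates):
    # violates(path, rule) returns how badly the path breaks the rule (0 if not at all).
    return sum(weight * violates(path, rule) for rule, weight in RULES)

def best_path(candidate_paths, violates):
    # Among the paths proposed by RRT*, pick the one whose violations are least
    # important: crossing the centerline (weight 10) is cheaper than giving up
    # 'maintain speed when safe' (weight 100), which is why the car swerves
    # around a stopped vehicle instead of braking behind it.
    return min(candidate_paths, key=lambda p: path_cost(p, violates))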
Part 3: Executing the AI Capabilities
Based on the decisions made by the AI engine, the autonomous vehicle knows what to do when it encounters a situation on road and hence it drives through traffic without any human intervention and reaches the destination safely.
Sensors use case:
VEHICLE INCLUDES:
6 electronically scanning radars (ESR)
4 short-range radars (SRR)
4 short-range LiDARs
5 long-range LiDARs
1 trifocal camera
1 traffic light camera
2 GPS antennas
Dedicated Short Range Communications antenna (DSRC)
2 computer and software stacks for safety, plus the Aptiv Connected Services data communications system
Recently, nuTonomy partnered with Lyft to test out vehicles in Boston’s Seaport District, providing rides to Lyft users and gaining more traction toward transforming the way people get around.
That's it! Thank you.
#Task_5
Happy learning. | https://medium.com/@iamhimy/nutonomy-a-case-study-b50cd899bc69 | ['Himanshu Yadav'] | 2020-12-01 03:43:38.309000+00:00 | ['Autonomous Vehicles', 'Self Driving Cars', 'AI', 'Nutonomy'] |
Training a Neural Network to identify Pneumonia in X-rays
During the COVID era, especially at the beginning of the pandemic, there was no widespread testing to tell whether a person had the virus. One way doctors were able to guess if a person had COVID was by taking an X-ray of the lungs and looking for signs of pneumonia.
What is Pneumonia?
Pneumonia is an infection that inflames the air sacs in one or both lungs. The air sacs may fill with fluid or pus, which can cause coughing, fever, chills, and difficulty breathing. Organisms, including bacteria, viruses, and fungi, can cause pneumonia.
Signs of potential Pneumonia:
The picture below shows the difference in X-ray appearance between a normal lung and a pneumonic lung. Healthy lungs are full of air, which does not absorb X-rays, so they appear black — as you can see in the normal image. Bones absorb X-rays and appear white. Fluid or tissue shows up as greyish.
Importing Libraries and downloading data:
Let's download the data from Kaggle: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia
We then need to import all of the necessary libraries we'll be using for the project! We'll use Keras to do the meat of the neural-net work.
import matplotlib.pyplot as plt
import seaborn as sns
import keras
from keras.models import Sequential
from keras.layers import Dense, Conv2D , MaxPool2D , Flatten , Dropout , BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,confusion_matrix
from keras.callbacks import ReduceLROnPlateau
import os
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import cv2
For this project I'm using Google Colab, as it has faster training times. Compared to 2 minutes per epoch on my computer, Colab was able to do 12s per epoch.
So we need to create a helper function to actually pull in the data. We'll use cv2 for this, and we'll load the images at 150 x 150 pixels.
labels = ['PNEUMONIA', 'NORMAL']
img_size = 150

def get_data(data_dir):
    data = []
    for label in labels:
        path = os.path.join(data_dir, label)
        class_num = labels.index(label)
        for img in os.listdir(path):
            try:
                img_arr = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
                resized_arr = cv2.resize(img_arr, (img_size, img_size)) # Reshaping images to preferred size
                data.append([resized_arr, class_num])
            except Exception as e:
                print(e)
    return np.array(data, dtype=object) # dtype=object because each entry pairs an image with a label

train = get_data('/content/drive/My Drive/chest_xray/train')
test = get_data('/content/drive/My Drive/chest_xray/test')
val = get_data('/content/drive/My Drive/chest_xray/val')
PREPARING THE DATA:
In image classification we read images in as arrays of numbers. These numbers represent pixel intensity — a value between 0 and 255, with 255 being white and 0 being black.
So a few key points before we actually prepare the data:
We need to pair each image array with its label (pneumonia or not)
In a neural net, there's a lot of math happening in the backend. If we use numbers as large as 255, the computer is forced to work with really big values, which may increase computational cost and slow down the epochs. To rectify this we can divide every pixel by 255; that way we're left with numbers between 0 and 1, with 1 being white and 0 being black.
Lastly, when we feed the images to Keras, we need to reshape our dimensions, so we'll use x_train.reshape(-1, img_size, img_size, 1). The numbers mean [batch_size, height, width, channels]. The -1 means the length of that dimension is inferred, so we don't have to specify it. The 1 is because we're using black-and-white pictures, so each image has only one channel.
x_train = []
y_train = []
x_val = []
y_val = []
x_test = []
y_test = []

for feature, label in train: # iterate over the `train` array we built above
    x_train.append(feature)
    y_train.append(label)
for feature, label in test:
    x_test.append(feature)
    y_test.append(label)
for feature, label in val:
    x_val.append(feature)
    y_val.append(label)

# Normalize the data
x_train = np.array(x_train) / 255
x_val = np.array(x_val) / 255
x_test = np.array(x_test) / 255

# Resize data for deep learning
x_train = x_train.reshape(-1, img_size, img_size, 1)
y_train = np.array(y_train)
x_val = x_val.reshape(-1, img_size, img_size, 1)
y_val = np.array(y_val)
x_test = x_test.reshape(-1, img_size, img_size, 1)
y_test = np.array(y_test)
We also have a big class imbalance and a fairly small dataset, so we'll use data augmentation to expand the training data and help the model generalize. What does this mean? Basically, we'll take the pictures we have now and create altered duplicates of them — cropping, rotating, zooming, and so on. Let's set this up and fit it to our x_train. Notice that we're not going to flip the images, since it matters how we show them to the computer: the right lung always appears on the left side of the image. The computer might get confused =\
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range = 30, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.2, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip = False, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(x_train)
WE'RE READY TO GO!
So we have everything set up. We could go through baseline convolutions and try out different things, but I'll save you the trouble: after playing with some of the parameters, the best results came with 6 convolution layers.
model = Sequential()
model.add(Conv2D(32 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu' , input_shape = (150,150,1)))
# model.add(BatchNormalization())
model.add(MaxPool2D((2,2)))
model.add(Conv2D(64 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.2))
# model.add(BatchNormalization())
model.add(MaxPool2D((2,2)))
model.add(Conv2D(64 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
# model.add(Dropout(0.3))
# model.add(BatchNormalization())
model.add(MaxPool2D((2,2)))
model.add(Conv2D(128 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.3))
# model.add(BatchNormalization())
model.add(MaxPool2D((2,2)))
model.add(Conv2D(256 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.3))
# model.add(BatchNormalization())
model.add(MaxPool2D((2,2)))
model.add(Conv2D(512 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(units = 128 , activation = 'relu'))
model.add(Dropout(0.3))
# model.add(Dense(units = 128 , activation = 'relu'))
# model.add(Dropout(0.3))
model.add(Dense(units = 1 , activation = 'sigmoid'))
model.compile(optimizer = "rmsprop" , loss = 'binary_crossentropy' , metrics = ['accuracy'])
model.summary()
That's our model set up; now we need to actually train it using the model.fit() method.
history = model.fit(datagen.flow(x_train, y_train, batch_size=100),
                    epochs=12,
                    validation_data=datagen.flow(x_val, y_val))
# Augmenting the validation set is unusual; validation_data=(x_val, y_val) is more typical.

predictions = model.predict_classes(x_test)
predictions = predictions.reshape(1,-1)[0]
cm= confusion_matrix(y_test,predictions)
print(classification_report(y_test, predictions, target_names = ['Pneumonia (Class 0)','Normal (Class 1)']))
Our F1-score came out really well, with 92% accuracy! But during the tuning of the model I was more interested in recall. Recall means that out of all the positive cases we had, we were able to predict 96% of them — which is awesome!! The normal class did well too, but not quite as well, at only 85%. That's okay, though, because we'd rather tell people they have pneumonia and start treatment right away than misclassify them and say they don't have pneumonia when they do — prolonging the time to treatment might make things worse.
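By the way, we imported seaborn at the top but never used it — if you want to eyeball these numbers, you can plot the confusion matrix we just computed as a heatmap (class 0 is pneumonia, class 1 is normal):

sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=['Pneumonia', 'Normal'],
            yticklabels=['Pneumonia', 'Normal'])
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()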
Lastly, we can visualize what our convolution layers are doing in the backend.
from keras.preprocessing import image
from keras.models import Model

layer_outputs = [layer.output for layer in model.layers[:50]]
test_image = '/content/drive/My Drive/chest_xray/test_image.jpeg'
img = image.load_img(test_image, target_size=(img_size, img_size), color_mode='grayscale') # grayscale, to match the model's single input channel
img_tensor = image.img_to_array(img)
img_tensor = img_tensor.reshape(-1, img_size, img_size, 1)
img_tensor /= 255.
activation_model = Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(img_tensor)
layer_names = ['conv_block_1', 'conv_block_2', 'conv_block_5', 'conv_block_6'] # panel titles for the four activation maps picked below
activ_list = [activations[1], activations[3], activations[11], activations[13]]
images_per_row = 16
for layer_name, layer_activation in zip(layer_names, activ_list):
    n_features = layer_activation.shape[-1]
    size = layer_activation.shape[1]
    n_cols = n_features // images_per_row
    display_grid = np.zeros((size * n_cols, images_per_row * size))
    for col in range(n_cols):
        for row in range(images_per_row):
            channel_image = layer_activation[0, :, :, col * images_per_row + row]
            channel_image -= channel_image.mean()
            channel_image /= channel_image.std()
            channel_image *= 64
            channel_image += 128
            channel_image = np.clip(channel_image, 0, 255).astype('uint8')
            display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image
    scale = 1. / size
    plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0]))
    plt.title(layer_name)
    plt.grid(False)
    plt.imshow(display_grid, aspect='auto', cmap='plasma')
    plt.savefig(layer_name + "_grid.jpg", bbox_inches='tight')
It’s not the best representation of how the convolution is breaking it down. But we can kind of see what they’re focusing on. | https://medium.com/analytics-vidhya/training-a-neural-network-to-identify-pneumonia-in-x-rays-e05a27982443 | ['Dipta Roy'] | 2020-12-09 16:41:45.074000+00:00 | ['Convolution Neural Net', 'Convolutional Network', 'Neural Networks', 'Deep Learning', 'Pneumonia'] |
How Much of a Red Flag Is Jealousy?
Turns out, a little jealousy is actually a good thing. Sometimes…
As we found out when we asked the MEL staff last year, red flags in relationships run the gamut from ungrammatical texting to being a frequent wearer of boat shoes. In reality, most of us have a range of red flags, from those that scream red to others that are less stop signals than yield signs. But perhaps the most commonly cited red flag is jealousy.
So let’s say you’re in relationship with the jealous type — someone who sneakily reads your text messages when you leave your phone out, or tells you that you need to send them an update every hour on the hour when you’re out with friends — and you’ve decided that, despite their behavior, this is the person for you. How concerned should you be by their application of surveillance-state tactics in monitoring your IG account?
First, it’s important to note that there are two schools of jealousy thought, and some people seem to think that a little bit of jealousy is not such a bad thing. “In the case of romantic relationships, having a ‘jealous partner’ can give some people a ‘charge,’ as it makes them feel sexier knowing that their partner might think someone else is attracted to them,” writes Suzanne Degges-White, a licensed counselor and professor at Northern Illinois University, for Psychology Today.
For that reason, Degges-White suggests that jealousy can be an aphrodisiac that may help you realize how lucky you are to be with the person you’re with when you witness other people interested in pursuing a relationship with your significant other. To her point, a year ago, one redditor complained that his friends think that his girlfriend, “is not jealous enough,” and he’s not sure if he should be concerned (sic, obviously, throughout):
“Anna doesn’t care about certain things, yeah, but the girl has boundaries. She doesn’t care if I like a girls’ pic (and honestly, I don’t even do this to anyone besides my female friends that I value platonically and she knows this), yeah, but she would definitely have a problem with me commenting, ‘damn, you’re sexy’ under another girls’ picture. She’d definitely give a shit if I cheated on her, etc. but she is open to things like threesomes, which is something else that my friends don’t understand.”
But as per another redditor responding to his question, jealousy isn’t a sign of love or affection: Instead, it’s just a reflection of anxiety and insecurity.
That brings us neatly to the more common assessment of jealousy as a form of hemlock for what might be an otherwise healthy relationship. According to a 2016 Bustle article, there are several signs to watch out for when it comes to a jealous partner. One of them is wanting to be with you all the time. "It can feel romantic and passionate when your partner wants to spend every waking moment alone with you, especially when love is new, but that kind of intense isolation is often a red flag," Esther Boykin, a marriage and family therapist and relationship expert, told the women's site. "Unhealthy jealousy rarely looks unhealthy in the beginning; it often looks loving, passionate and exciting — they can't get enough of you, they love you so much that they just want you all to themselves," she says.
So the question then becomes, how big of a red flag is jealousy early on, on the scale of “let’s talk this through” to “I’m dating an axe murderer”?
“Little pangs of jealousy aren’t an issue,” says Amy Kim, a clinical psychologist in L.A. “But if someone is acting on their jealousy and invading your privacy, then it’s definitely an issue that has to be talked about very openly.” In other words, it depends both on the scale of the feelings of jealousy, as well as a couples’ willingness to work on the issue. As discussed in an article in Luvze, a couple who communicates about their feelings of jealousy is more satisfied in their relationships than those who act distant or avoidant.
Still, Kim warns that there are two different types in play here: 1) The type where the person feeling jealous has good reason for feeling the way they do (say, you’ve cheated on them previously); and 2) the type without any good reason. Kim unsurprisingly associates both forms of jealousy with a fundamental lack of trust, either with regard to their significant other or with themselves. “Insecurity is a lack of trust within yourself,” says Kim. “If you trust and accept yourself, you’re less likely to be prone to jealousy.”
What worries us, of course, is the possibility of unfounded jealousy turning into something even worse: The jealousy-fueled violence that’s a common trope in thriller movies.
Despite this trait often being ascribed to female characters, in the real world, it’s more often men who tend toward such acts, like this harrowing case from earlier this year in which a man murdered and then dismembered his girlfriend. “The man, 28-year-old boxing coach and former mixed martial arts fighter Gary Chu, is believed to have murdered Ms. Huang I-Min for lying to him about her virginity, reported local news outlets. The pair reportedly met on the mobile dating platform Tinder,” reported The Straits Times.
How do you tell if the jealousy will stop at simple mistrust and turn into outright craziness? One sign that jealousy may be leading toward an abusive relationship is when the jealous party starts to become possessive or controlling. “For example, the jealous person might say the partner can’t hang out with members of the opposite sex on their own,” Dani Bostick, a counselor and member of the American Counseling Association, told Domestic Shelters. “If both partners agree to that, that may be healthy. But if it’s one partner telling the other not to do that but he or she still can, that’s a red flag.”
While there’s no exact data on how common it is for jealousy to turn violent, a 2012 study from Ohio State University on couples in abusive relationships found that a long-term dispute regarding infidelity permeated nearly every relationship the researchers looked at, and was most often the catalyst for violence. “Even if it didn’t trigger the violent event, it was an ongoing stressor in nearly all of the 17 couples we studied,” one of the researchers told Live Science.
So what causes these unfounded bouts of jealousy, if you yourself have given no indication that you’re likely to cheat? For some, it’s simply a case of jealousy spilling over from a past toxic relationship. Such a situation is put forward by one panicked redditor who thinks her boyfriend might fall in love with a girl who’s better than her anytime he’s away from her. “Dealing with relationship anxieties and feelings of inadequacy in new relationship. Scared he’ll realise he can do better while away at school. Lack of adequate daily contact and him off partying causes fear. Though he’s never given me any reason to doubt him. How to get these under control?” she writes.
While her reasoning isn’t completely unfounded — her boyfriend isn’t great at staying in touch while he’s away from her — according to Kim, this is a common form of unprovoked jealousy that can ruin a relationship if both parties don’t talk openly about it. “It requires self-honesty on the part of the person who’s jealous,” says Kim. “But it’s also important for both parties to communicate openly about issues that cause them to feel anxious and jealous.” Additionally, she says that the jealous person has to be willing to examine their issues with individual therapy in order to understand what the feelings of jealousy are about. “Because that’s a separate issue,” explains Kim.
So again, like most other red flags, jealousy isn’t a deal breaker per se, but it’s definitely a warning sign that shouldn’t be ignored. Unlike boat shoes, which is a flagrant foul and should be enough to send you running for the hills.
Andrew Fiouzi is a staff writer at MEL. He last wrote about how he was part of a high school cheating mafia.
More Andrew: | https://medium.com/mel-magazine/how-much-of-a-red-flag-is-jealousy-d80d85ec90cc | ['Andrew Fiouzi'] | 2018-10-02 17:01:03.256000+00:00 | ['Breakups', 'Jealousy', 'Love', 'Dating', 'Relationships'] |
Pinterest COO Settlement Ignores Black Women Who Blew Whistle Back in June | Pinterest COO Settlement Ignores Black Women Who Blew Whistle Back in June
In what was likely seen as a nice step forward for women fighting gender discrimination in the workplace, particularly at the often all-male C-suite level, former Pinterest chief operating officer Françoise Brougher received a $22.5 million settlement, the largest of its kind to date. Unfortunately, the settlement highlights the difference between how Black and White women are received when raising issues of discrimination and inequality in the workplace.
Two months before Brougher, a White woman, filed her suit, two Black women, Ifeoma Ozoma and Aerica Shimizu Banks, came out on social media with their claims of racial and gender discrimination. Ozoma and Banks’ allegations inspired more women to share their experiences at Pinterest, and Brougher herself credits the pair for giving her “the courage to come forward.” But Ozoma and Banks didn’t receive the same treatment as Brougher. Initially, Pinterest came forward and claimed the two women had been treated fairly before eventually admitting that the company needed to “do better.”
Now, the company is receiving criticism once again for how it treated the initial claims from two Black women versus the seemingly swift manner in which Brougher’s allegations were treated — Brougher’s case was settled in just four months when cases of this kind typically take years to be resolved.
“This week, we saw, yet again, another large corporation display clear inequitable treatment of Black employees in Silicon Valley,” says Jade Magnus Ogunnaike, senior campaigns director at the racial equity organization Color of Change. “Pinterest’s handling of Françoise Brougher’s lawsuit — paying out $22.5 million — compared to how the company practically ignored Ifeoma Ozoma and Aerica Shimizu Banks after they called out intense discrimination, is blatant racism in practice.” | https://momentum.medium.com/pinterest-coo-settlement-ignores-black-women-who-blew-whistle-back-in-june-d4bcb4b649e2 | ['Tracey Ford'] | 2020-12-21 06:32:20.682000+00:00 | ['Racism', 'Discrimination', 'Sexism', 'Business', 'Pinterest'] |
A confessional prayer | March was a low month for me. I don’t want to go as far as to say it was a “dark” month — but it was definitely a time of soul searching and soul trembling.
I spoke at New Life Lakeview & Lincoln Park this past Sunday on James 1:1–8 (Here’s the message if you want to listen: LINK). I told everyone during that time that I’m literally living out the James passage in my day to day right now.
Which is good. Difficult, but good.
About 3 weeks ago I had a night of tearful prayer. I drove around for about 2 hours weeping tears that have probably been stored up for years. I’m not a big crier, so this was huge for me. I ended up sitting on the steps of Holy Name Cathedral, late at night in the city, journaling, getting things out and down.
Here’s part of what I poured out to God:
I feel broken…so I give the pieces to You.
I feel lost…so I give my way to You.
I feel shame…so I give my stains to You.
I feel alone…so I give my need for connection to You.
I feel confused…so I give my contemplations to You.
I feel hopeless…so I give my aspirations to You.
I feel anger…so I give my wounds to You.
This is my genuine sacrifice.
This is my genuine, living sacrifice.
This is all I got.
This is what’s real.
And I give it to You.
That’s where I was three weeks ago. In a different place now — but I needed to go through that to get here today. And that’s a good thing.
Difficult, but good.
Just felt like someone would maybe be encouraged by hearing a little of what’s been going on inside of me.
If you’re in a similar place, send me an email. I’d love to hear and share. | https://medium.com/processing-life/a-confessional-prayer-6966c9c7734d | [] | 2016-07-01 03:26:15.784000+00:00 | ['Pain', 'Authenticity', 'Mess', 'Prayer', 'Faith'] |
The Art of War, Sun Tzu — Summary
Art of War by Sun Tzu is one of the world’s most revered books on strategy. Below is a summary of the most critical takeaways from this book.
Know which battles to fight.
There should be an objective for the war, and you should evaluate whether each battle you choose helps you inch towards it. You don't have to fight every battle. Pick and choose them wisely.
Each and every attribute of your competitors needs to be evaluated meticulously and well in advance to understand their strengths and weaknesses in comparison to yours. If the enemy is really strong compared to you, then you should choose not to fight.
Only enter battles you know you can win. The victory should be won even before your first move. Secure yourself and wait for the opportunity for victory.
2. Know how to deceive the enemy.
Feign weakness and disorder while you possess strength and discipline of the highest order. Maintain secrecy and the element of surprise while you make a move. Deception should be of the highest standard.
Have a clear plan laid out for every move and countermove of your competitors; only then can you make your enemy do what you want them to do. You need a high degree of situational awareness, built on understanding the whole environment of the competition. The full list of forces acting on the competition, and their impact, should be predicted in advance.
Have your own unique plan that the enemy simply can't anticipate. Take them by surprise and hit them at the most appropriate time — when they least expect it — and at the place that inflicts the most damage.
3. Know your team.
Establish a deep connection with your team and treat them genuinely; they will stand by you till death. There should be a clear 'unity of command', with no conflicting orders. Rewards and punishments should be administered judiciously and on time.
Have a clear mode of communication and communicate your intentions clearly and in good time. Instil in them a sense of discipline, urgency, and belonging in whatever they are expected to do.
Ensure that your team is provided in time, with the best of the trainings and the tools that complement their skills. This will guarantee that they are better than the competitors, so that they can strike effectively and quickly, making conflict decisive, in your favour. | https://medium.com/@meetdaya/art-of-war-by-sun-tzu-is-one-of-the-worlds-most-revered-books-on-strategy-6a29fa2fb988 | ['Daya Kd'] | 2019-05-17 16:04:17.020000+00:00 | ['Competition', 'Leadership', 'Teamwork', 'Strategy'] |
Why vs. What. Here’s the challenge. Agile is founded… | Whenever a client wants us to start a new venture with them, to initiate a project together, we work on defining business requirements (apart from making sure there is a business case in the first place). Traditionally, Business requirements form the “what” of the project. They provide an explanation on what should be built to achieve the expected return on investment. It is uncommon to find a detailed, technical description there. From a technical perspective, finding a flowchart or field definitions in a BRD (Business Requirements Document) is rare. As the document is owned by the client and often prepared by her, as professionals on the supplier’s side, we often expect from our customers not to cross the thin line between the “what” and the “how”. Figuring out the “how” is the supplier’s job. It’s the space where he can analyze the work at hand, translate the more general, business perspective into functions, data structures and flows, paired with interfaces required to interact with the future solution. Among others. This is how functional requirements are born. They form a pair — business “reqs” and functional “reqs,” they should directly relate to one another, where business requirements are the “what” and the functional requirements — the “how.” In a way, the BRD is part of a mandate to start a project, whereas the FRD (Functional Requirements Document) — a contract describing how the project result is to be built.
Here's the challenge. Agile is founded on the premise that suppliers and customers are able to work out requirements collaboratively in the form of user stories, maintained in a backlog of remaining work, detailed iteratively as needed — based on priority and proximity — while aligning in tight iterations with frequent feedback. In theory this is faster, as there is no dichotomy. In practice… I have yet to see this work well.
Granted, agile requires sufficient organizational maturity with the new artifacts, events, and roles — Product Owners able to own the backlog, empowered and knowledgeable enough to detail requirements and decide on priorities.
But I haven’t seen many Product Owners like that. Plus, with increasing complexity of the business landscape and progressing specialization, they simply cannot be know-it-alls, right? Even having sufficient business acumen, they won’t be able to drill down to technical levels detailed enough to provide a sound basis for development work.
How do we deal with this? From my perspective this is Agile’s uncharted land, that’s where “there be dragons.” Pre-sprints, parallel analytical sprints are just means to patch this up. The issue possibly requires several “whys” before we reach the root cause. My bet is that complete functional requirements are a necessity that cannot be neglected, and that they should form a response to whatever the business requests. Having agreed functional requirements we can expect viable “work packages” which can be decomposed, estimated, and developed with a higher degree of accuracy. | https://medium.com/project-manager/whenever-a-client-wants-us-to-start-a-new-venture-with-them-to-initiate-a-project-together-we-e44752ad42c0 | ['Lech Ambrzykowski'] | 2020-12-14 08:26:05.436000+00:00 | ['Agile', 'Projects', 'Project Management'] |
6 Ways To Use Technology To Improve Your Relationship
Don't think that you can use technology to improve your relationship? Think again.
People blame technology for a lot these days.
I see endless news headlines with titles like: “Digital communication is making us more disconnected”, “We need more face to face and less Facebook”, and “People texting instead of talking.”
Whatever excuses people give themselves to hide behind, I don’t buy into any of it. And even if the majority of people ARE letting their relationships suffer by letting technology interfere with their love lives, it’s irrelevant… because YOU are not most people (otherwise you wouldn’t be reading this article).
I can’t remember where I first heard this… but one of my favourite quotes/concepts is the idea that “You can’t curse all of the red lights if you aren’t also thanking the green ones.” So while technology may certainly bring up some small roadblocks in how people relate to each other on an intimate level, there are also some massively useful ways that it can help us in love as well (if we use it correctly).
Not only did I meet my girlfriend using technology (not online dating, but social media… and that’s a story for another day), but our relationship is improved by technology on a daily basis.
You can either embrace the change all around us, know when and how to use it, and manipulate it so that it works for your relationship, or you can be unintentional about it and let it become a huge mysterious thorn in your side.
I’m going to ignore the more obvious ones (like sending flowers from anywhere in the world, Skype’ing when you’re physically apart, or meeting your ideal partner via online dating) and only tell you about the technological hacks that I personally use in my intimate relationship.
Here are the six highest leverage ways I have found to use technology to improve your relationship.
https://www.reddit.com/r/Event4Stream/comments/k233q4/officiallivestream_oregon_vs_oregon_state_live/
https://www.reddit.com/r/Event4Stream/comments/k233qc/official_livestream_oregon_vs_oregon_state_2020/
1. Shared Relationship Bucket List
My girlfriend and I have a shared bucket list where we periodically add things that we want to do.
It goes like this…
– We brain dump things like date ideas, vacation ideas, or little things that we want to learn or experience together into the document
– When we do one of the things we either cross it out (meaning we’ve done it already but would do it again), or remove it from the list (meaning we did it and wouldn’t do it again).
Having this list to refer to also means that no matter how tired either one of us is at the end of a long day at work, we always have a big list of ready-made creative solutions to the age-old question of “What do you feel like doing today/tonight/this coming weekend?”
It takes less than a minute to set up an online shared document and having it will pay dividends into your overall relationship satisfaction.
2. Use A Shared Calendar Purely For Your Relationship
I recently wrote about how important it is to put your relationship into your calendar, and I don’t think its importance can be overstated.
Whether you’re putting in entries to remind yourself to have an extended sex date, a unique date night, or a connection or communication session, your calendar is one of your best friends when it comes to relationship intentionality.
Show me your calendar and I’ll show you your priorities. Are you prioritizing your relationship enough?
3. Use Text Messaging For Good
Most people have jobs that don’t allow for extended phone conversation breaks with their lover. That’s where texting comes in.
Texts are short, sweet, and can be received and responded to whenever you each get the chance.
Sending a short, sweet “Thinking of you xo” or “Good morning beautiful” text mid-day can go a long way in reminding your partner how much you love and care for them.
4. Sexy Picture Messages
Simply put… sexting is digital foreplay.
Take tip #3 up a notch by sending sexual or scantily clad images to your partner of things that you know turn them on.
And if you’re not sure what turns them on? You’ll probably want to read this.
5. Why “Facebook Official” Is Like A Psychological Mini-Marriage
There’s this super-cool sales psychology term called ‘commitment and consistency’. It basically means that what we, as humans, commit to, we go to great lengths to then act consistently with. For example, if you tell five of your closest friends that you’re going to start shifting your dietary choices in a certain direction, then you’re more likely to follow through compared to if you had just kept it to yourself (because you don’t want to look like a liar/failure in your friends’ eyes).
This is a part of the power of a marriage ceremony. You and your partner spend a small pile of money on your wedding and swear in front of all of your friends and family that you’re going to love each other forever. Now, psychologically speaking, because you’ve made such a big hoopla about your love in front of so many people, you’re going to be that much more likely to want to work through your sticking points in the relationship when times get tough. Because you not only made a vow to your partner, but you did it in front of a ton of people that you love and respect. You carry the weight (in an advantageous way) of your promise with you.
The same thing happens with making your relationship “Facebook official”.
By digitally standing up in front of your collective hundreds (or thousands) of Facebook friends, you are then more likely to really lean into the relationship and earn your way out if the relationship is to end at all. There’s the added social pressure of not wanting to go back on your word.
So making your relationship Facebook official is basically like getting pre-married these days.
6. Use Your Digital Notepad To Your Advantage
My brain is funny. I have a really awful memory for most things.
Want to know what I had for dinner on a certain vacation ten years ago? I can tell you that. Want to hear twenty tips to cure erectile dysfunction? I can tell you that too. But if you expect me to remember when my girlfriend is getting her haircut next, you’ve got another thing coming.
Enter: technology!
I use an ongoing digital notepad to remind myself of things that my partner said, did, mentioned that she liked, or mentioned that she might be interested in doing one day.
She mentions she’s getting her hair cut next Thursday? It’s going in the notepad (and on my calendar so I can comment on how amazing she looks). She tells me that she’d love to go to Paris one day? That goes in the notepad too. She tells me that she doesn’t like cooked mushrooms or raw tomatoes? You guessed it… going in the pad.
I’m like that guy in the movie Memento. I leave notes for myself all the time to make sure that I don’t forget the things that she tells me. The weakest pen is stronger than the best memory. So if your partner tells you something that you think would be worth remembering for future use, I would strongly recommend you write it down or type it into a digital notepad as soon as you can to ensure your relationship’s health.
A bonus effect of the digital notepad exercise? You will NEVER be stumped for gift ideas ever again. People drop hints about things that they want all the time (even unconsciously). So if you always have a running list of things that your partner loves/wants/adores, then you’ll never be stuck when their birthday or the next major holiday is coming up around the corner.
That’s it for today. I hope you enjoyed this post and you can use one or many of these tips in your relationship starting today.
Dedicated to your success,
Jordan | https://medium.com/@alisamahjabin/s6-ways-to-use-technology-to-improve-your-relationship-443d395c0d2a | ['Alisa Mahjabin'] | 2020-11-27 16:32:03.864000+00:00 | ['Technology', 'Relationships', 'Improvement', 'Life', 'Love'] |
Building an Enterprise-Level Real-Time Data Lake Based on Flink and Iceberg
By Hu Zheng (Ziyi), Technical Expert at Alibaba
Apache Flink is a popular unified stream-batch computing engine in the big data field. Data Lake is a new technical architecture in the cloud era. In this article, we will discuss the following question, “What will happen when Apache Flink meets the Data Lake?”
Background and Introduction to Data Lake
What is the data lake? Generally, we maintain all the data generated by an enterprise on one platform, which is called the Data Lake.
Take a look at the following figure. The data sources of this lake are various. Some may be structured data, some may be unstructured data, and some may even be binary data. There is a group of people standing at the entrance of the lake, using equipment to test the water quality. This corresponds to the streaming processing operations on the Data Lake. There is a batch of pumps pumping water from the lake, which corresponds to the batch processing in the Data Lake. There are also a group of people fishing in the boat or on the shore. This represents data scientists extracting data value from the Data Lake through machine learning.
A data lake has four main features:
1. Storage of raw data from a variety of sources.
2. Support for multiple computing models.
3. Complete data management capabilities — various data sources can be accessed, different data sources can be connected, and schema management and permission management are supported.
4. Flexible bottom-layer storage — cost-effective distributed file systems like S3, OSS, and HDFS are adopted, and the data analysis requirements of the corresponding scenarios are met with specific file formats and caches.
What is the typical open-source Data Lake architecture? The architecture diagram is divided into four layers:
1. The bottom layer is the distributed file system. S3 and OSS tend to be used more by users on the cloud because they are much cheaper; non-cloud users generally use self-maintained HDFS.
2. The second layer is the data acceleration layer. The data lake architecture is a complete storage-compute separation architecture. If every data access had to remotely read the data from the file system, the performance overhead and costs would be high. If frequently accessed hotspot data can be cached locally on the computing node, hot and cold separation is implemented naturally; good local read performance is achieved, and the bandwidth for remote access is saved. At this layer, the open-source Alluxio or Alibaba Cloud JindoFS is often selected.
3. The third layer is the Table Format layer. It encapsulates a batch of data files into a business table and provides table-level semantics, such as ACID, snapshots, schemas, and partitions. It generally uses open-source projects such as Delta, Iceberg, and Hudi. Some users consider Delta, Iceberg, and Hudi themselves to be data lakes, but these projects are only part of the data lake architecture. Since they sit closest to users, many low-level details are hidden from them, which is what causes the misunderstanding.
4. The top layer consists of the computing engines for different computing scenarios. Open-source engines include Spark, Flink, Hive, Presto, Hive MR, and others. These computing engines can access tables in the same data lake at the same time.
Introduction to Typical Service Scenarios
What are the typical application scenarios for the combination of Flink and Data Lake? Here, the Apache Iceberg is our Data Lake model by default when discussing business scenarios.
First, a typical Flink + Iceberg scenario is to build a real-time Data Pipeline. A large amount of log data is imported into message queues, such as Kafka. After the Flink streaming computing engine executes ETL operations, the data is imported to the Apache Iceberg original table. In some business scenarios, analysis jobs are completed directly to analyze data in the original table, while in other business scenarios, the data needs to be purified. Thus, a new Flink job is created to consume the incremental data from the Apache Iceberg table. After being processed, the data is written to the purified Iceberg table. Now, the data may also need to be aggregated in other services. Then, the incremental Flink job can be started on the Iceberg table, and the aggregated data is written to the aggregation table.
Some people may think that this scenario can also be implemented through Flink + Hive. It is true. However, the data written to Hive is more for data analysis in the data warehouse rather than incremental data pulling. Generally, Hive writes incremental data for more than 15 minutes with partition as the unit. The long-term high-frequency Flink writes may cause partition expansion. Iceberg allows one minute or even 30 seconds of incremental data writing, which can improve the real-time feature of end-to-end data. The updated data is available on the upper layer analysis job, and the downstream incremental job can read the updated data.
The second typical analysis scenario uses Flink + Iceberg to analyze binlogs of relational databases, such as MySQL. On the one hand, Apache Flink supports CDC data parsing natively. After a piece of binlog data is pulled through the ververica flink-cdc-connector, it is automatically converted to four types of messages (INSERT, DELETE, UPDATE_BEFORE, and UPDATE_AFTER) for further real-time computing. Flink Runtime can recognize these four types of messages.
On the other hand, Apache Iceberg has fully implemented the equality delete feature. After the to-be-deleted records are defined by users, they can be written directly to the Apache Iceberg table to delete the corresponding rows. This feature is designed to realize the streaming deletion of the Data Lake. In future versions of Iceberg, users will not need to design any additional business fields. The streaming importing of binlog to Apache Iceberg can be realized with only a few lines of code. The Pull Request of the community has provided a prototype for Flink to write CDC data.
After the CDC data is migrated into Iceberg, the common computing engines will also be connected, such as Presto, Spark, and Hive. All these engines can read the latest data in the Iceberg table in real-time.
The third typical scenario is the stream-batch unification of near real-time scenarios. In the Lambda architecture, there is a real-time procedure and an offline procedure. Generally, the real-time procedure consists of components, such as Flink, Kafka, and HBase, while the offline procedure consists of components, such as Parquet and Spark. Many computing and storage components are involved, resulting in high system maintenance and business development costs. There are many scenarios with fewer real-time requirements. For example, minute-level processing is also allowed. These scenarios are called near real-time scenarios. So, what about optimizing the commonly used Lambda architecture with Flink + Iceberg?
As shown in the preceding figure, we can use Flink + Iceberg to optimize the entire architecture. Real-time data can be written to the Iceberg table through Flink. In the near real-time procedure, incremental data can still be calculated through Flink. In the offline procedure, a snapshot can also be read for global analysis through Flink batch computing. By doing so, corresponding analysis results can be read and analyzed by users in different scenarios. With these improvements, Flink is used as the computing engine and Iceberg as the storage component uniformly. This reduces the maintenance and development costs of the entire system.
In the fourth typical scenario, full data from Iceberg and incremental data from Kafka are used to bootstrap a new Flink job. Assume that an existing streaming job is running online. One day, a business team comes and says they have a new computing scenario and need a new Flink job. They need the job to run the historical data from last year and to connect to the Kafka incremental data being generated. What should we do at this time?
The common Lambda architecture can also be used. The offline procedure is written to the data lake through Kafka → Flink → Iceberg. Due to the high cost of Kafka, the data for the last seven days can be retained. The storage cost of Iceberg is low, so it can store the full historical data, which is split into multiple data partitions by a checkpoint. When starting a new Flink job, the data from Iceberg needs to be pulled from Iceberg and then connected to the data from Kafka.
The fifth scenario is similar to the fourth scenario. Similarly, in the Lambda architecture, due to missing events or arrival sequence in the real-time procedure, results at the streaming computing end may not be accurate. In this case, the real-time computing results should be corrected based on the full historical data. Iceberg can play this role well because it can manage historical data with low costs.
Why Apache Iceberg?
Let’s go back to a question left over from the previous section. Why was Apache Iceberg chosen among many open-source data lake projects in Flink at that time?
At that time, we investigated Delta, Hudi, and Iceberg and wrote a research report. We found that Delta and Hudi were too deeply bound to the Spark code path, especially the write path. The two projects were more or less initially designed with Spark as the default computing engine. However, Apache Iceberg aimed to make a universally used Table Format. Therefore, it decouples the computing engine from the underlying storage system perfectly and allows easy access to diversified computing engines and file formats. It implements the Table Format layer in the Data Lake architecture correctly. We also think it is easier for Apache Iceberg to become the de facto open-source standard for the Table Format layer.
In addition, Apache Iceberg is developing towards the data lake storage layer with stream-batch unification. The design of manifest and snapshot effectively isolates changes in different transactions, making it convenient for batch processing and incremental computing. Apache Flink is already a computing engine with stream-batch unification. The long-term plans of Iceberg and Flink are perfectly matched. In the future, they will work together to build a data lake architecture with stream-batch unification.
We also found that there are a variety of community resources behind Apache Iceberg. Netflix, Apple, Linkedin, Abroad, and other companies all have PB-level data running on Apache Iceberg. In China, enterprises, such as Tencent, also run huge amounts of data on Apache Iceberg. In their largest business, dozens of TB of incremental data are written into Apache Iceberg every day. Apache Iceberg also has many senior community members with seven Apache PMCs and one VP from other projects. As a result, the requirement for reviewing code and design is very high. A relatively large PR task may return more than 100 comments. In my opinion, all these things guarantee the high quality design and code of Apache Iceberg.
Based on the considerations above, Apache Flink finally chose Apache Iceberg as its first data lake access project.
Implementing Streaming Migration to the Data Lake Using Flink and Iceberg
Currently, streaming and batch lake migration of Flink has been implemented in Apache Iceberg 0.10.0. The Flink batch tasks can query for data in the Iceberg data lake. For more information about how Flink reads and writes Apache Iceberg tables, please see the Apache Iceberg documentation.
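For a flavor of what this looks like in practice, here is a minimal sketch using PyFlink's SQL interface. The catalog name, warehouse path, and table names are invented for the example; the iceberg-flink-runtime jar is assumed to be on the classpath, and the exact connector options depend on your Iceberg version, so treat this as a starting point rather than a verbatim recipe.

from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming table environment (builder-style API)
settings = EnvironmentSettings.new_instance().in_streaming_mode().build()
t_env = TableEnvironment.create(settings)

# Register an Iceberg catalog backed by a Hadoop warehouse (path is hypothetical)
t_env.execute_sql("""
    CREATE CATALOG iceberg_catalog WITH (
        'type' = 'iceberg',
        'catalog-type' = 'hadoop',
        'warehouse' = 'hdfs://namenode:8020/warehouse',
        'property-version' = '1'
    )
""")

# Continuously sink rows from a (pre-registered) source table into Iceberg
t_env.execute_sql("""
    INSERT INTO iceberg_catalog.db.sample_table
    SELECT user_id, event_time, payload FROM source_table
""")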
The following briefly explains the design principle of the Flink Iceberg Sink. Iceberg uses the optimistic lock method to commit a transaction. When two people submit changing transactions to the Iceberg at the same time, the latter transaction will keep trying to submit. The latter party reads metadata and submits the transaction after the first party submits the transaction successfully. Therefore, it is inappropriate to submit transactions using multiple concurrent operators, which may result in a large number of transaction conflicts and retrying.
We split the Flink write process into two operators to solve this problem: IcebergStreamWriter and IcebergFilesCommitter. IcebergStreamWriter is mainly used to write records to the corresponding avro, parquet, and orc files and to generate a corresponding Iceberg DataFile to be delivered to downstream operators. IcebergFilesCommitter is mainly used to collect all DataFiles of a checkpoint and submit the transaction to Apache Iceberg, completing the data write for that checkpoint.
After learning the design of the Flink Sink operators, the next question is, How can we design the state of these two operators correctly?
The design of IcebergStreamWriter is relatively simple. Its main task is to convert records into DataFiles, so there is no complex state that needs to be designed. IcebergFilesCommitter is a bit more complex: it maintains, for each checkpointId, the list of DataFiles produced in that checkpoint (conceptually a map from checkpointId to a DataFile list). Even if the transaction of a checkpoint fails to be submitted, its DataFiles are still maintained in the state, so the data can still be submitted to the Iceberg table through subsequent checkpoints.
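That bookkeeping idea can be sketched in plain Python (a conceptual illustration only, not Flink's actual operator API):

# Group DataFiles per checkpoint; anything not yet committed is retried
# at the next checkpoint, so a failed commit never loses data.
class FilesCommitterState:
    def __init__(self):
        self.pending = {}  # checkpoint_id -> list of DataFile paths

    def add_files(self, checkpoint_id, data_files):
        self.pending.setdefault(checkpoint_id, []).extend(data_files)

    def on_checkpoint_complete(self, checkpoint_id, commit_to_iceberg):
        # Commit every pending checkpoint up to and including this one.
        for cp in sorted(k for k in self.pending if k <= checkpoint_id):
            if commit_to_iceberg(self.pending[cp]):  # one Iceberg transaction
                del self.pending[cp]
            else:
                break  # keep the files in state; retry at the next checkpoint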
Future Community Planning
The release of Apache Iceberg 0.10.0 started the integration of Apache Flink and Apache Iceberg. There will be more advanced functions and features for the future releases of Apache Iceberg 0.11.0 and 0.12.0.
Apache Iceberg 0.11.0 is designed to solve two main problems:
The first one is the merging of small files. Apache Iceberg 0.10.0 already supports Flink batch jobs to merge small files regularly, which is still in the initial stage. Version 0.11.0 will include a function to merge small files automatically. In other words, there will be a special operator to merge small files after the Flink checkpoint arrives and the Apache Iceberg transaction is submitted.
The second one is the development of the Flink streaming reader. Currently, we have completed some PoC works in the private warehouse. In the future, we will provide a Flink streaming reader in the Apache Iceberg community.
Version 0.12.0 will solve the problem of row-level deletion. As mentioned earlier, we have implemented the full-procedure data lake updating through the Flink UPSERT in the PR 1663. After the community has reached an agreement, the function will be promoted to the community version gradually. Users will be able to write and analyze CDC data in real-time through Flink and upsert Flink aggregation results into Apache Iceberg easily.
About the Author
Hu Zheng (Ziyi) is an Alibaba Technical Expert. He is currently responsible for the design and development of the Flink Data Lake solution, a long-term active contributor to Apache Iceberg and Apache Flink projects, and the author of HBase Principles and Practices.
Original Source: | https://medium.com/@alibaba-cloud/building-an-enterprise-level-real-time-data-lake-based-on-flink-and-iceberg-6ea2f26c8a00 | ['Alibaba Cloud'] | 2021-07-06 02:34:17.140000+00:00 | ['Big Data', 'Alibabacloud', 'Flink', 'Storage'] |
How An Interview Made Me Think About Gender Stereotypes
Photo by Gabrielle Henderson on Unsplash
Simply put, I got an interview with a company — a famous semi-conductor company here. Their stock price is soaring, and the future looks promising. My family was happy about this.
My only concern with the interview was that the position was ‘secretary.’ It was quite different from the career path I’ve experienced so far — I’ve worked as a translator, and currently work in sales. I searched online to see what others have to say about the position:
“Wow. You get an interview invitation from that company? You must look very pretty.”
“Great opportunity to get an engineer husband that earns a lot!”
“Choosing secretary is like choosing mistress for those managers…”
“No, you don’t have to talk about any professional skills. Just dress prettily.”
I wasn’t troubled by those comments at the time. After all, trolls were just being trolls, so I went to the interview.
After going through questions about my past experience, the interviewer looked at me and said,
To be honest, I’m not sure why you would like to apply for the job. You worked perfectly well on translating or selling. It’s an overkill in my opinion to put you in this position. The career path of a secretary doesn't have a promising future. Frankly speaking, most applicants just want to get an engineer husband here and live happily afterward.
I replied politely, saying that I thought being a good secretary is not easy, although it seems to be. I wouldn’t be overqualified at all, and I still had a lot to learn.
I could understand why he was saying this, although I was a bit surprised about how a secretary’s value is underrated here. Anyway, I’d expected that my past experience wasn’t a perfect fit, so I tried hard to include all administrative experience I had in my resume.
The interviewer seemed satisfied with the reply, but suddenly came with a sharper question:
Do you know how common it is for a secretary to be fancied, surrounded, and pursued by males? How would you handle the situation?
This is a neutral and fair question, although I felt uncomfortable about working in that kind of environment. He was just trying to let me know what the situation would be like if I worked there, I assumed. So I talked about my past experience of working with males and assured him that I could work professionally with males in the office.
Then he asked the question that has crossed my line:
“Do you have a boyfriend?”
His reaction after my answer also crossed my line. He talked as if there would be an issue, and he re-asked all the questions above — emphasizing that a secretary needs to get along with males in the office.
I don’t understand what getting along with males in the office has to do with me having a boyfriend. This shouldn’t have anything to do with my professional performance, I said.
I have used both Django and FastAPI a lot for real-world projects in production, and I really love both of them.
But I would temper your analysis when it comes to databases. Managing persisted models with SQLAlchemy is not as lean as with Django, especially with respect to migrations. You’ll need one more external tool for this, such as Alembic. And in doing so we are more or less replicating what drove me away from Flask for serious projects: piling up heterogeneous tools and libraries to get the job done.
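To make the migration point concrete, here is a rough sketch (the model and revision names are invented for illustration). With Django, the ORM and the migration tool ship together; with SQLAlchemy, the model alone isn't enough and you typically wire up Alembic separately:

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Post(Base):
    __tablename__ = "post"
    id = Column(Integer, primary_key=True)
    title = Column(String(200))

# Schema evolution still needs Alembic on top of this:
#   alembic init migrations
#   alembic revision --autogenerate -m "add post"
#   alembic upgrade head
#
# The Django equivalent ships with the framework:
#   python manage.py makemigrations && python manage.py migrate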
The main strength of Django is to provide a lot of things out of the box homogeneously and well-documented. You do not have to use them if you don’t need them, and you can even strip them from the default application structure, but they are there, ready to be added and serve.
Same for the admin: all the database-backed applications I’ve developed have needed a back office at some point (when it wasn’t requested from the start). Why write one (which you’ll have to do with FastAPI, hence adding Jinja2 or the like to the stack) when it’s there for free in Django, ready to be used and even customized for extreme edge cases?
After several experiments I finally came to this rule (which at least works for me): if it’s a pure API, I go with FastAPI, but if there is a database of any type in the mix, then Django is my buddy.
And don’t forget that API is not equal to REST. Both FastAPI and Django play very well with GraphQL too 😊 | https://medium.com/@eric.g.pascual/i-have-used-a-lot-both-django-and-fastapi-for-real-world-projects-in-production-and-i-really-love-e942ce64cec0 | ['Eric Pascual'] | 2020-12-05 09:22:56.982000+00:00 | ['Python Programming', 'Fastapi', 'Orm', 'Django'] |
Best Practices for Building a Remote Culture with Job van der Voort
Submitted by Finn Meeks
Like many other companies around the world, our community has been operating remotely for the last eight months. For some members remote work has been freeing. Yet others miss the depth and serendipity of in-person interactions. We’ve been debating remote work best practices within the community and recently invited Job van der Voort, Co-founder and CEO of Remote for a fireside chat and breakout discussion on the topic.
Job was formerly the VP of Product at Gitlab, one of the first companies to successfully operate a fully remote, distributed workforce at scale. After leaving Gitlab, he founded Remote, a global platform that makes it easy to onboard, pay, and manage a remote workforce. And as the name suggests, Remote operates as a fully remote company. He has used his learnings from Gitlab to influence his own company’s remote culture and shape Remote’s product direction. Job is one of the foremost experts on remote work and we are grateful that he shared these insights with us as we refine our own community’s remote culture.
November 12, 2020 Fireside Chat with Job van der Voort
Here are snippets summarizing our key learnings from the fireside chat. Thank you to everyone that attended live and participated in our breakout discussions following the fireside chat.
Remote work != the office
Being in an office is very easy because it’s what we’ve done for hundreds of years. We rely on our natural desire to say hi to people, to interact with people.
The moment you build a distributed company, you’re faced with a lot of questions: “When do we work? How do we work? Where do we work?.” All of these questions have non-obvious answers because there’s not a giant history of us doing it. There’s no one that has 50 years of experience of working remotely.
At Gitlab, we basically had to reinvent everything that we did every six months. We had to treat our organization as a product that we iterated on and that we tried to improve. You have to consistently and constantly look for new ways to get to know each other, to build a culture.
It helps to be explicit, rather than implicit in a remote organization. Instead of relying on a recurring All Hands to communicate with the organization, it more important to spent time documenting and writing something before announcing it.
Things inevitably break with scale
When you’re a very small company with just a few people, you can get on the Zoom with the whole team and talk about work and non-work related things. That is a very important part of how you get to know each other, but it stops working once you’re at about 25 people.
At 25 people, you can no longer have a 30 minute meeting in which everybody speaks. You have to start being much more structured about the way that you do things. At Gitlab, the most important thing we did was create a handbook that served as our single source of truth for anything related to processes and culture.
Nobody likes working alone
The worst way to build your remote company is to start with a team in one location, and then hire someone that is eight hours away. That does not work. The person either has to work at the same time as the rest of the team or they start to feel very isolated.
Organizations should quickly start to expand the time zones in which they are active. When we started Remote, we were almost all in Western European time, but we made a conscious decision to hire in America as we expanded. Then we had a group in the PST to EST time zones. If we decided to hire one person in India, we’d have to commit to hiring at least two or three other people, so that there’s never a moment in the day when someone feels all by themselves in the office.
It’s still useful to build out concentrations of people (in a city or a country) because they talk with each other and don’t feel isolated. They tend to hire their friends, which means you can quickly build a hub by tapping into your employees networks.
The nitty-gritty of hiring remotely
One of the most important things is getting the nitty-gritty details of hiring and onboarding right. New hires need to have their laptop, a stable internet connection and a great remote work setup in order to perform their job well (like this setup!). In one case, we rented an apartment with high-speed internet for a new hire in Kenya because they couldn’t work remotely without it.
Organization is important for onboarding. All of our new employees get a handbook with a checklist of things to do. One of these things is to have calls with X people around the organization, mostly people outside of your own team, so that you start to build a rapport with people across the organization.
Being remote-first allows you to hire the best people in the world, regardless of where they are based. Remote sets a floor for employee compensation (so everyone is on equal footing) and then adds on whatever it takes to get the employee. It doesn’t matter if the person is overpaid for their location, as long as they are the best person in the world.
Walk the floor
My co-founder and I have talks with individuals around the organization at random once in a while to test our culture. We call it, walk the floor, which comes from the Toyota way.
We ask them to tell us whatever they want — How are you doing? How’s the company doing? What should we be doing differently? What is not going well? — and we shut up for 30 minutes and listen. The 20 minute mark is when all the dirt comes out, which is a good signal for us.
As an example, I recently advised the whole company to be more conscious of their work life balance. But two people told me on these calls that “we know we’re working at a startup. If you tell us not to work too many hours, it makes us feel stupid because … there’s an insane amount of work to do.” Walking the floor was effective in this case.
Remember to have fun | https://medium.com/south-park-commons/best-practices-for-building-a-remote-culture-with-job-van-der-voort-469cc777a6cd | ['South Park Commons'] | 2020-12-17 21:03:49.299000+00:00 | ['Technology', 'Startup', 'Culture', 'Remote Working', 'Community'] |
Most Data Problems are not “Big Data” Problems
When the “big data” buzzword peaked in 2015, I remember NoSQL, Hadoop, MongoDB, and other unstructured data technologies being touted as the future of analytics. Many organizations started collecting data faster than they could organize and store it, so they simply dumped it on a cluster and scaled horizontally as needed. Many companies put enormous expense into migrating off relational databases like MySQL and onto “big data” platforms like Apache Hadoop.
Amidst this movement, I was teaching an O’Reilly online training on SQL. I had one participant suggest that relational databases and SQL might be a legacy technology. If the lack of horizontal scaling was not enough reason, relational databases have all this pesky overhead to structure data in a normalized fashion, as well as enforce data validation and primary/foreign keys. The internet and connectivity of devices caused an explosion of data, so scalability became the selling point of NoSQL and “big data”.
The irony is that SQL interfaces were added to these “big data” platforms, and this happened for a reason. Analysts found NoSQL languages difficult and wanted to analyze data in a relational fashion. A great majority of data problems are best modeled as relational database structures. An ORDER has a CUSTOMER and a PRODUCT associated with it. It just makes sense that these pieces of information should be normalized into separate tables rather than stored as a blob of JSON. Even better, there’s peace of mind knowing the database software will validate an ORDER and check that the CUSTOMER and PRODUCT actually exist, rather than let data corruption quietly creep in due to bugs on the front-end.
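As a toy illustration of that validation (the schema here is invented, and the table is named customer_order because ORDER is a reserved word in SQL), a relational engine will refuse a row that references a nonexistent customer:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE product  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE customer_order (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        product_id  INTEGER NOT NULL REFERENCES product(id)
    );
""")

conn.execute("INSERT INTO customer VALUES (1, 'Alice')")
conn.execute("INSERT INTO product VALUES (1, 'Widget')")
conn.execute("INSERT INTO customer_order VALUES (1, 1, 1)")  # valid references

# Referencing a customer that doesn't exist raises sqlite3.IntegrityError,
# so corrupt orders never quietly creep in:
# conn.execute("INSERT INTO customer_order VALUES (2, 99, 1)")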
The truth is most data problems are not “big data” problems. Anecdotally, 99.9% of problems I’ve encountered are best solved with a traditional relational database.
There are definitely valid cases to use NoSQL platforms, especially when an enormous amount of unstructured data has to be stored (think social media posts, news articles, and web scrapes). But with operational data, relational databases force you to think carefully about how your data model works and to get it right the first time. It is rare that operational data of this nature gets so large that a relational database cannot be scaled to handle it.
I will leave you with this table to help guide your SQL versus NoSQL decisions:
[Table: SQL versus NoSQL decision guide — the original image is not included here]
Tackling so many thoughts
Questioner: Many types of thoughts keep coming to the mind. The mind does not become shunya (zero, thought-free). Thoughts keep on coming.
Dadashri: In terms of thoughts, the fact is that the mind gives you information. ‘This is good, this is fearsome… this is like this…, this is like that.’ Thus, it is carrying out its function (duty). Otherwise, if it does not inform about a dangerous place, it will be considered at fault for that. In those situations, you accept what you feel is worth accepting and leave the rest. The mind is simply carrying out its function.
Now, ‘we’ may be coming here by taxi from Santa-Cruz, and if ‘we’ see an accident on the way, even ‘our’ mind will say, ‘If we continue, we may get into an accident.’ Then we would say, ‘Brother, I have made a note of what you say. You are correct. We should remain alert.’ Then, the next thought may come, ‘There is nothing around that may get you into an accident.’ Then, we say, ‘I have made a note of it.’ Then, it will talk about something else. It is not something that wants to hurt you. It is not the nature of the mind to sit on just one topic. Have you ever looked into the mind to see that the mind sits around for one topic only?
Questioner: It will go on moving around.
Dadashri: If you sit around on one topic, it will sit around. But if you say to it that what it is saying has been noted, it will then talk about another topic. But if you say, ‘Yes, what you are saying is correct. What will happen now?’ It will then go on for hours.
You can know what types of thoughts are coming. The mind may turn bad and say, ‘What will happen if your mother-in-law dies today?’ Then you say, ‘I understand.’ Then it will say, ‘What if you were to die?’ You say, ‘I understand that too, now talk about a third thing.’ It may even show you, ‘If you die tomorrow, what will happen to all these people?’ Then you say, ‘I have made a note of it.’
Hey! It will talk about getting married even at such old age; that is how the mind is, there is no telling when it will say what. But then, why should you get angry upon listening to it? It may even tell you about widowhood, ‘What will you do if you become a widow?’ Then you say, ‘Brother, I accept being widowed. Now, go ahead and talk about something else!’ The mind has a habit of nagging. You are not to take it into consideration. If a mad man is walking behind you, what harm can he do to you? Consider it to be like that.
Thoughts come on their own; you just have to see what kind of thoughts are coming, that’s all. There is nothing else. The mind does not have any insistence that it only wants to talk a certain way. If you become awkward, it will become awkward. You just have to say, ‘I have noted the (mind’s) contents.’ Yes, otherwise it will say, ‘You have no respect for me.’ If you respect it first, will the mind give you any trouble? No!
Questioner: But the one that informed you that there is a likelihood of an accident, was it the mind? Did the mind say it?
Dadashri: The mind said that and we then accept that, ‘Brother, what you are saying is correct.’ Then it will talk about something else. It will move ahead and talk about the next thing. It doesn’t feel that you don’t like this. It will go ahead and say what it sees. Therefore, all this is circumstantial evidence. Hence, you make note of it. If it scares you and you get scared, then it’s over! It is not trying to scare you; it simply cautions you, it says, ‘Beware.’ ‘We’ do not get scared so easily. It will scare an ignorant person. ‘What if there is an accident?’ — he will remain engrossed in that during this period.
Once the knowledge of, ‘I am pure Soul’ is born, the mind will be contained, otherwise the mind can never be contained. Now, when the mind is wandering around, you are just to ‘See’ and ‘Know’ it. Then there is no interfering in it at all, is it?!
Now, you feel that the mind is contained now right, since it has come under self-control?
If you fall sick, it may even caution you inside, ‘What if I die?’ then you say, ‘Yes brother, I shall remain quite careful in this matter, now talk about something else.’ Thereafter, the mind will talk about something else. But what is the nature of these ignorant people, i.e. people who do not have Self-Realization? They become engrossed in whatever thoughts come in the mind, and so it cannot show them anything further and they become ‘drowned’ in it. What can ignorance not do? He gets engrossed in it; even before the subject is mentioned.
Therefore you should understand the science of the mind. It functions just as all the senses do. It is performing its own function (duty). Even if you don’t want to, the ears will not refrain from listening, will they? You can listen, but if you do not want to take the ‘phone call’, then don’t. That is your responsibility.
I myself talk to my mind this way and I am asking you to do the same, say to it, ‘Brother, I have noted.’ That is what you tell the mind. Thereafter, it will end. It will talk about some other nice things. This is because there is only one thing in this body that is completely contradictory. If there is something that is completely contradictory, it is the mind. And that is why there is fun! It would be no fun if it were to go only on one track. A moment later, it will say something quite different. Hey! There was a sixty-five years old man; do you know what his mind kept telling him? ‘What if I get married?’ So he told me about it. I said, ‘You fool, what sort of a mind you have! It is contradictory!’ Only ‘we’ tell you the process that I have used. How I have become independent. I have become independent of even God. This is what I am showing you. Once it is into your process, you will have no problem. (I have) seen this path, experienced this path, and known this path. It is possible for you to adjust it on your own. If you can’t do it, tell me where you are not able to adjust, so then I can show you. However, no one has attained Liberation by killing the mind.
Once you attain the Pure Soul ( Shuddhatma), all this (inner workings of the mind) is vyavasthit (Scientific Circumstantial Evidences) and no one is able to interfere with it. The intentions present in the mind cannot refrain from expressing. Just as the ears cannot stop from hearing, the mind cannot refrain from saying (expressing). When it expresses itself, you should heed it if it is useful for you. If it is of no use, you simply have to say to it, ‘What you are saying is right, I will be careful from now on.’ Then it will talk about the next thing. It is informing you of whatever circumstances or phases it sees. ‘What if this happens…what if that happens?’ What objection do you have to that? You know that all that is vyavasthit. It (the mind) will then talk about the next issue. It is not that it wants to keep talking about the same thing. However, when there was only ignorance prevailing (prior to Self-Realization), you were getting engrossed with the mind and consequently you were suffering.
It is not necessary to push the mind away, nor is it necessary to kill it. To kill something or someone and go to liberation, that can never happen. You tell the mind, ‘You live as you would.’ I am in my location; in my space, you are in your space. | https://medium.com/@dadabhagwan/tackling-so-many-thoughts-7fa99af8dd33 | ['Dada Bhagwan'] | 2020-12-15 05:33:30.377000+00:00 | ['Thoughts And Feelings', 'Mindfulness', 'Thoughts', 'Self-awareness', 'Mind'] |
MFChain Announces Key Leadership Additions
Coming off a highly successful Blockchain Economic Forum event in San Francisco, MFChain is in the final stages of negotiations on a number of key partnerships. Each will add tremendous value to the MFX payment processing solution as well as the MF Mainnet ecosystem that will launch in 2019. With these changes, MFChain realizes that in order to stay on track with our roadmap, additional attention to development and additional leadership resources are necessary.
In a strategic move, our parent company Mammoth Project LTD will be taking a forward-facing role within MFChain and leading all business and operational tasks within MFChain. Effective immediately, Craig Neil (Founder, Mammoth Project LTD) and Jayson Rellis (Managing Director, Mammoth Project LTD) will join MFChain in a full-time capacity. These key additions allow Viacheslav Shybaiev and the development team to focus on fully integrating all new partners’ APIs within the MFChain payment solution and on completing the full development of the MF2x protocol (a full detailed announcement on MF2x is to come).
In combination with the addition of Craig and Jayson to the MFChain core team, we are pleased to also announce that Naviin Kapoor has accepted a new role. Effective immediately Naviin is joining MFChain as Head of Blockchain Integration Asia and Middle East. We are excited for Naviin’s role and look forward to his leadership in driving adoption and partnerships in the Asian and Middle East Markets. | https://medium.com/modern-finance-chain/mfchain-announces-key-leadership-additions-9a1f8a8b2c07 | ['Modern Finance Chain'] | 2018-07-02 22:16:11.201000+00:00 | ['Cryptocurrency', 'Leadership', 'Blockchain', 'Ethereum', 'ICO'] |
Key Takeaways from President Williams’s Speech on the Economic Outlook and Monetary Policy
On Wednesday, September 8, New York Fed President John Williams spoke about the economic outlook, inflation, and monetary policy at an event hosted by St. Lawrence University.
He said:
“Even with the strong pace of growth we are seeing, a full recovery from the pandemic will take quite some time to complete.”
“It’s clear that this spike in inflation largely reflects the transitory effects of the rapid reopening of the economy, which is pushing supply and demand in extreme ways.”
“Assuming the economy continues to improve as I anticipate, it could be appropriate to start reducing the pace of asset purchases this year.”
In his remarks, President Williams discussed the outlook for employment. He noted that “job gains have been strong in recent months,” and that he expects the employment picture to continue to improve. Still, he underscored that the unemployment rate is still far above the levels reached early last year. “I cannot stress enough that we still have a long way to go to get back to our maximum employment goal,” he said.
Turning to inflation, President Williams said recent high readings are likely temporary. He noted that they result from the unique circumstances of the pandemic, citing factors such as rebounding prices for airfare and lodging, and a surge in prices for used cars. Further, he pointed out that inflation expectations and measures of underlying inflation are close to the Fed’s 2 percent goal, so he expects the rate will “come back to its underlying trend of around 2 percent next year.” He added that because there is a great deal of uncertainty about the inflation outlook, he “will be watching the data closely in the coming months.”
President Williams then discussed the Fed’s policy response, including the asset purchase program put in place to support the economy, which the FOMC said would continue until it sees substantial further progress toward its employment and price stability goals. “I think it’s clear that we have made substantial further progress on achieving our inflation goal,” he said. “There has also been very good progress toward maximum employment, but I will want to see more improvement before I am ready to declare the test of substantial further progress being met.”
He added: “It is important to remember that even after the asset purchases end, the stance of monetary policy will continue to support a strong and full economic recovery and sustained attainment of 2 percent average inflation.”
President Williams concluded by pointing out that “the pandemic is far from over, both in terms of its effects on health and its effects on the economy,” and that its unusual nature means “this recovery is far different than anything we have seen before.”
Read the full speech.
The views expressed in this post are those of the contributing authors and do not necessarily reflect the position of the New York Fed or the Federal Reserve System. New York Fed content is subject to our Terms of Use. | https://medium.com/new-york-fed/key-takeaways-from-president-williamss-speech-on-the-economic-outlook-and-monetary-policy-b861c2c3ed2d | ['New York Fed'] | 2021-09-08 17:19:21.949000+00:00 | ['Economics', 'Inflation', 'Federal Reserve', 'Employment', 'Monetary Policy'] |
City Recommender System with Python [Part 2/3]: EDA — Finding my Schitt’s Creek
Ideally, this method would show an obvious ‘elbow’ — that is, a sudden, drastic change in the decrease in error such that the marginal benefit of adding a cluster is reduced dramatically. The right plot of Figure 1 shows a clear elbow at 4. However, there is no clear elbow for the weather data (left of Figure 1). I decided to apply 5 clusters. Now, let’s cluster and plot the data!
To do so, I created this function. It requires the normalized array (normDF — output of the elbow_kmeans_city function), the number of clusters (k), the original dataframe (regDF — used in the elbow_kmeans_city function), and a dataframe with the latitude (lat), longitude (lng) and city names (CityName, STATE — ie. New York, NY), which you can download here.
import numpy as np
import folium
import matplotlib.cm as cm
import matplotlib.colors as colors
from sklearn.cluster import KMeans

def Map_KMeans(normDF, k, regDF, citiesLatLong):
    # fit k-means on the normalized features and label each city
    w_mod = KMeans(n_clusters=k).fit(normDF)
    temp_df = regDF
    try:
        temp_df = temp_df.drop('Clusters', 1)  # drop stale labels from a previous run
    except:
        pass
    temp_df.insert(0, 'Clusters', w_mod.labels_)
    temp_df = temp_df.merge(citiesLatLong, how='inner', on='City')
    map_clusters = folium.Map(location=[37.0902, -95.7129], zoom_start=4)

    # set color scheme for the clusters
    x = np.arange(k)
    ys = [i + x + (i*x)**2 for i in range(k)]
    colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
    rainbow = [colors.rgb2hex(i) for i in colors_array]

    # add markers to the map
    for lat, lon, poi, cluster in zip(temp_df.lat, temp_df.lng, temp_df.City, temp_df['Clusters']):
        label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
        folium.CircleMarker(
            [lat, lon],
            radius=4,
            popup=label,
            color=rainbow[cluster-1],
            fill=True,
            fill_color=rainbow[cluster-1],
            fill_opacity=0.7).add_to(map_clusters)

    return map_clusters, temp_df
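A minimal usage sketch, assuming normDF, regDF, and citiesLatLong were prepared as described above (e.g. via the elbow_kmeans_city function from Part 1):

weather_map, weather_clusters = Map_KMeans(normDF, 5, regDF, citiesLatLong)
weather_map  # in a Jupyter notebook, this renders the interactive folium map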
Figure 2: Weather clusters in US
WOW! Seems pretty spot on.
Orange — cold winter, humid continental cities (includes Alaska, not shown)
Red — warmer, humid subtropical cities
Green — inland, large swing, semiarid cities
Blue — warmer, arid cities
Purple — mild summers and winters with abundant annual precipitation (marine west coast climate)
With these functions defined, lets visualize the results from clustering socioeconomic data!
About Socioeconomic Data
Due to difficulties scraping indices in Numbeo, we lost most of the smaller towns listed for venue and weather data. Nonetheless, it includes over 600 places in the US including most — if not all — large metropolitan areas and cities with populations above 100,000.
Figure 3: General Socioeconomic data clusters in US
Two things stand out from the above map. The purple dots represent MSAs and the orange represent high quality of life cities with high poverty rates.
Figure 4: Zoom to Rochester and Syracuse, NY
We will take a look into some examples, including Syracuse, Rochester and Harrisburg (Orange), and New York and Pittsburgh (Purple). | https://medium.com/analytics-vidhya/city-recommender-system-with-python-part-2-finding-my-schitts-creek-da2f2cce2b5f | ['Elias Melul'] | 2020-05-08 15:47:47.590000+00:00 | ['Schitts Creek', 'Cities', 'Visualization', 'Recommendation System', 'Data Science'] |
Up your npm game with these 4 practices | Fourth: Use npm package variables
During our development time, we (my team) usually never write API function calls; instead, we rely on code generators like the OpenAPI code generator with Swagger. If you add that script, it’s not gonna look clean at all because it requires you to specify the URL to the swagger.json file. I know the OpenAPI generator has an option of passing a config file, but I’m using this as an example of any other script that requires passing long variables like URLs. In that case, the script will look like this:
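For illustration, a script along these lines (the URL and output path here are placeholders, not the original values):

{
  "scripts": {
    "generate:api": "openapi-generator-cli generate -i https://petstore.example.com/api/v3/swagger.json -g typescript-axios -o ./src/api"
  }
}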
This command can be simplified by using npm package variables. Basically, you can access any piece of information inside the package by prefixing it with $npm_package_ , for example accessing the name and version of the repository would be like this: $npm_package_name and $npm_package_version . You also need to remember that underscores replace dots. Imagine if you’ve imported the package.json file as an object in a JavaScript file and you’re trying to access the repository’s URL, you’ll have to do this:
const pkg = require('./package.json');
console.log(pkg.repository.url);
Now, replace pkg with $npm_package and dots with underscores to look like this $npm_package_repository_url . You might be thinking “What about arrays?”. Good question! Just pass the index number like this: $npm_package_keywords_0 .
Now if we use npm package variables to organize the aforementioned script, it should look like this:
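Reconstructing it under the same placeholder assumptions, one option is to move the URL into package.json’s config block, which npm exposes as an npm_package_config_* variable:

{
  "config": {
    "swagger_url": "https://petstore.example.com/api/v3/swagger.json"
  },
  "scripts": {
    "generate:api": "openapi-generator-cli generate -i $npm_package_config_swagger_url -g typescript-axios -o ./src/api"
  }
}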
Not only is it easier to read with npm package variables, it’s also easier to maintain. You can easily swap URLs and other variables if they’re encapsulated in one place.
Caveat
For some reason, npm package variables will not work with a dollar sign ($) on Windows. In order to access npm variables there, you have to wrap them with percentage signs (%), like this: %npm_package_version%. The problem with this is that if you’re depending on a build pipeline that uses a Linux-based agent, the percentage sign won’t work. Sadly, there is no unified way of accessing npm variables on Windows and Unix-like systems (e.g., macOS). I guess the only solution for you if you’re using Windows for development is to have two scripts, like this:
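Something along these lines (script names are just examples), one variant per platform:

{
  "scripts": {
    "echo:version:nix": "echo $npm_package_version",
    "echo:version:win": "echo %npm_package_version%"
  }
}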
flask相關建設主題 | flask相關建設主題
使用Security Helpers的兩個方法
generate_oasswird_hash
check_password_hash
Python commands:
%python
from werkzeug.security import generate_password_hash, check_password_hash
generate_password_hash('admin') ' hash the password
The hash comes out different every time
b = generate_password_hash('admin')
check_password_hash(b, 'admin') ' verify the password, returns True/False
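For reference, a minimal sketch of how these helpers are typically wrapped in the User model used below (the real twittor/models.py may differ in column sizes and extra fields):

from werkzeug.security import generate_password_hash, check_password_hash
from twittor import db

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), unique=True)
    email = db.Column(db.String(120), unique=True)
    password_hash = db.Column(db.String(128))

    def set_password(self, password):
        # store only the salted hash, never the plain-text password
        self.password_hash = generate_password_hash(password)

    def check_password(self, password):
        return check_password_hash(self.password_hash, password)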
Checking it from the Python shell (a full interactive python session):
python
>>> from twittor import db, create_app
>>> from twittor.models import User
>>> app = create_app() ' instantiate the app
>>> app.app_context().push() ' push the app context
>>> u = User(username='admin', email='[email protected]') ' a User instance
>>> u.set_password('admin') ' set the hashed password
>>> u
id=None, username=admin, [email protected], password_hash=pbkdf2:sha256:150000$A1X5TSLh$bbcae5a7b0321afc7df1a524cb909a57dd664350f0685ac21a6199a9520f63f1
>>> db.session.add(u)
>>> db.session.commit() ' commit to the database
>>> User.query.all() ' the data is there
[id=1, username=admin, [email protected], password_hash=pbkdf2:sha256:150000$A1X5TSLh$bbcae5a7b0321afc7df1a524cb909a57dd664350f0685ac21a6199a9520f63f1]
>>> User.query.filter_by(username='admin') ' returns a query object
<flask_sqlalchemy.BaseQuery object at 0x10c45a2b0>
>>> User.query.filter_by(username='admin').first() ' the first() method
id=1, username=admin, [email protected], password_hash=pbkdf2:sha256:150000$A1X5TSLh$bbcae5a7b0321afc7df1a524cb909a57dd664350f0685ac21a6199a9520f63f1
>>> User.query.filter_by(username='adminn').first() ' there is no 'adminn' user, so this returns None
Removing the CSRF deprecation warning
FlaskWTFDeprecationWarning: “csrf_enabled” is deprecated and will be removed in 1.0. Pass meta={‘csrf’: False} instead.
Remove csrf_enabled=False
Add a class Meta instead
forms.py / class Meta
Flask-Login handles user session management
Users who have already logged in should be remembered
Some pages should only be accessible to logged-in users
pip install flask_login
LoginManager
from flask_login import LoginManager ' import the LoginManager class from flask_login
login_manager = LoginManager() ' make login_manager an instance of LoginManager
login_manager.init_app(app) ' associate it with the app (initialization)
UserMixin provides several basic methods for user sessions:
is_active
is_authenticated
is_anonymous
get_id
Modify models / User: make User inherit from UserMixin as a parent class, so User gains the methods above
Put the user_loader callback in models
This callback is used to reload the user object from the user ID stored in the session (it lets the app's login machinery find the user by that id)
Decorate def load_user with login_manager's user_loader
@login_manager.user_loader
def load_user(id):
    # the session stores the id as a string, so convert it to int
    return User.query.get(int(id))
login_user
Now you can use login_user in route.py: calling flask_login's login_user(user) records the currently authenticated user
Set the SECRET_KEY in config.py
current_user
current_user is the current user, provided by flask_login's current_user
is_authenticated: whether the user has been authenticated
If the user is already authenticated, there is no need to log in again
redirect(url_for(‘index’))
logout (logging out)
route.py/from flask_login import logout_user
route.py
def logout():
logout_user()
templates folder/base.html
base.html
{% if current_user.is_anonymous %}
UserMixin provides the is_anonymous method: whether the user is anonymous; a user who has not logged in is anonymous
Forcing users to log in: pages are hidden unless the user is logged in, via @login_required
from flask_login import login_required
If the user is not logged in, they are sent to this screen
Modify __init__.py as follows to redirect users to the login page to enter their credentials
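A sketch of that __init__.py change; with Flask-Login, setting login_view is what sends anonymous visitors to the login page (the endpoint name 'login' is assumed here):

login_manager = LoginManager()
login_manager.init_app(app)
login_manager.login_view = 'login'  # endpoint anonymous users get redirected to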
After entering correct credentials on the login page, the user returns to the page they originally requested, via next
Notice login?next=OOOXXX: after logging in, you are sent back to the page you originally wanted (that is, OOOXXX)
Extract the parameter carried by next:
from flask import request
next_page = request.args.get('next')
Redirect:
if next_page:
return redirect(next_page) | https://medium.com/@ka666wang/flask%E7%9B%B8%E9%97%9C%E5%BB%BA%E8%A8%AD%E4%B8%BB%E9%A1%8C-ca86107ea201 | ['Steven Wang'] | 2020-11-09 15:56:10.553000+00:00 | ['Flask'] |
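Putting those pieces together, a minimal sketch of the whole login view (the real tutorial validates a WTForms login form; this version reads straight from request.form just to keep the flow visible):

from flask import redirect, request, url_for
from flask_login import current_user, login_user
from twittor.models import User

@app.route('/login', methods=['GET', 'POST'])
def login():
    if current_user.is_authenticated:
        return redirect(url_for('index'))
    user = User.query.filter_by(username=request.form.get('username')).first()
    if user is None or not user.check_password(request.form.get('password')):
        return redirect(url_for('login'))
    login_user(user)
    next_page = request.args.get('next')
    if next_page:
        return redirect(next_page)  # back to the page originally requested
    return redirect(url_for('index'))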
Meticore Reviews — Fake Weight Loss Results or Real Customer Success Stories? | ·18 min read
Meticore pills reviews. Is Meticore weight loss supplement a real deal or customers have side effects complaints? More in this Meticore.com report by SupplementReviews.
— Meticore Reviews 2021 Update: Read this comprehensive Meticore pills review before you make your purchase. Does it really work?
MUST SEE: Critical New Meticore Supplement Report is Out — This May Change Your Mind!
Meticore diet pills have been in the weight loss supplement market for some time now. According to the official Meticore website, this supplement can be a potential solution for any person who wishes to kickstart a fitness journey by increasing metabolic rate and burning more fat.
Meticore aims to follow a unique approach to lose weight which is unusual for a natural fat burner i.e. by triggering thermogenesis. The supplement seems to be making rounds on the internet due to its potentially effective and natural formula.
(SPECIAL OFFER 2021) Click Here To Order Meticore at an Exclusively Low Price Today!
There are several studies that reveal the connection between low metabolism, core body temperature, and obesity. These studies suggest that a slow metabolic rate increases the likelihood of gaining weight. Although the typical idea of losing weight is cutting back the calories and spending time in vigorous exercise, many people fail to achieve their target, despite following this ‘ideal’ strategy. The truth is that weight loss is not just about eating less and burning more; many hidden factors affect this process, many of which go completely unnoticed. That’s why people look around for a little boost from products like the Meticore supplement to fix these issues.
According to meticore.com, Meticore pills utilize the power of premium-quality natural ingredients to target the causes of a slow metabolism. It is a multi-action formula that works on various things at a time. Based on Meticore independent reviews, it appears that the majority of its users are 30–40 years old, dealing with midlife stressors and with no time or energy to take care of their health.
Here in this Meticore review, you will know everything about this supplement that may affect your decision to try it. Continue reading to know how it triggers weight loss, its ingredients, and where to buy Meticore online.
Meticore Review
Weight loss supplements have been around for years, and before they became a thing, herbal extracts and medicines were popular for providing the same benefits. Dietary supplements like Meticore pills usually include a fine blend of these same herbs that have been around for centuries to provide therapeutic benefits to humans. However, sourcing these herbs, determining their dosage, and using them daily is very hard. So many people prefer using a pill that contains the power of these same herbs.
Before jumping to how Meticore capsules can induce a natural weight loss, understanding the concept of obesity and the state of being overweight is necessary. Being overweight means that the body has more fat stored inside it than it should have. This condition can be checked by weighing the body and taking the measurements with a measuring tape. While most people determine the ‘health level’ by looks, medical experts believe that using a standard like body mass index (BMI) is better, as it shows a more accurate picture.
There are so many variables that determine the weight, such as genetic predisposition, diet, stress levels, and lifestyle. But one thing that contributes most to this whole weight gain process is ‘metabolism.’ When the body experiences a slow metabolism, it means its power to break down food and use it to make energy is compromised. Eventually, when the body heats up, the metabolism takes a quick boost, initiating faster weight loss. Meticore diet pills seem to follow this same approach to prevent the body from obesity and fat accumulation without going to a gym or following a restrictive diet.
Meticore comes in easy-to-use capsule form, and these capsules are packed inside a plastic bottle, sealed by the company. Every user is expected to take the recommended dosage every day, for a few weeks or months, depending upon his initial weight. Individual results may vary. The exact benefits of these pills may vary for different users.
(HUGE SAVINGS TODAY) Click Here to Order Meticore at the Lowest Discounted Price Online!
How Does Meticore Really Work To Control The Core Body Temperature?
As mentioned in several Meticore reviews by customers, Meticore diet pills target core body temperature to initiate weight loss. The core body temperature represents the true health of a person, and any changes to it can affect health positively or negatively. With the help of thermoregulation, the Meticore supplement works on changing the core body temperature, raising it to a level that the body stays safe while losing weight naturally. But that’s not the only expected benefit of this supplement.
Meticore has been regarded as a multivitamin pill with numerous ingredients inside, each of which provides additional benefits, as mentioned in Meticore reviews BBB. Some of its ingredients work on appetite control, saving the body from eating unhealthy junk food and craving for it from time to time. Plus, when the body consumes fewer calories, the calories available for the body to burn automatically lower down; hence, the chance of fat accumulation reduces. Some of its ingredients can target inflammation, toxins, and free radicals, all of which can slow down metabolism and affect food-to-energy conversion. With the help of a natural formula, Meticore can independently play all of these roles without needing help from a special weight loss diet or a fitness routine.
But it also doesn’t mean that you should stop all efforts and rely on this supplement alone. If you are already following a low-calorie diet and any light to moderate activity level, it is better to continue them. Taking Meticore pills for weight loss with these basic changes can improve its progress; you can expect to see better results in less time. It can also remove all the possible factors that may change metabolic rate or initiate fat accumulation.
Some Meticore user reviews explain how they used this supplement to maintain their weight loss progress, implying that it can be used even after reaching your target weight. This way, the results can be maintained for years.
More Information On Meticore Diet Pills Available On The Official Website. Click Here Now!
Who is an Ideal Candidate for Meticore Weight Loss Pills?
Any person who is over 18 years of age, and has a higher body mass index ratio can take benefit from Meticore pills.
According to Meticore pills reviews, these capsules can be exceptionally helpful when you have all the signs of slow metabolism, with or without an obvious weight gain. It may be hard to identify slow metabolism, without knowing its outer look. Here are the most common signs of a slow metabolism.
Craving for sugary foods and drinks
Lethargic feeling, low energy, and stamina
Digestive issues, flatulence, and bloated
Impossible to lose weight
Cognitive issues, and memory problems
Fluctuating blood sugar levels and high cholesterol
Hormonal imbalance
If you are experiencing a few or all of these symptoms, you have a slow metabolism. This slow metabolism is the culprit behind your failed weight-loss attempts. But many times, slow metabolism is the result of an underlying medical condition, and treating this disease automatically resolves the metabolic issues. Remember, Meticore is only a weight-loss support supplement; it is not a treatment pill, and expecting it to treat an underlying condition is not fair.
Despite all the benefits and safety levels, Meticore may not be the best choice if you belong to the following categories.
Underage children
Pregnant and breastfeeding mothers
Alcoholics and substance abusers
People taking medicines every day
Can Meticore really help you with weight loss? Don’t forget to check Meticore before and after pictures and find out! More details can be found here!
Meticore Ingredients — Eight Ingredient Fat Melting Formula
Meticore is a fine blend of eight natural ingredients. It is a U.S. product, labeled with GMP-certified, non-GMO and safe for daily use. According to Meticore.com, there are no stimulants inside, and it doesn’t cause any ‘high’ feeling or ‘sedation’ in any user.
These Meticore capsules work to increase the body temperature by producing heat that melts the adipose layers. When combined with a healthy diet and an active lifestyle, the results can show up even faster, making the user lean and simmer within weeks. Let’s focus on the Meticore ingredients that make it such an effective product.
Moringa oleifera — also called 'the tree of life,' moringa has dozens of benefits to offer to its users. It has been used for centuries to treat various health conditions in traditional African and Ayurvedic medicine. Modern research has confirmed its potential, which is why it has been added to the Meticore formula. It has nearly seven times more vitamin C and other antioxidants than oranges. These antioxidants work on high sugar, cholesterol, and blood pressure to regulate them. There are plenty of studies that show successful weight loss in people using moringa, by preventing fat accumulation and burning stubborn fat layers.
African Mango Extract — This Meticore ingredient provides various vital nutrients to the body, on top of which is dietary fiber, followed by fatty acids, vitamins, minerals, and amino acids. This dietary fiber works on satiation, controlling hunger, and saving the user from emotional eating. Other benefits of African mango include blood sugar regulation, cholesterol balance, immunity boost, and digestive support.
Fucoxanthin — this is a naturally occurring compound in wild seaweed, used as a dietary source in various parts of the world. Not many people know that brown algae or brown seaweed is a rich source of antioxidants, minerals, and vitamins that can fulfill a nutritional deficiency in the user. It is exceptionally helpful to remove toxins, free radicals, and waste materials from the body, ensuring that the body is protected from free radical damage and oxidative stress. The latest study suggests the anti-inflammatory role of fucoxanthin, which further adds to its metabolic benefits.
Curcumin — commonly called turmeric, curcumin is mainly added as a flavor enhancer in recipes, especially curries, but its medicinal benefits are unmatched. It is a natural anti-inflammatory agent that may save from chronic inflammation in the gut, toxin damage, and hormonal imbalance, all of which are directly linked with a slow metabolism. Adding turmeric in food recipes can help avail its benefits, but using a supplement guarantees its daily value added to the body, which is why the Meticore weight loss supplement can be a great choice.
Ginger — another flavor enhancer that has hidden medicinal benefits for the body, especially maintaining cholesterol levels. It targets chrysin and galanin and directly governs weight loss progress by preventing fat accumulation and burning thick fat layers around the belly, thighs, and hips. It improves digestion and protects the body against common issues, i.e., gas, bloating, indigestion, heartburn, nausea, and constipation during the weight loss journey.
Citrus Bioflavonoids — as the name explains, these flavonoids have been taken from citrus fruits, such as oranges, grapefruits, etc. They work on allergic responses and immunity of the body, to help improve the weight loss effects of the Meticore supplement. They also improve blood circulation so that all body cells receive the nutrients they need to run the cellular activities. There is scientifically proven data suggesting that bioflavonoids may reverse obesity and save from various metabolic disorders.
Bitter Orange — it is a fruit that naturally grows in Southeast Asian countries and offers various health benefits, especially in digestion. The main component in these oranges is called p-synephrine, which works on sugar levels and aids in digestion. Moreover, it triggers fat burning because of its natural thermogenic properties for the body, which is why it is made a part of the Meticore ingredients list. Other benefits include cardiovascular support, prevention of metabolic disorders, nutrient absorption, and immunogenic benefits.
Quercetin — this is a compound that is found in onions, berries, and nuts and is linked with weight loss in some way. It improves the effects of all other Meticore ingredients as well. It acts as an antioxidant and anti-inflammatory agent inside the body, relieving chronic inflammation, controlling blood sugar, and saving from various heart conditions.
Chromium — this is a mineral essentially required by the body, especially during weight loss. It has cholesterol- and sugar-lowering effects along with appetite control and protection from hunger pangs. A daily intake of chromium is necessary to break down food, especially fat and sugar, from the diet. Due to different dietary preferences, chromium deficiency is common in people, increasing the chances of hormonal imbalance, diabetes type 2, and high cholesterol levels. Consequently, a daily intake of chromium, as a part of the Meticore diet pills, can save from all these issues, improving the overall weight loss progress.
Vitamin B12 — scientifically called cobalamin, this vitamin is naturally found in meats, fruits, and dairy. People who eat highly processed or junk food on a daily basis sometimes suffer from vitamin B deficiency that shows up as high stress levels, anxiety, and poor hormonal health. These issues can make weight loss hard and often compel the body towards overeating or emotional eating and gaining unhealthy weight again. The addition of vitamin B12 in the Meticore formula can cover the psychological side of weight loss and make it easy for the body to stay on the weight loss track. Moreover, it can energize the body, save from cardiovascular diseases, and improve blood circulation in the body.
This Meticore ingredients list shows natural names, suggesting that the company is right about being an all-natural formula. There are no stimulants, additives, flavor enhancers or other suspicious ingredients added to it. Hence, it appears safe for all users with no likely Meticore side effects.
To Know More About The Meticore Ingredients, Doses, And Working, Click Here To Visit The Official Website
Is Meticore Legit? How to Know You Won’t be Scammed?
With thousands of people falling for online scams almost every day, trusting a new product may seem difficult. But one thing that all these people ignore while shopping for their supplements is to look for the background and analyze information before making their decision to buy it.
Health experts warn all people who buy health-related products without a background check as many of them don’t even read the label or take it wrongly and suffer from side effects later.
Though individual results may vary, the best features of this supplement are as follows.
Meticore follows a systematic approach while making weight loss easy for a person. It can fix all the causes of a slow metabolism while inducing thermogenesis. Together, these two can melt even the most stubborn fat layers, leaving behind a slim and toned body.
In addition to weight loss, Meticore reviews online show that it helps to maintain the results after reaching the target weight once. This weight management is rarely seen by any dietary formula as most of them only focus on losing weight.
Meticore diet pills are 100% natural with no questionable ingredients inside. It is also free from synthetic fillers, hormones, allergens, and toxins.
The company follows a high-standard manufacturing process using an FDA-approved facility. Every batch is tested and verified before dispatching to the warehouse. The bottles are sealed to protect the inner contents from moisture and contamination.
It has a high success rate as most Meticore users can see a difference in their weight within four to eight weeks of using it. Others who are highly obese may take up to six months for a complete transformation.
The product has no side effects or risks attached. The ingredients are sourced from premium quality sources, from verified merchandisers, and their dosages are designed as per the safe values of an adult user. Moreover, many Meticore reviews UK and Meticore consumer reports have shared how users reached their target weight loss without experiencing any negative effects.
All this suggests that Meticore is not likely a scam but a legit product for weight loss.
Also check out what Meticore customer reviews are saying about the benefits of this weight loss product. Does it help everyone? Find Out More Here!
Can you Expect Meticore Side Effects?
According to many Meticore reviews on Reddit, there is nothing on the internet that raises questions about this supplement. But that doesn't mean you can experiment with this supplement by ignoring the dietary instructions.
The company emphasizes a fair usage of Meticore, which means it shouldn’t be used by people under 18, those with cardiovascular and digestive issues as well as pregnant females. Never mix Meticore pills into any food or beverage recipe and go with the directions mentioned on its official website.
When used as per instructions, Meticore targets the issues that make it hard to lose weight. If this is your first time with any dietary supplement and you are confused about it, talk to the doctor and discuss all your fears. Maintain a healthy lifestyle with this supplement to experience the results faster and for a longer period.
(BEST ONLINE DEAL DEAL) Click Here To Order Meticore Bundle Package at Reduced Prices for all New Customers!
Directions to Use Meticore Capsules
Meticore comes in the form of capsules that are orally consumed with water. The daily dose is only one capsule taken at any time of the day. It can be taken with a big meal or between two meals, as per individual preferences. Like most supplements, it is better to take the Meticore pill in the morning so that it has all day to work and show results fast.
Never consume Meticore more than the daily recommendation as misuse of any supplement can cause side effects. Taking more than one pill will exceed the safe dietary values per day, which may cause digestive issues in the user. For best, stick to the guidelines provided by the company.
Where to Buy Meticore? Discount Price Offers and Refund Policy
For now, Meticore is only available online and can be bought from the official website i.e. meticore.com and mymeticore.com. The company has not signed any deal with the local retailers including Amazon and Walmart or individual resellers to avoid any Meticore scam.
Right now, Meticore is available in a single bottle pack and bundle packs. Each bottle lasts for one month, and noticeable results may take between three to six months to show up. As it is an all-natural formula, it can be used for a long time without worrying about the side effects because there are none.
Here are the pricing details of Meticore pills.
One bottle price (30 doses)- $59 only
Three bottles pack (90 doses)- $49 per bottle
Six bottles pack (180 doses)- $39 per bottle
All orders are protected with a 60-day money-back offer, as mentioned on Meticore.com.
The terms and conditions section on the official website mentions that every user is entitled to request a refund within 60 days of purchase. However, this policy is valid for only those orders that are placed through the company directly. If a person has bought Meticore from a local seller or any other online source, the company has a right to reject the refund request. Remember that the refund policy doesn’t cover the shipping charge, and the user has to cover them.
To get a refund, the customer is expected to send back his ordered bottles of Meticore to the company at the following address, with a note mentioning his order number and other necessary details. 1301 Ridgeview Drive, McHenry, IL 60050.
Find out more about the money-back offer by contacting the customer support team at [email protected] and (888) 966–1522 (TOLL-FREE LINE)
Meticore Reviews — Conclusion
Overall, the Meticore supplement appears to be a product that you can trust. Its herbal composition, faster results, affordability, and risk-free formula add to its value. The company has a full money-back policy for all unsatisfied customers, which further sweetens the deal. The company is ready to refund you if this product fails to impress you in any way. Besides, it doesn’t require you to follow any particular diet or pay for expensive gym memberships, so there are no risks involved. For more details on orders and refunds, visit the official Meticore website.
(LIMITED SUPPLIES) Click Here To Order Meticore From The Official Website Before The Stock Runs Out
Product Contact:
Meticore
[email protected]
About SupplementReviews:
This press release has been created by SupplementReviews, a USA based company that provides consumers with product reviews and reports helping them make informed decisions. Individual results may vary and this product review has been published for information purposes only. Any purchase done from this link is subject to the final terms and conditions of the website that is selling the product.
FDA and Supplements: The FDA will never approve a dietary supplement. According to the Food and Drug Administration, dietary supplements are a category of their own, and they are not subject to FDA regulation or approval. If a company is claiming that the FDA approves their diet supplement, run. This is a clear misrepresentation. These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease.
Feature Toggling With LaunchDarkly | Feature Toggling With LaunchDarkly
With any large scale software project, there are multiple avenues to consider when it comes to deployment: whether to ship your new software all at once, roll out the project in phases, or release a canary into the coal mines.
With all-at-once delivery, your new product will be presented to users in a single, “big bang” release. This would be the quickest approach, but yields large room for error which may come back to bite the development team further down the line. On the other hand, a phased migration takes longer to complete, but offers extra protection with smaller, incremental releases and feedback from customers earlier in the development process.
Following a phased rollout strategy means development teams need the ability to easily enable and disable new features. This is where LaunchDarkly proves useful, allowing us to create feature flags to seamlessly show and hide functionality. LaunchDarkly provides a UI for toggle management along with a wide range of SDKs for easy integration into our application code. Let’s take advantage of LaunchDarkly’s 14-day free trial to explore how this works.
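To make the integration side concrete, here's a rough sketch of a server-side flag check with the Node SDK (the SDK key, flag key, and user key below are placeholders, not values from this walkthrough):

import * as LaunchDarkly from 'launchdarkly-node-server-sdk';

// The SDK key comes from the LaunchDarkly environment you're targeting.
const client = LaunchDarkly.init('YOUR-SDK-KEY');

async function showNewFeature(userKey: string): Promise<boolean> {
  await client.waitForInitialization();
  // variation(flagKey, user, defaultValue): the default is served
  // if the flag is missing or LaunchDarkly can't be reached.
  const value = await client.variation('my-new-feature', { key: userKey }, false);
  return Boolean(value);
}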
How Do I Create a Feature Flag?
Starting with the UI console, LaunchDarkly configures a default project with two environments: Production and Test. We can alternate between these environments using the project drop-down at the top-left corner, or create a new project altogether under the account settings. We’ll stick with the default Production project for now, creating a new flag with the “+ Flag” button. | https://medium.com/avmconsulting-blog/feature-toggling-with-launchdarkly-f46bd0964666 | ['Ross Rhodes'] | 2021-03-21 10:31:08.885000+00:00 | ['Deployment', 'Launchdarkly', 'Feature Flags', 'Software Engineering', 'Software Development'] |
Moving Made Me Feel Like a Hoarder | Things got shoved into the closet and under the bed. Our kitchen was huge, and I took advantage of that space to buy every kitchen gadget I could. I’m sentimental and held onto items given to me by my parents and grandparents, tucking them away where they would be safe. I figured I would reference college and grad school textbooks and notebooks. Then, there’s my preference for physical planners and journals over digital. And books — I’m not too fond of eBooks and prefer to buy physical copies.
As I packed and then schlepped items to the various storage locations around the city, I fully understood the Marie Kondo craze and why people spent quarantine purging their houses.
I didn’t have the time to sort through everything as I packed, but as I unpack and venture into a new place, here’s what I plan to do and ask myself:
When Did I Last Wear This?
In America, we’re obsessed with fast fashion. We buy cheap clothes and lots of them, instead of a few key, quality pieces. I started to put together capsule wardrobes, especially for travel, but family often gifted clothing, and my closet grew.
Working from home for more than nine months taught me I don’t need 90% of my clothes. And packing each piece into more than seven suitcases and duffle bags (plus extra grocery bags) only proved my point.
When I unpack at my next location, I will take a moment with each piece and ask myself, “When did I last wear this?” If the last time I wore it was more than a year ago, it goes into the donation pile.
Does This Make Me Happy?
Marie Kondo’s method involves asking if something sparks joy, and this idea is similar. We often collect things for status or to “have” things, but do they bring us happiness?
My books and records bring me happiness and contentment, but those knickknacks, other things gifted for the sole purpose of gifting, and items purchased on impulse don't necessarily bring me joy. Instead, they either add clutter, which brings anxiety, or get shoved into storage to get out of the way.
For each sentimental or decor item, I’ll ask if it makes me happy. Does it bring joy or contentment into my life? If not, then it goes.
Is This Useful?
There is a trick I successfully use with my anxiety and stress, and it translates to clutter and stored items, too. When I feel anxiety or stress coming on, I ask myself whether those feelings and thoughts are useful.
As I unpack and venture out into stores post-pandemic, I will ask myself if what I'm unpacking or buying is useful. If I don't think it's something I can use, it will either go into the donation pile or stay on the shelf in the store.
The results will be two-fold: less clutter and less spending.
Will This Promote Hygge?
As a half-Dane seeking to quiet my mind and find more contentment, I’m bringing hygge into my life.
Scandinavian design is “marked by a focus on clean, simple lines, minimalism, and functionality without sacrificing beauty” (Apartmenttherapy.com). It’s everything I’m looking for in my next home — a lack of clutter and functionality.
Now, I’m not going out and buying all kinds of decor to promote hygge, but I’m making use of what I already own to create a cozy feeling. If something I have in those boxes won’t contribute to hygge, then it’s not something I want to keep. | https://medium.com/@kristijacobsen/moving-made-me-feel-like-a-hoarder-c07a3a82e578 | ['Kristi Jacobsen'] | 2020-12-27 19:32:40.154000+00:00 | ['Minimalism', 'Self Improvement', 'Home Improvement', 'Moving', 'Personal Development'] |
HOQU on EOS: Getting Started | Dear community!
The HOQU platform version on EOS is already running on the main network!
HOQU is an ecosystem for affiliate marketing that integrates webmasters, advertisers and the network into a single platform.
Key features and benefits of the HOQU platform on EOS:
The ability to create or register your network on the HOQU platform;
A common base of webmasters and advertisers, thanks to which advertisers and webmasters have the opportunity to interact with almost all networks on the platform;
The transparency and honesty of the actions of all platform participants through the use of smart contracts, as well as transaction security.
Low commission (0.5% of the reward for a confirmed lead)
Transparent rating system.
Registration on the platform
Registration on the platform is available via the following link: login.hoqu.io
Select the registration application appropriate for your role:
Webmaster
Advertiser
Network
The following authorization methods are available for registration in each application:
Registration by email
2. Sign Up with Scatter
Registration by email
To register using email, you must fill in the data in the window that appears.
The account name must be 12 characters long.
Note: EOS allows you to use unique names (account names) for token transfers.
If an account with the name you entered already exists on the EOS network, then you will see an error indicating that this account is already registered.
Accordingly, if the given name for the account is already taken, then it is necessary to choose another combination of characters.
If you want to use an account that already exists on the EOS network to work with the platform, then we recommend using Scatter to register with the application.
After successfully filling in the form, you must proceed to the next step.
2. Next, you need to generate keys and link them to your account.
Note: To use an EOS account as a wallet (for payment), you must bind at least one pair of private-public keys to it.
On the page that opens, click Generate Key, after which a pair of keys will be automatically generated, which will be associated with your EOS account upon completion of registration.
The generated key pair must be securely stored. Then click Register.
3. Email Verification
To complete the registration, you must confirm the email by following the instructions sent to the address you specified during registration.
After the email is confirmed, the account will be sent for approval. Why is this step necessary? To work with the HOQU platform on EOS, you must have an account registered on the EOS network. If the account you are creating is approved, the HOQU team will create an EOS account with the same name for you. You then get full access to the application.
Note: Use Scatter to register an account on the platform by skipping the approval step. You will thus be creating an account on your own.
In the future, you will also be able to log in to the platform using your email address to log into an account that you have already registered.
Scatter
Scatter is an analogue of MetaMask for the EOS network. It provides secure access to your wallet without the need to expose the private key each time.
To work with Scatter, you must have an account on EOS with keys attached to it. We will describe the scheme of working with Scatter below in more detail.
Install and run:
Follow the link, download, install and run Scatter.
When you first start, you need to think up and enter the password twice, which you will need later to enter Scatter.
Next, Scatter will generate 12 words that need to be copied and stored securely.
After agreeing to the terms of use, you will be asked to make a backup, with which you also need to agree.
2. Import the keys.
To get started, Scatter will ask for a key; agree by providing one of your keys.
Click Add Keys →Import key →Text →Enter the private key from the key pair → Confirm.
After successful import of the key pair, a window will open where you can view information relating to these keys, as well as carry out settings.
The Key name field will contain the automatically generated name for the key, which, if desired, can be easily changed, as was done in the example.
Click on the Back button, which is located in the upper left corner, and go to the main window. In the field to the left, you will see the Key name of the key just imported. The number of linked accounts will be indicated below.
Scatter allows you to work simultaneously with multiple accounts. If you need to add another account, click: Add Keys →Import key → Text → Enter the private key of the key pair associated with the account you want to add.
3. Creating an account in the HOQU app using Scatter.
Select Scatter as the authorization method.
If at least one account is configured in Scatter, a window will appear allowing you to select an account for authorization. If Scatter remembers your account, then this step will be skipped.
For authorization, you must click on login near the selected account.
Thus, if the account is used for authorization for the first time, then after authorization it will be associated with the role under which you have entered. For example, if you are logged in as a merchant for the first time, then this account can later be used only for authorization as a merchant. To enter another application / role, you will need to create another account.
After selecting the account in the window that appears in the Scatter, click Allow.
Note: secret is a random number generated automatically at each authorization. Used by internal platform logic to verify account ownership.
After you give permission, the application that interests you will open.
How to change an account to access another application.
With successful authorization, Scatter will remember your authorization. If you open Scatter, you will find information about it on the main application window.
This means that Scatter remembered the account that was used for actions on login.hoqu.io.
Therefore, if you want to use another account, you need to delete this data from Scatter. To do this, just click on the icon that resembles a trash can next to the saved authorization method.
How to get started with the platform
HOQU for Affiliates
To work with the platform, you need to:
1. Fill in the profile data (Account → Profile).
2. Join the network (more details in the Affiliate Networks section).
3. Create an advertising campaign for a specific offer of the network.
After the campaign is created, the affiliate will receive an affiliate link to which the traffic is sent as part of this campaign. Users following this link to the merchant’s website will take actions, including targeted ones, and the affiliate will receive rewards for them.
Leads in the HOQU platform are recorded at the moment the target action is performed; the parameters of that action are defined in the conditions of the offer.
HOQU for Merchants
To work with the platform you need to:
1. Fill in the profile (Account → Profile).
2. Join the network (more details in the Affiliate Networks section).
To join the affiliate network, you must select a network and send a request to join the network. The application will be immediately sent to the representatives of the selected network. After the network approves the application, the network manager will contact the merchant.
3. Provide data for creating offers
The network you joined creates an offer for you. The merchant must provide all the materials to create an offer.
4. Confirm the offer created by the network, to activate the offer.
5. Conduct integration, or add integration codes from the HOQU platform to your resource. Integration instructions will be available later in the Help center, as well as on the offer card.
6. Have a positive balance on your wallet.
HOQU for Affiliate Networks
To start working with the platform you need to:
Fill in the network data.
To work on the HOQU platform, the network owner must fill in the network data.
2. The network can set up special settings for joining, as well as add managers to its network.
You can view all applications for joining your network in the Participants section and accept the merchant’s / affiliate’s application, or reject it altogether.
3. Only the network can create an offer in the system, which subsequently becomes attached to a specific merchant. The offer itself is created on the basis of information provided by the merchant. Before the offer is made available to affiliates, it must first be confirmed by the merchant.
After the offer is activated, the affiliate will be able to create an offer campaign and get an affiliate link. Information about existing campaigns can be found in the Campaigns section.
When creating an offer, the network can indicate in the settings Working conditions: Publicity. In this case, when the affiliate creates the campaign, the campaign will first be moderated. The network conducts moderation and only after that the affiliate gets an affiliate link.
Key sections of the application
1. Leads
A lead is a target action of a potential client, which is defined in the offer.
To view information about leads, go to the Leads section. This page will display a list of your leads with such data as: ID, status, date, offer, payout.
The merchant/ affiliate/ network can view information on leads in the “Leads” section and reports on this section by day and by offers.
2. Dashboard
The merchant/ affiliate/ network can view basic information in the Dashboard section, which shows general information on the system in the form of graphs (earnings / payments / merchant’s costs/ affiliate’s earnings / conversion) and tables.
3. In the Analytics section you can see the earnings / payments and conversion analytics.
4. Offers
The offer is an merchant’s advertising offer, which is entered into an affiliate network.
The offer is created for the merchant via an application to the network.
The offer includes:
Basic data (name, category, type, description, rules)
Links
Conditions of work (level of publicity of the offer, Level of the affiliate, waiting time)
Settings (country, status, traffic sources, logo, targeting, limits)
5. Affiliate networks
How to join a network?
To join a network, go to Networks → All networks.
In the general list of networks, it is possible to view information about each network and join the one of interest. In order to join a network, you must click on the Join button on the network card. If the network has the Auto Confirmation setting, then the merchant automatically joins the network. If this setting is not present, then the application for connection will be immediately sent to the network. The merchant can work with different networks.
The network can set the settings for joining. If you do not meet the requirements of the network to join, you will not be able to join this network.
How to disconnect from a network?
In order to disconnect from the network, you must select Networks → My Networks.
On the network card you want to leave, click on the “Leave” button.
If you already have offers on the network you want to leave, then all active offers will be deactivated when you disconnect from the network.
6. Tickets
If any questions arise, the merchant/ affiliate can ask the network using the ticket system (you can create a ticket using the side menu of our interface: Support →Tickets section). In addition, each affiliate or merchant is assigned a network manager, whose contacts are displayed in the Support → Personal Manager section, which can also be contacted to resolve issues.
All questions relating to the operation of the platform, suggestions and comments can be sent to the email [email protected]. In addition, for a quick consultation, the affiliate networks can use the chat in the application for networks,that will be available in the near future. | https://blog.hoqu.io/hoqu-on-eos-getting-started-25b3a04ec8a2 | [] | 2019-04-29 10:32:05.050000+00:00 | ['Affiliate Marketing', 'Getting Started', 'Blockchain', 'Eos', 'Hoqu News'] |
The Jimmy Butler Story: Why The Sixers Conflict Will All Be For Good | from Philadelphia 76ers on Instagram
The trade for Jimmy Butler to my Philadelphia 76ers was one that cemented the status of the Sixers as an established team with three superstars. While I was dismayed that we gave up core pieces of “The Process” in Robert Covington and Dario Saric, Butler adds an undeniable mental and athletic edge to the young and growing team, especially with Markelle Fultz’s mysterious shoulder injury.
Finally, we have three players with star credentials in Ben Simmons, Joel Embiid, and Jimmy Butler. But naturally, with all-star talent comes all-star level egos, and Jimmy Butler recently made headlines where he allegedly “aggressively challenged” Coach Brett Brown during a team film session about his role in the offense, where some who witnessed the confrontation called it “disrespectful.”
Already, about two months after trading for Butler, alarm bells have been ringing off in the minds of analysts and critics of the team. This was why he was traded from the Timberwolves, they said. This is why Jimmy Butler is a “locker room cancer.”
But let’s take a step back and look at the story of Jimmy Butler and how he came to be the star of Jimmy Butler we know today, in one of the stories of transcendent ascendance we have rarely seen in the NBA.
Jimmy Butler was once homeless in Tomball, Texas, a town outside of Houston, as a teenager. His father abandoned the family when Jimmy was an infant, and his mother kicked him out of the house at 13, telling him, "I don't like the look of you. You gotta go." With no means of living on his own, Butler survived by staying with the families of different friends. Every couple of weeks, Butler had to move to another family's home.
His senior year of high school, he moved in with the family of his best friend, Jordan Leslie (now in the NFL). Leslie’s mother, Michelle Lambert, was initially reluctant to take him in: Butler didn’t have the best reputation in Tomball, and the family struggled with finances having to take care of seven kids. But they took him in under the condition that he was a good role model for the younger kids in Lambert’s family. She would be a guiding force for the entirety of Butler’s life.
“That’s my family. That’s Michelle Lambert. She is my mom,” Butler said of her.
His story is often equated to the basketball version of The Blind Side, a movie about the life of Michael Oher, an NFL offensive tackle. The biographical movie features Oher’s homelessness as well, his adoption by the Tuohy family, and subsequently becoming a first-round draft pick in the NFL.
But although Jimmy Butler graduated high school a talented basketball player, he didn't play in the AAU. He went to Tyler Junior College before transferring to Marquette on scholarship. He received offers from many elite basketball programs, like Clemson, Kentucky, and Marquette, but ultimately, Lambert persuaded him to go to Marquette.
“That’s a great academic school. I told him he should go [to Marquette] because basketball may not work out long-term. He needed a good education and a degree to fall back on.”
At Marquette, Coach Buzz Williams stated that he’d never been harder on any player as much as he was on Butler. According to Williams, “I was ruthless on him because he didn’t know how good he could be. He’d been told his whole life he wasn’t good enough.” He eventually became an elite college player who could seemingly do everything on the court: rebound, defend every position and lead the team.
Despite his adverse circumstances, Butler did not succeed because of the pity other people had for him, but rather through courage. When Butler was going through the draft combine in 2011, he urged Chad Ford of ESPN not to frame his life story in a way that made people feel sorry for him.
“I hate that. There’s nothing to feel sorry about. I love what happened to me. It made me who I am. I’m grateful for the challenges I’ve faced. Please, don’t make them feel sorry for me”
The unfavorable circumstances were turned into positives for Butler. Because of what he's overcome, he believes, and inspires others to believe, that they can overcome anything. I was inspired by Butler's story in the mere writing of this article, and even more so by his career. This is a man who went from averaging 2.6 points per game his rookie year to becoming a four-time All-Star and four-time All-Defensive player. Butler's individual work ethic, determination, and drive are among the best in the NBA. I believe it is true that he himself can do anything.
But can he will other people to do so? Is his intense style too headstrong for the players and coaches around him? It certainly was in Minnesota. But the media is also certainly reading too much into a small conflict and blowing it up. The Sixers have obviously been through their own share of adversity through the Process, evident in 10 and 18 win seasons. A team that has persisted and grown in the face of adversity, like the Sixers, can certainly withstand some drama of this small magnitude, and the expectation that a team clicks within two months of having its pieces together was unrealistic in the first place. If there’s any coach I trust to get these pieces together, it’s Brett Brown.
Since Jimmy Butler joined the team, the team has gone 16–5. He is clearly helping the team win, and I have complete faith that Butler and the Sixers will be fine, and that any growing pains the team goes through will all be for good. | https://ryanfan.medium.com/the-jimmy-butler-story-why-the-sixers-conflict-will-all-be-for-good-568e172d93a7 | ['Ryan Fan'] | 2019-04-12 04:01:02.097000+00:00 | ['Sports', 'Inspiration', 'Basketball', 'NBA', 'Sixers'] |
Focal Arche headphone DAC/amp review: It doesn’t get much better than this | It’s been less than 10 years since Focal entered the headphone market, but in that short time, the company has established itself as one of the preeminent makers of ultra-high-fidelity cans. TechHive has reviewed four models so far—the Clear, Elegia, Radiance, and Stellia—and all were judged to be excellent, though they will set you back quite a pretty penny.
To round out its headphone-related portfolio, Focal recently introduced the Arche DAC/headphone amp. Does it occupy the same rarefied heights of performance as the company’s cans? Is it a match for the glorious Stellia (which I had on hand for this review)? The answer is a resounding yes!
If you can pull the trigger before the end of 2020, and you already own a Focal Clear, Stellia, or Utopia headphone, Focal will give you a $1,000 voucher that you can apply to your Arche purchase. You'll find more details on that at the end of this review.
Features
The Arche is a solid brick measuring 7.8 x 2.5 x 11.4 inches (WxHxD) and weighing a hefty 10.25 pounds—the build quality is obviously of the highest order. Inside, the electronics are no less impressive. The DAC (digital-to-analog converter) is an AK4490 from Asahi Kasei Microdevices that provides two channels of conversion for PCM up to 768kHz at 32 bits and DSD up to 11.2MHz (aka DSD256). The Arche's inputs, however, have somewhat lower limits, which I'll discuss shortly.
Focal The Focal Arche is a solid brick of high-end electronics.
One feature that’s missing is the ability to decode MQA (Master Quality Authenticated) files. MQA is a lossless encoding scheme developed by Meridian that reduces the size and bandwidth requirements of high-resolution audio files. MQA titles from a provider such as Tidal must be decoded before being sent to the Arche.
True to its audiophile aspirations, the amplifier section is a completely dual-mono, pure Class A, fully balanced design that provides up to 1 watt/channel at 1 kHz for headphones with an input impedance of less than 32 ohms. The amp can drive impedances from 16 to 600 ohms with a frequency response from 10Hz to 100kHz, THD less than 0.001%, and signal-to-noise ratio greater than 116dB at 32 ohms. Those are some seriously impressive specs!
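For a rough sense of what that rating means (my own back-of-the-envelope arithmetic, not a figure from Focal's spec sheet), 1 watt into a 32-ohm load works out to roughly:

V_{\mathrm{rms}} = \sqrt{P \cdot Z} = \sqrt{1\,\mathrm{W} \times 32\,\Omega} \approx 5.7\,\mathrm{V}

That's far more voltage swing than most headphones need to reach uncomfortable listening levels, which squares with the amp's ability to drive anything from 16 to 600 ohms.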
Interestingly, the Arche offers several presets that tailor the amp in various ways. For example, there are presets that match the impedance of the amp to the impedance of five Focal high-end headphones—Clear, Elear, Elegia, Stellia, and Utopia. In addition, there are two additional presets: Voltage and Hybrid. As you might expect, the Voltage setting puts the amp in voltage mode, while the Hybrid setting is a combination of voltage- and current-mode amplification. According to the company, the Voltage setting is designed to sound tube-like, while Hybrid is supposed to provide more of a solid-state sound.
On the back panel are three digital-audio inputs—coax and optical Toslink S/PDIF and a USB-B port—along with a pair of unbalanced RCA analog-audio inputs. Also on the back are a pair of balanced XLR outputs and a pair of unbalanced RCA outputs, which let you use the Arche as a standalone DAC in a 2-channel audio system. Rounding out the back panel is a USB-A connector that is used to update the firmware, a power on/off switch, and an AC power-cord receptacle.
Focal The front panel (top) includes a balanced 4-pin headphone output and unbalanced 1/4-inch headphone out, display, and a multifunction knob for volume control and menu selection. The back panel holds (L-R): USB-A port for firmware updates, USB-B input for digital audio, coax and optical digital-audio inputs, RCA stereo analog-audio input, stereo XLR balanced outputs, and stereo unbalanced RCA outputs.
The coax and optical inputs are limited to PCM digital-audio resolutions up to 192kHz at 24 bits. I tried to find out the maximum PCM resolution of the USB input, but Focal did not respond to this question by the time this review was due. DSD can be accepted only by the USB input. In all cases, the digital-audio signal is converted to 384kHz/32-bit PCM internally.
Mentioned in this article Focal Elegia Read TechHive's reviewMSRP $899.00See it The front panel has a center-mounted electroluminescent (EL) display with a large multifunction knob to its right and two headphone outputs to its left. In its default mode, the display shows the volume setting and selected input, while the knob adjusts the volume. When you adjust the volume, the display also reveals the gain setting (low or high) and the PCM sample rate of the incoming signal. Press the knob twice to display the main menu, turn the knob to select the parameter you want to tweak, and press the knob to select that parameter.
Actually, there are two pages of parameters. The first page includes input selection, gain (low/high), phase (normal/reverse), and amplifier (which lets you select an amplifier preset). The second menu page lets you control the brightness of the display, enable or disable sleep mode after a period of inactivity, reset the unit to factory condition, and display the firmware version and serial number of the unit.
The two headphone outputs include a standard 1/4-inch unbalanced output and a 4-pin balanced XLR output to use with the corresponding cable included with Focal’s headphones. When I reviewed the Stellia, I assumed the connection to each earcup was not balanced because the connector to the earcup has only two conductors, which my contact confirmed. At that time, it didn’t really matter, since I was using the unbalanced cable anyway.
Focal The Arche comes with a solid-aluminum headphone stand that lets you hang your cans with the amp (the Stellia are featured here).
With the Arche, however, I would use the 4-pin balanced cable, so I wanted to verify that the headphone itself does not have balanced internal wiring. This time, the company said the internal wiring is, in fact, balanced. Wait, what? I finally got the story straight after talking with the Focal product manager.
With that 4-pin cable, two of the pins carry the positive and negative signals for the left channel and the other two pins carry the positive and negative signals for the right channel. The two conductors on the connectors for each earcup convey the positive and negative signals for that channel to opposite ends of the voice coil, which is the definition of a balanced configuration. By contrast, in an unbalanced connection, the voice coil is driven only by the positive signal; the negative ends of both voice coils are tied together and to a common ground.
One nice touch is the solid-aluminum headphone stand that comes with the Arche. You insert it into one of the slots on the top of the unit, and you can hang your headphones on it so they don’t get lost.
Connection, Settings, CablesI started by connecting the Stellia headphone to the Arche using the 4-pin XLR cable. Next, I connected my iPhone XS to the Arche’s USB input using a Lightning-to-USB camera adaptor and a USB-A-to-USB-B cable. When I turned on the Arche, the phone reported that the device requires too much power and wouldn’t connect. Why would the Arche require any power at all? It’s plugged into an AC wall socket.
When I asked Focal about this, they said this is a known issue with some DAC/amps, though they are not sure why it happens. They recommend using the Apple Lightning-to-USB 3 camera adaptor, which has a separate Lightning port that you can connect to power. This is pretty kludgy, and I doubt that many people will use the Arche with their iPhone anyway.
Mentioned in this article Focal Radiance Read TechHive's review$1,290.00MSRP $1,290.00See iton Headphones.com So, I connected the Arche to my iMac via USB and played tracks from the Tidal Master library using the Tidal app, which worked fine. During my initial listening, I tried different amp presets, but I heard virtually no difference at all. The Clear preset might have been just a tad brighter than the others, but the difference was so tiny that it could easily be dismissed. I suspect it would make a bigger difference with headphones that have a much higher impedance.
I also tried the Voltage and Hybrid settings. The Voltage setting was a bit louder and richer, and the sound was slightly more present. I ended up sticking with the Stellia preset for most of my listening, but I could definitely recognize how the Voltage setting might be appealing.
In addition, I tried the low and high gain settings. As expected, they sounded the same except for level; I could easily match the perceived level at both settings with the volume knob. I was happy to discover that the Arche comes out of sleep mode with the volume set to 20, no matter what the level was when it went to sleep, which is great to avoid unpleasant surprises.
My last comparison was between the balanced and unbalanced cables. Again, the difference was very minor. The balanced connection sounded a bit more open and present, but not by much. Still, I recommend using it with the Arche.
Focal The Focal Arche DAC/amp is an elegant bit of kit.
Music timeIt’s December as I write this, so I started with Jacob Collier’s new single, “The Christmas Song.” This is a rich, dense, a cappella arrangement that’s classic Collier with just a bit of synth bells and a melodica solo. It exhibits a wide pitch and dynamic range, and the Arche rendered everything beautifully. The lead vocal was entirely natural, and the backing vocals were perfectly balanced in a clear, open presentation.
I’m a big fan of Donald Fagen, co-founder of Steely Dan, so I cued up the title track from his 2006 solo album Morph the Cat. It starts with a low bass line, which sounded deep and rich from the Arche. The vocals, horns, guitar, electric piano, and drums were similarly exquisite—well-balanced with superb imaging.
Mentioned in this article Focal Stellia Read TechHive's reviewSee it For some relatively out-there jazz, I listened to “Autumn Pleiades” from Dimensional Stardust by Rob Mazurek and Exploding Star Orchestra. This piece is in the musical form of a canon played by a large jazz orchestra, slowly building by adding instruments and melodic variations over a repeating bass line and harmonic progression. All instruments were clearly delineated, yet they formed a cohesive whole in a clean, open sound stage.
Solo piano is always a challenge for any audio system, so I listened to “Over the Rainbow” from Dave Brubeck’s album Lullabies. The sound of the piano was rich, well-balanced, and open with no hint of congestion.
One of my favorite discoveries this year is “Lonely Alone” from the album Threads by Sheryl Crow. On many of the tracks, she’s joined by famous singers—in this case, Willie Nelson. It’s an amazing mix, deep and immersive, almost as if it’s in surround. The vocals by Crow and Nelson were entirely natural and right up front, while the rest of the instruments, including deep bass, guitar, brush drums, organ, and harmonica, were clearly delineated within a wonderfully cohesive whole. This is how to mix a country song!
Focal You can dock the included headphone rest to the top of the Focal Arche AC/amplifier.
And now for something completely different: “Scuba Scuba” from Underwater Sunlight by Tangerine Dream. This rhythmic ambient track is almost entirely electronic with a wide frequency range and lots of stereo effects. The Arche rendered it all beautifully: clean, clear, and open.
For some classical music, I listened to the first movement of Vivaldi’s Concerto for Two Mandolins in G Major as performed by Avi Avital, Alon Sariel, and the Venice Baroque Orchestra on Art of the Mandolin. As I had come to expect, the sound was clean and open; I could hear each mandolin clearly along with each section of the orchestra. Even the super-low notes from the theorbo came through beautifully.
Finally, I cued up “The Great Gate of Kiev” from Mussorgsky’s Pictures at an Exhibition as orchestrated by Ravel and performed by the Berliner Philharmoniker under the direction of Simon Rattle. This piece has wide dynamic range from super-quiet passages to a crashing finale, and I could hear each section and solo instrument clearly. I was surprised, however, that the overall sound was a bit restrained—not veiled or congested, just not as present as I had heard on other tracks.
I wondered if it was the recording, so I played another big orchestral favorite, “Pines of the Appian Way” from Respighi’s Pines of Rome as performed by Filharmonica della Scala under Riccardo Chailly. Much better! The overall sound was more present and unrestrained, and the almost subterranean bass drum came through beautifully.
Bottom lineThe Arche is a worthy companion for any of Focal’s high-end headphones as well as just about any headphones you care to use with it. Its sound quality is impeccable: clean, clear, open, and utterly neutral. Every track I played sounded completely natural with no congestion, wide dynamic range, and effortless reproduction throughout the entire audible frequency spectrum.
The feature set is equally impressive. It offers impedance-matching presets for Focal’s headphones along with other settings to optimize the output for a wide range of cans and a variety of inputs and outputs. It can even act as a standalone, fully balanced DAC for speaker-based 2-channel audio systems. The only thing missing is MQA decoding.
As you might expect, all that capability and performance doesn’t come cheap: The Arche’s list price is a whopping $2,490. But if you’re thinking about investing in one of Focal’s high-end headphones and a comparable headphone amp, the company is offering some great package deals on Amazon through the end of the year. You can get the Arche and Clear for $3000 (a savings of $980 off the separate prices), the Arche and Stellia for $4,000 (a savings of $1,480), or the Arche and Utopia for $5,000 (a savings of $1,480).
And if you already own one of those headphones, Focal is offering a $1,000 voucher toward the purchase of an Arche through the end of 2020; click here for details.
If you’re a headphone enthusiast with very deep pockets, the Focal Arche is a worthwhile investment in your listening pleasure. And if you also have a high-end 2-channel audio rig, the Arche can serve double duty as an outstanding DAC with a fully balanced output. That’s two components for the price of one, which makes it a smart investment in my book.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details. | https://medium.com/@maria73182002/focal-arche-headphone-dac-amp-review-it-doesnt-get-much-better-than-this-3c53e31b087d | [] | 2020-12-22 08:17:28.545000+00:00 | ['Surveillance', 'Music', 'Chargers', 'Home Theater'] |
C++ Basics — Part I (for beginner’s) | C++ is a general-purpose programming language.
(and a lot more)
Before diving into code you must have compiler installed in your system.
Compiler — A program that converts High Level Language(HLL) into Low Level Language (LLL).
HLL :
int fib(int n) { int store[n+2];
store[0] = 1;
store[1] = 1; int i;
for(i=2;i<n;i++) {
store[i] = store[i-1] + store[i-2];
} return store[n-1];
} // Any language that you can code in, and is very close to "English" (a natural language) which is easy for a human to read, write and understand.
LLL :
mov edx, [esp+8]
cmp edx, 0
ja @f
mov eax, 0
ret // Commands or functions in the language map closely to processor instructions.
Windows Users : Install CodeBlocks (IDE) from here
Linux Users : Install g++ (highly recommended) and a CodeEditor (Sublime or VSCode)
Lazy Users : (yes lazy) : Can use online IDE’s , hackerearth, ideone .. etc. “I recommend you to always refrain from online IDE’s especially during a contest”
before we start, you must know that a C++ file has extension ‘cpp’, while a C file has extension ‘c’.
Let’s Start
#include<iostream>
using namespace std; int main() {
// A simple program to print "Hello World"
cout<<”Hello World!”;
return 0;
} Output: Hello World!
Now with explanation
#include<iostream> // iostream is a header file, it contains
// definition of cin & cout
using namespace std; // if this were not used then we'd have to
// write std::cout<<"abcd"; in the program
int main() { // declaration of function with name
// 'main' and return type of integer.
// the definition is found within "{}"
cout<<"Hello World!"; // cout prints output on the terminal
// notice - '<<' this is an operator
// notice - '//' statements beginning with
// these characters do not get executed,
// they are used for comments
return 0; // function main would return 0
// (an integer)
}
Let’s Math
#include<iostream>
using namespace std; int main() {
cout << "3 + 2 = " << 3+2 << endl;
cout << "3 - 2 = " << 3-2 << endl;
cout << "3 * 2 = " << 3*2 << endl;
cout << "3 / 2 = " << 3/2 << endl;
cout << "10 / 3 = " << 10/3 << endl;
cout << "10.0 / 3 = " << 10.0/3 << endl; return 0;
} OUTPUT :
3 + 2 = 5
3 - 2 = 1
3 * 2 = 6
3 / 2 = 1
10 / 3 = 3
10.0 / 3 = 3.33333
Notice 3 / 2 = 1 and 10 / 3 = 3, yes you guessed it correct, only the integral part is retained.
This happens because by default data-type is integer. Unless specified like in example 10.0 / 3 = 3.33333 (any operation that includes a decimal will result in one)
Book : Balaguruswamy OOPs with C++ (recommended)
Please go through chapter 3, 4 for a detailed content.
“Recommended”
All the chapters are important and would be covered in your course in span of one to two semesters. I cannot cover everything in this blog hence provided you with a resource.
Topics that you must know before moving forward are:
Variables (data type) Loops (for, while, do) IF ELSE Arrays & Matrix Functions
Important: To learn C/C++ you must read a book and run your code.
There is no other easy way.
(do not watch videos for this, start reading the book)
The next part would focus on getting a good grip on the language. | https://medium.com/programming-club-nit-raipur/c-basics-part-i-for-beginners-6c147101b35b | ['Aditya Agrawal'] | 2018-09-24 01:40:30.726000+00:00 | ['Technical', 'Students', 'Programming', 'Programming Club', 'Nit Raipur'] |
Harvard CS 50x — Week 9 (Flask). (This is a summary of week 9 from the… | (This is a summary of week 9 from the Harvard CS 50x series. Visit this page to see more from this series! )
It’s currently a Friday night in London, and I’m feeling a little exhausted. Really looking forward to the holidays now!
However, before I distract myself by dreaming too much of the upcoming holidays, I’m spending my Friday night catching up on the second last video of CS50x. This week’s focus was Flash.
Summary
Flask → A web app framework mainly used for Python
Design Pattern (Ways that humans write code)
MVC (Model, View, Controller)
Flask has certain organisational recommendations
Example of an organisational recommendation in Flask
“flask run” → starts flask app
GET → request.args
POST → request.form
Reusability of code
Layouts + use of Jinja
Froshims
Error checking
flask_mail
Sessions
Example of sessions represented by a “stamp on the hand”
Cookies
Logging in / out
JavaScript
“%” → wildcard for SQL commands
JSON(JavaScript Object Notation) → jsonify (takes a python list and returns it in a JSON format)
jQuery
AJAX
Thoughts
This week was really straightforward for me as the principals covered have already been explained to my previously during my Bootcamp course with Flatiron School. Nonetheless, it was intriguing to see everything written in Python using Flask. Can’t believe I only have 1 week of CS50x left - then I’ll be done :)!
Links to Assignments
No assignment this week. | https://medium.com/@ja9-look/harvard-cs-50x-week-9-flask-b89912bf63bf | ['Janine L'] | 2020-12-11 20:51:42.052000+00:00 | ['Computer Science', 'Cs50', 'Software Engineering', 'Flask', 'Python'] |
100% Committed to Open Source, YugabyteDB Community Update — June 11, 2020 | While it is still early days, it is exciting to see the YugabyteDB community hit some cool milestones! We wanted to share them with you, plus update you some additional community related news.
Our Commitment to Social Justice
First things first. Recent events that resulted in the death of George Floyd at the hands of police have been traumatic and painful for many of us, nationally and globally. It’s clear there is an overwhelming need for racial justice, equality, and the end to violence. At Yugabyte, we believe in equality for all; we do and will remain forever true to this pledge.
Our Commitment to Keeping Our Employees, Community, and Customers Safe
COVID-19 is having a devastating effect on people and communities around the globe. Health and safety remain a top priority for us. As such, we have been and will be conducting all business virtually until it is safe otherwise; this applies to all employee, community, and customer interactions, as well as Yugabyte job interviews, so we can all continue doing our part with social distancing and containment efforts.
Our Commitment to Open Source
YugabyteDB doesn’t pay lip service to open source like other Distributed SQL projects. We don’t trick you into downloading software that has time-bombed enterprise features that you can “try,” but ultimately have to “buy”. Advanced features like read replicas, encryption, distributed backups, change data capture, and multi-master async replication are all available under the Apache 2.0 license, not a fake open source license like Business Source License. To learn more about our commitment to open source, check out, “Why we changed YugabyteDB licensing to 100% open source”.
4,000+ GitHub stars
We recently crossed the “4000 GitHub stars” milestone on our GitHub repo! A huge thanks to everyone in the Yugabyte community who continue to encourage us onward with constructive feedback, bug reports, and feature requests. If you dig YugabyteDB, it’s simple to show us some love, just add your name to the Stargazers board here.
100+ open source contributors
We recently crossed the “100 contributors” milestone on GitHub and, as of this writing, now have 102 individual contributors who have helped make the core database software more functional, faster, and more reliable. We’d like to take a moment to highlight three recent contributions.
Are you ready to make a contribution? Check out our contributors documentation on how to get started.
It’s worth noting that the first couple of these contributions came from students who were inspired to contribute after sitting in on a virtual guest lecture I gave about Extending PostgreSQL to a Google Spanner Architecture at the University of Texas at Austin. We’ve also recently given deep dive tech talks at Twitter and Pinterest.
If your engineering team wants to get a better understanding of YugabyteDB’s architecture, PostgreSQL-compatibility, resilience characteristics, plus guidance on how to perform benchmarks, drop us a line to set up a tech talk at your company. These FREE virtual sessions are normally an hour long, delivered over Zoom and tailored to your areas of interest.
1150+ community Slack members
We know that development teams working on fast moving projects with tight timelines need to get answers to their questions quickly from the folks who are actively developing YugabyteDB. They also want to exchange learnings with experienced community members who have “been there and done that.”
YugabyteDB’s Slack channel is the community’s “center of gravity” on the Internet for helping make everyone’s project a success. You can join the conversation here, our community is standing by!
Our Commitment to Giving Back
Community-powered charitable giving
With the unprecedented circumstances facing our global community, we looked for even more ways to contribute. Now that events and meetups are virtual for the foreseeable future, in lieu of our in-person events budget-including food, drinks, and swag-we’re donating those funds to charity. At the end of our virtual events, we ask attendees to vote for their favorite charity, and then we make a donation on their behalf. For example, after the recent PostgreSQL virtual meetup, we made our first donation of $200 to No Kid Hungry to help feed children who are missing meals due to school closures.
Our Commitment to Creating Opportunities
We are hiring!
We know that there are people out there looking for work, and we are grateful to have some open positions. If you share our beliefs on equality for all, and want to help us on our journey to becoming the default database for the cloud, check out our open positions, including roles such as Developer Advocates, Customer Success Engineers, and more.
Our Commitment to Rewarding the Community
Announcing the YugabyteDB Community Heroes program
We are excited to announce our newest community program — Community Heroes! By now we hope you can tell that community is core to what we do at Yugabyte. We are excited to launch the YugabyteDB Community Heroes program to recognize and reward rockstar community members for their amazing contributions and achievements. We work alongside you in Slack, in the Forums, in GitHub, and are inspired by and grateful for your contributions.
YugabyteDB Contributors — Community members who help us improve YugabyteDB through code, documentation, or helping others.
YugabyteDB Heroes — Experienced contributors who consistently make high-quality contributions and champion YugabyteDB within other communities.
YugabyteDB Super Heroes — Community leaders who take the YugabyteDB community to the next level in their local area.
The engineering, community, and marketing teams at YugabyteDB are on the constant look out for community members to nominate for a reward, so now is a great time to get involved!
Installing YugabyteDB has its rewards
In less than a year, we have mailed over 500 limited edition t-shirts destined to over 55 countries to sharp-eyed developers who have spotted the easter egg hidden in plain sight in the YugabyteDB admin UI. Want a reward for yourself? It’s simple, just install YugabyteDB on the platform of your choice and have a look around.
Our Commitment to Supporting Open Source Communities
YugabyteDB integrates with your favorite open source projects
When open source matters, it is important that YugabyteDB plays nice with your favorite open source projects and tools. Every month we announce several open source integrations, so this list is not comprehensive, but should give you an idea of the sorts of projects that integrate with YugabyteDB that have been demonstrated in our Docs, blogs, and video library.
Community events
Whether it is a large event or a local Meetup, Yugabyte is committed to supporting the people and projects that make the YugabyteDB community and open source prosper. Join us at one of these upcoming events:
You can find all of our events here. Want to invite us to speak at a future conference or Meetup? Contact us. | https://medium.com/yugabyte/100-committed-to-open-source-yugabytedb-community-update-june-11-2020-the-distributed-sql-b15f812b36ae | ['Karthik Ranganathan'] | 2020-08-04 16:22:04.417000+00:00 | ['Database', 'Open Source', 'Software Development', 'Sql', 'Hiring'] |
Training workshops — from colleagues for colleagues | We kept pondering that question quite some time. We know we have some very bright people amongst our colleagues. And we also have more than enough colleagues willing and able to learn. So how to combine these two factors?
Some background information
We were experimenting with this for years now. To understand why this is not an easy question you need to understand how we are operating — or better, how we are invoicing:
Bauer + Kirch started out with a product which was — and still is — sold to customers via licensing. But most of our revenue is made with individual development for our customers. That means that we are not selling a product or licenses but time and material.
So every minute not invested into a project is a minute not bringing in money.
Although this is somewhat true for product companies as well it is not that visible when your license fees keep coming in. If you stop investing time and effort into your products your customers will cancel their subscription at last — but it will take some time for them to become that annoyed.
We needed to find the balance between spending every minute on projects (and therefore stagnate in matters of technological expertise) and learning all the things we would like to improve ourselves in (but neglecting out projects and customers).
Discovering new ways
Things changed a few years ago when we restructured most of our teams using Scrum.
Our Sprint Planning is based on a mix of velocity and team capacity: We usually sum up the time of every team member spent at work during a sprint and the amount of work done, i.e. burned User Stories. For upcoming sprints we estimate how many hours we expect the team to work (subtract vacations, sickness and such) and — et voilà — we “know” how many User Stories we want to tackle this sprint. | https://medium.com/bauer-kirch/training-workshops-from-colleagues-for-colleagues-ad2709cecc20 | ['Patrick Rölike'] | 2020-04-23 10:28:11.320000+00:00 | ['Improvement', 'Workshop', 'Scrum', 'Training', 'Agile'] |
How to Write Code With Your Kids at Home | CORONAVIRUS
How to Write Code With Your Kids at Home
Are you locked in at home with kids? Here are some tips to help you stay sane, do some work, and still leave a smile on your kids’ faces
Photo by Allen Taylor on Unsplash.
Worst-case scenario: You are alone, your kid is five years old and full of energy, and your company successfully switched to remote working or you are a freelancer already working from home.
Everything would work out fine if your kid could stop storming in every five minutes with a question or request:
I’m hungry.
I’m angry.
I’m lonely.
I need to poo.
Mommy, Daddy, I have a million ideas…
And we all know that interruptions don’t help anyone write code.
To really get anything done, you need at least an hour of quiet time. And with the coronavirus here to stay, what can we do? Things still need to be done. Bugs need to be removed, urgent things need to be handled.
So here are some tips to help you stay sane, write some code, fix some bugs, and still keep your kid happy: | https://medium.com/better-programming/how-to-write-code-with-your-kids-at-home-f7959fa22dbe | ['Jana Bergant'] | 2020-04-29 11:49:02.416000+00:00 | ['Startup', 'Productivity', 'Kids', 'Programming', 'Parenting'] |
Step Two After a Job Loss. | So, you got laid off. Or fired. Or quit. Regardless of what led you to this moment, you’re here now and looking for a new job. Maybe?
You’re in medias res. In the middle of things.
Photo by Kevin Wang on Unsplash
I have a spectacular track record of looking for work. In the 25 years since my first job, I’ve held 22 different titles in nine different cities in two countries. I’ve worked for companies as small as five and as large as 15,000. Private, public, big money, not-for-profit, thriving and dying. I’ve been hired, fired and early-retired. I’ve quit, I’ve re-negotiated, I’ve re-located, I’ve turned down offers and have certainly been turned down, too.
My resume is everything that your parents likely warned you against, but the tapestry of industries and exposure I have had in my career is firmly part of my brand now. After getting to peek behind the curtain of several different organizations, I have one heck of a toolbox that I bring with me to the table.
What you don’t see on the resume is how I moved through those ‘in between times’ — the time between old job and new job. It wasn’t always pretty. Job loss can be really challenging to navigate, and sometimes the motivation to take action just wasn’t there. I got better at this each time and figured a few things out. Like this:
The second thing to do once you’re out of a job is redefine what a successful day looks like.
Photo by Glenn Carstens-Peters on Unsplash
It’s likely that you were clear on what a successful day looked like when you were working. You had a list of things to do, people to connect with, projects to influence, emails to write and decisions to make. At least eight hours of your day was in service of your job. And now without that clarity, well, what?
Despite what others may say to you, looking for a new job is not your new full-time job.
Would you take a job that has you job searching for eight hours a day? Likely not. You are responsible for the design of this time, and when you intentionally create what fills it, you are practicing self-leadership. | https://medium.com/@bronwyn-smith/step-two-after-a-job-loss-292d8061d470 | ['Bronwyn Smith'] | 2019-10-19 22:23:58.262000+00:00 | ['Career Advice', 'Unemployment', 'Coaching', 'Leadership', 'Job Search'] |
The Difference Between Twitch Affiliates and Partners | Twitch Affiliate Invitation Email
Ever since all this hubbub about Twitch Affiliates started, we’ve been seeing a lot of questions surrounding the new revenue stream for broadcasters.
Can Twitch affiliates make money streaming? What’s the difference between a Twitch Partner and Twitch Affiliate? Or simply, what Is a Twitch Affiliate?
At it’s core, streaming is a good time, it allows you to do what you love from the comfort of your home, and interact with other gamers/streamers that have the same interests as you. But if you are trying to take your hobby to the next level and become a full time streamer, don’t be naïve. It requires dedication, commitment, and hard work in order to be successful.
Of all the streaming platforms out there, (Mixer, Hitbox, YouTube Gaming etc.) Twitch is undoubtedly the largest live broadcasting platform in the world for all types of streamers. However, with over one million monthly broadcasters, if you are a small-mid sized streamer, it has become difficult to takes steps towards the much sought after Twitch partnership. This is why Twitch has announced its new Affiliate program.
The program serves as an effort to recognize the dedication and hard work of small - medium sized streamers by giving them the incentive to continue following their passion- while opening up the opportunities to earn some serious money.
We’ve had some of our LVLUP Dojo members see huge success in the Affiliate Program, ItsSkitz has only been an affiliate for a couple weeks and already has over 200 subs (as of July 18th, 2017).
There have been a lot of questions and controversy surrounding the program’s release, so we caught up with Twitch Community Admin D1 to address some of those questions, discuss the program as a whole, and get the inside scoop on how you can join the program and start making some money on Twitch!
So let’s get into it…
The Twitch Affiliate Program in a nutshell
The Twitch Affiliate Program takes dedicated and qualified streamers and puts them one step closer toward making their passion a career. Affiliates may begin making money on Twitch as they build their audience and work toward the coveted status of Twitch Partner.
What makes a streamer qualified to participate in the Twitch Affiliate Program?
Any streamer having met the baseline criteria (outlined below) is eligible for the Twitch Affiliate Program:
· At least 500 total minutes broadcast in the last 30 days · An average of 3 simultaneous viewers or more over the last 30 days · At least 50 Followers · At least 7 broadcast days in the last 30 days
The main aim of the Twitch Affiliate Program is to expand the number of streamers on Twitch who use streaming as a means to generate revenue as well as give more people a chance to make their dreams of full-time streaming come true.
How will I know if I have been accepted into the program?
Once you have achieved and maintained the above broadcasting criteria, you will get an invitation via email from Twitch admin. Once you receive the email, there are only a few steps involved with activating your Affiliate account (which are outlined in the email). After you provide Twitch with the necessary information, it’s official, you can now begin conquering the world.
How do Affiliates make money on Twitch?
The Affiliate Program from Twitch allows eligible streamers to start earning income on Twitch while building their audience at the same time. It is an opportunity that aims to bridge the gap between the emerging streamer and Twitch Partners. Here’s how Affiliates can earn money streaming:
· Twitch Affiliates receive a subscribe button where viewers can pledge their allegiance and support to your channel.
· Affiliates can generate revenue by accumulating Bits.
· Affiliates will be able to track revenue from Bits on the Revenue tab of their Dashboard. They will receive 1 emoticon slot for their subscribers.
· Affiliates can earn revenue from game sales and in game items that originate from their channel page.
What are the differences between Twitch Partners and Affiliates?
While at their core, the Affiliates and Partners are essentially granted the same capabilities with their streaming, there are a few monetization options and badges of honor that Twitch withholds strictly for it’s partners. For example, Twitch Affiliates can receive bits. However only Twitch partners can have a verified chat badge.
The chart below covers what’s available to Partners vs. Affiliates in more detail.
Like this post? Be sure to share it with your fellow streamers and let them know the good word about the Twitch Affiliate Program.
Want to explore the different revenue streams that you can create through gaming?
If you are really serious about pursuing this opportunity, check out a LVLUP Dojo membership. We are the leading name in the industry for streamer growth. We invest in gamers to propel their careers and help turn a hobby into a career. The Dojo provides you the platform to grow towards becoming a Twitch Affiliate and start making money gaming. Along with growing and legitimizing eSports in mainstream media. | https://blog.lvlupdojo.com/the-difference-between-twitch-affiliates-and-partners-e29175dfa435 | ['Lvlup Dojo'] | 2017-07-24 23:08:01.057000+00:00 | ['Twitch', 'Streaming', 'Esport', 'Videogames'] |
HDFury 4K Arcana review: This magic box fixes a crucial shortcoming of pre-2019 TVs | HDFury 4K Arcana review: This magic box fixes a crucial shortcoming of pre-2019 TVs Carol Jan 27·5 min read
To some, the HDFury Arcana will seem like an overpriced gadget that doesn’t add anything to their home theater setup. For everyone who owns a pre-2019 smart TV and a relatively high-end soundbar capable of decoding object-based soundtracks (i.e., Dolby Atmos or DTS:X), it will be nothing short of a miracle.
Mentioned in this article Sonos Arc Read TechHive's review$799.00MSRP $799.00See iton Sonos The $199 4K Arcana fills a very specific need: As described by the company, the small box enables “full audio to any eARC sound system for up to Dolby Atmos, Dolby TrueHD, Dolby MAT Atmos, DTS:X, DTS-HD Master Audio and older formats.” In layman’s terms, it means a high-end soundbar such as the Sonos Arc will deliver full lossless audio to your soundbar even if your TV doesn’t support eARC.
This is not a rare problem for people who value high-quality audio. While most modern televisions support HDMI ARC (the Audio Return Channel that sends sound from your TV to your audio system over an HDMI cable), ARC doesn’t provide enough bandwidth to carry high-resolution lossless audio formats such as Dolby True HD and DTS-HD Master Audio. A new standard called eARC (Enhanced Audio Return Channel) delivers more than enough bandwidth, but only newer TVs—starting with high-end models sold in 2019—support eARC. Read this story for a deep dive into the two standards.
Michael Simon/IDG The HDFury Arcana will add some extra cable clutter to your setup.
If you’re like me, you’re going to find out the hard way that your home theater system isn’t all that it could be—after you’ve invested hundreds of dollars in a new soundbar. Few things are more frustrating than splurging on an expensive new device only to discover that another component in the chain prevents it from delivering all that it’s capable of. That’s exactly what happened to me: I excitedly set up my new Sonos Arc, put on a movie with a Dolby Atmos soundtrack, and heard only lower-resolution, Dolby Digital Plus. (This Sonos support article explains what’s needed for each audio format).
[ Further reading: Everything you need to know about Dolby Atmos and DTS:X ]Granted, it still sounded much better than the five-year-old Sonos Playbar I replaced, but for the price I paid, I wanted nothing less than full 3D spatial sound. And I wasn’t about to replace the 82-inch TV I bought last year just to get eARC support.
That’s when I found the HDFury Arcana, a box specifically designed to address my issue. It has one input and two outputs, one that goes to the Sonos Arc’s HDMI port, and one that goes to my TV’s ARC port. It’s not unlike an external GPU for a laptop—the Arcana upgrades your TV’s ARC port to an eARC one as if you had purchased a brand new set. It’ll even work with TVs or projectors that don’t have a dedicated ARC port, and it solved the occasional lip-sync issues I was having with regular ARC.
Michael Brown / IDG Instead of your TV, the Sonos Arc will plug into the Arcana.
But if you’re reading this, your situation is likely the same as mine: You bought a Sonos Arc to go along with your kinda-new big-screen 4K TV that can’t deliver full lossless Atmos sound to it. And for that alone, The HDFury is well worth its $199 price tag.
Plug and playSetting up the Arcana is incredibly simple, though it’ll obviously add to the cable clutter behind your TV. The box is small enough to hide, but you’ll need to provide AC power to the Arcana and add another HDMI cable to your collection.
Keep in mind that the Arcana doesn’t come with any HDMI cables, so you’ll need to bring your own; specifically, one labeled Premium High-Speed HDMI, which provides bandwidth of 18Gbps. The cable that comes with the Arc will do the trick, but if you have any issues with sound or picture quality, try swapping out one of your other cables before running other diagnostic fixes.
IDG When you see the Dolby Atmos logo in the Sonos app an angel gets its wings.
The box has a small OLED display with various menu options and a somewhat clunky navigation wheel, but you shouldn’t have to venture beyond the Audio menu (which will likely stay set to “eArc Only”). There’s also an option for firmware updates when you insert a USB stick. Mine arrived up to date, so I haven’t used it.
If you have only one device to connect, you’ll plug the HDMI cable directly into the Arcana and you’re done. Since that’s not realistic and you probably have more than one device in your setup, the Arcana supports HDMI switches between the box and your TV. I was able to integrate my existing Caavo Control Center and use the full functionality of its remote without an issue.
Michael Simon/IDG You’ll need to download forware onto a USB stick to update the Arcana.
You might also need to reconfigure your 4K UHD Blu-ray player to send the correct audio bitstream to your soundbar. My Sony UBP-X800 player defaulted to PCM 7.1 sound (which is still better than I got without the Arcana), but a quick trip to the settings fixed it. All I needed to do was turn BD Audio Mix off to tell the player the Arc would handle audio decoding.
Small hiccups don’t spoil the showI have experienced a few quirks with my setup that you might also encounter, but nothing was a deal-breaker. Each time I power on my system, a message from the Arcana appears on the bottom of the TV screen for a few seconds. (Update: HDFury points out that this readout can be turned off by selecting "Off" in the OSD settings menu.) Also, when I play or stop a show within some apps, the screen goes black for a second or so. I’ve tried various settings on the Arcana to fix it, but nothing has helped, so it could be due to my Caavo. At any rate, it’s a minor nuisance.
Michael Brown / IDG There are a few hiccups here and there, but the Arcana will unleash the full power of high-end soundbars such as the Sonos Arc.
I also lost the ability to control the Sonos Arc using my Caavo remote. My Samsung TV doesn’t recognize the Arc as a receiver like it did when the speaker was plugged directly into an HDMI port on the set, so the TV thinks I’m controlling its internal speakers (and displays “TV speaker” on its screen). All I needed to do was reconfigure the Caavo remote to control the Arc’s volume instead of the TV’s. That took seconds in the Sonos app.
I’ve been using the Arcana as part of my home theater setup for a couple of weeks now, and I haven’t had any issues with connectivity. Nor have I needed to adjust settings or fiddle with anything. When I power on my Caavo, the Arcana sends the proper signals, and when I play an Atmos movie or show, the Dolby Atmos logo appears and I get fully lossless sound from my Blu-ray player. It’s a beautiful thing to see. And hear.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details. | https://medium.com/@carol67897948/hdfury-4k-arcana-review-this-magic-box-fixes-a-crucial-shortcoming-of-pre-2019-tvs-c199518d303a | [] | 2021-01-27 19:10:57.082000+00:00 | ['Electronics', 'Consumer', 'Connected Home', 'Deals'] |
Allergy warning for Pfizer/BioNTech vaccine after UK health workers with allergy history suffer reaction | By Emma Reynolds, Sharon Braithwaite and Amy Cassidy, CNN
People with a “significant history of allergic reactions” should not be given the Pfizer/BioNTech coronavirus vaccine, UK health authorities said Wednesday, after two health care workers experienced symptoms after receiving a shot the day before.
https://note.com/qitsdsfwe/n/n5f8ccc5aa20d
https://kisdewa.tumblr.com/post/637050555060060160
https://traitsai.hatenablog.com/entry/2020/12/10/033251
https://www.peeranswer.com/question/5fd115ecb659091425963122
https://slexy.org/view/s21E1IHug3
https://paiza.io/projects/BHt2XgZnngzsNhpUb3RgGA
https://pastelink.net/2crtp
https://blog.goo.ne.jp/xosolo/e/a4047bda20abb6c3abb15c7947628d44
https://www.acvecc.org/fve/Fr-Om-v-Ma01.html
https://www.acvecc.org/fve/Fr-Om-v-Ma02.html
https://www.acvecc.org/fve/Fr-Om-v-Ma03.html
https://www.acvecc.org/fve/Fr-Om-v-Ma04.html
https://www.acvecc.org/fve/Fr-Om-v-Ma05.html
https://www.acvecc.org/fve/OBO-v-Aws-01.html
https://www.acvecc.org/fve/OBO-v-Aws-02.html
https://www.acvecc.org/fve/OBO-v-Aws-03.html
https://www.acvecc.org/fve/OBO-v-Aws-04.html
https://www.acvecc.org/fve/OBO-v-Aws-05.html
https://www.acvecc.org/fve/Mv-v-Ma21.html
https://www.acvecc.org/fve/Mv-v-Ma22.html
https://www.acvecc.org/fve/Mv-v-Ma23.html
https://www.acvecc.org/fve/Mv-v-Ma24.html
https://www.acvecc.org/fve/Mv-v-Ma25.html
https://www.acvecc.org/fve/Po-v-ly-1.html
https://www.acvecc.org/fve/Po-v-ly-2.html
https://www.acvecc.org/fve/Po-v-ly-3.html
https://www.acvecc.org/fve/Po-v-ly-4.html
https://www.acvecc.org/fve/Po-v-ly-5.html
https://www.acvecc.org/fve/real-v-mm-01.html
https://www.acvecc.org/fve/real-v-mm-02.html
https://www.acvecc.org/fve/real-v-mm-03.html
https://www.acvecc.org/fve/real-v-mm-04.html
https://www.acvecc.org/fve/real-v-mm-05.html
https://www.acvecc.org/fve/video-atl-v-sal-liv-1.html
https://www.acvecc.org/fve/video-atl-v-sal-liv-2.html
https://www.acvecc.org/fve/video-atl-v-sal-liv-3.html
https://www.acvecc.org/fve/video-atl-v-sal-liv-4.html
https://www.acvecc.org/fve/video-atl-v-sal-liv-5.html
https://simxperience.com/dir/v-mar-v-mn1.html
https://simxperience.com/dir/v-mar-v-mn2.html
https://simxperience.com/dir/v-mar-v-mn3.html
https://simxperience.com/dir/v-mar-v-mn4.html
https://simxperience.com/dir/v-mar-v-mn5.html
https://simxperience.com/dir/ena-v-Awas-01.html
https://simxperience.com/dir/ena-v-Awas-02.html
https://simxperience.com/dir/ena-v-Awas-03.html
https://simxperience.com/dir/ena-v-Awas-04.html
https://simxperience.com/dir/ena-v-Awas-05.html
https://simxperience.com/dir/mo9v-v-mn-c201.html
https://simxperience.com/dir/mo9v-v-mn-c202.html
https://simxperience.com/dir/mo9v-v-mn-c203.html
https://simxperience.com/dir/mo9v-v-mn-c204.html
https://simxperience.com/dir/mo9v-v-mn-c205.html
https://simxperience.com/dir/Pt-v-ol-ao01.html
https://simxperience.com/dir/Pt-v-ol-ao02.html
https://simxperience.com/dir/Pt-v-ol-ao03.html
https://simxperience.com/dir/Pt-v-ol-ao04.html
https://simxperience.com/dir/Pt-v-ol-ao05.html
https://simxperience.com/dir/esp-re-v-mx-01.html
https://simxperience.com/dir/esp-re-v-mx-02.html
https://simxperience.com/dir/esp-re-v-mx-03.html
https://simxperience.com/dir/esp-re-v-mx-04.html
https://simxperience.com/dir/esp-re-v-mx-05.html
https://simxperience.com/dir/video-atletico-v-g-1.html
https://simxperience.com/dir/video-atletico-v-g-2.html
https://simxperience.com/dir/video-atletico-v-g-3.html
https://simxperience.com/dir/video-atletico-v-g-4.html
https://simxperience.com/dir/video-atletico-v-g-5.html
https://www.acvecc.org/fub1/v-ideo-Bayern-dazn-01.html
https://www.acvecc.org/fub1/v-ideo-Bayern-dazn-02.html
https://www.acvecc.org/fub1/v-ideo-Bayern-dazn-03.html
https://www.acvecc.org/fub1/v-ideo-Bayern-dazn-04.html
https://www.acvecc.org/fub1/v-ideo-Bayern-dazn-05.html
https://www.acvecc.org/fub1/Mon-v-Mad-dfb-de-01.html
https://www.acvecc.org/fub1/Mon-v-Mad-dfb-de-02.html
https://www.acvecc.org/fub1/Mon-v-Mad-dfb-de-03.html
https://www.acvecc.org/fub1/Mon-v-Mad-dfb-de-04.html
https://www.acvecc.org/fub1/Mon-v-Mad-dfb-de-05.html
https://www.acvecc.org/fub1/v-ideo-Inter-sk8-it-01.html
https://www.acvecc.org/fub1/v-ideo-Inter-sk8-it-02.html
https://www.acvecc.org/fub1/v-ideo-Inter-sk8-it-03.html
https://www.acvecc.org/fub1/v-ideo-Inter-sk8-it-04.html
https://www.acvecc.org/fub1/v-ideo-Inter-sk8-it-05.html
https://www.acvecc.org/fub1/v-ideo-Atletico-rtve-01.html
https://www.acvecc.org/fub1/v-ideo-Atletico-rtve-02.html
https://www.acvecc.org/fub1/v-ideo-Atletico-rtve-03.html
https://www.acvecc.org/fub1/v-ideo-Atletico-rtve-04.html
https://www.acvecc.org/fub1/v-ideo-Atletico-rtve-05.html
https://www.acvecc.org/fub1/Mad-v-Bor-viv-tyc-01.html
https://www.acvecc.org/fub1/Mad-v-Bor-viv-tyc-02.html
https://www.acvecc.org/fub1/Mad-v-Bor-viv-tyc-03.html
https://www.acvecc.org/fub1/Mad-v-Bor-viv-tyc-04.html
https://www.acvecc.org/fub1/Mad-v-Bor-viv-tyc-05.html
https://www.acvecc.org/fub1/v-ideo-Salzburg-ofb-at-01.html
https://www.acvecc.org/fub1/v-ideo-Salzburg-ofb-at-02.html
https://www.acvecc.org/fub1/v-ideo-Salzburg-ofb-at-03.html
https://www.acvecc.org/fub1/v-ideo-Salzburg-ofb-at-04.html
https://www.acvecc.org/fub1/v-ideo-Salzburg-ofb-at-05.html
https://www.acvecc.org/fub1/v-ideo-Marseille-lequipe-fr-01.html
https://www.acvecc.org/fub1/v-ideo-Marseille-lequipe-fr-02.html
https://www.acvecc.org/fub1/v-ideo-Marseille-lequipe-fr-03.html
https://www.acvecc.org/fub1/v-ideo-Marseille-lequipe-fr-04.html
https://www.acvecc.org/fub1/v-ideo-Marseille-lequipe-fr-05.html
http://comfamiliarcamacol.com/pok1/v-ideo-Bayern-dazn-01.html
http://comfamiliarcamacol.com/pok1/v-ideo-Bayern-dazn-02.html
http://comfamiliarcamacol.com/pok1/v-ideo-Bayern-dazn-03.html
http://comfamiliarcamacol.com/pok1/v-ideo-Bayern-dazn-04.html
http://comfamiliarcamacol.com/pok1/v-ideo-Bayern-dazn-05.html
http://comfamiliarcamacol.com/pok1/Mon-v-Mad-dfb-de-01.html
http://comfamiliarcamacol.com/pok1/Mon-v-Mad-dfb-de-02.html
http://comfamiliarcamacol.com/pok1/Mon-v-Mad-dfb-de-03.html
http://comfamiliarcamacol.com/pok1/Mon-v-Mad-dfb-de-04.html
http://comfamiliarcamacol.com/pok1/Mon-v-Mad-dfb-de-05.html
http://comfamiliarcamacol.com/pok1/v-ideo-Inter-sk8-it-01.html
http://comfamiliarcamacol.com/pok1/v-ideo-Inter-sk8-it-02.html
http://comfamiliarcamacol.com/pok1/v-ideo-Inter-sk8-it-03.html
http://comfamiliarcamacol.com/pok1/v-ideo-Inter-sk8-it-04.html
http://comfamiliarcamacol.com/pok1/v-ideo-Inter-sk8-it-05.html
http://comfamiliarcamacol.com/pok1/v-ideo-Atletico-rtve-01.html
http://comfamiliarcamacol.com/pok1/v-ideo-Atletico-rtve-02.html
http://comfamiliarcamacol.com/pok1/v-ideo-Atletico-rtve-03.html
http://comfamiliarcamacol.com/pok1/v-ideo-Atletico-rtve-04.html
http://comfamiliarcamacol.com/pok1/v-ideo-Atletico-rtve-05.html
http://comfamiliarcamacol.com/pok1/Mad-v-Bor-viv-tyc-01.html
http://comfamiliarcamacol.com/pok1/Mad-v-Bor-viv-tyc-02.html
http://comfamiliarcamacol.com/pok1/Mad-v-Bor-viv-tyc-03.html
http://comfamiliarcamacol.com/pok1/Mad-v-Bor-viv-tyc-04.html
http://comfamiliarcamacol.com/pok1/Mad-v-Bor-viv-tyc-05.html
http://comfamiliarcamacol.com/pok1/v-ideo-Salzburg-ofb-at-01.html
http://comfamiliarcamacol.com/pok1/v-ideo-Salzburg-ofb-at-02.html
http://comfamiliarcamacol.com/pok1/v-ideo-Salzburg-ofb-at-03.html
http://comfamiliarcamacol.com/pok1/v-ideo-Salzburg-ofb-at-04.html
http://comfamiliarcamacol.com/pok1/v-ideo-Salzburg-ofb-at-05.html
http://comfamiliarcamacol.com/pok1/v-ideo-Marseille-lequipe-fr-01.html
http://comfamiliarcamacol.com/pok1/v-ideo-Marseille-lequipe-fr-02.html
http://comfamiliarcamacol.com/pok1/v-ideo-Marseille-lequipe-fr-03.html
http://comfamiliarcamacol.com/pok1/v-ideo-Marseille-lequipe-fr-04.html
http://comfamiliarcamacol.com/pok1/v-ideo-Marseille-lequipe-fr-05.html
The precautionary advice was given after the pair “responded adversely” following their shots on the first day of the mass vaccination rollout in the UK, National Health Service England said Wednesday.
The two staff members — who both carried an adrenaline auto injector and had a history of allergic reactions — developed symptoms of anaphylactoid reaction after receiving the vaccine on Tuesday. Thousands overall were vaccinated in the UK on Tuesday, NHS England told CNN on Wednesday.
“As is common with new vaccines the MHRA [Medicines and Healthcare products Regulatory Agency] have advised on a precautionary basis that people with a significant history of allergic reactions do not receive this vaccination after two people with a history of significant allergic reactions responded adversely yesterday,” said Stephen Powis, the national medical director for NHS England, in a statement. “Both are recovering well.”
The MHRA issued new advice to health care professionals stating that any person with a significant allergic reaction to a vaccine, medicine or food — such as previous history of anaphylactoid reaction, or those who have been advised to carry an adrenaline autoinjector — should not receive the Pfizer/BioNtech vaccine.
The advice also states that vaccines “should only be carried out in facilities where resuscitation measures are available.”
“We are fully investigating the two reports that have been reported to us as a matter of priority,” an MHRA spokesperson said.
“Once all the information has been reviewed we will communicate updated advice,” the spokesperson added.
They advised anyone with a history of a significant allergic reaction due to receive the Pfizer/BioNTech vaccine to speak to the health care professional administering the vaccine.
Pfizer said in a statement that it had been advised by the UK regulator of “two yellow card reports that may be associated with allergic reaction” due to administration of the vaccine.
“As a precautionary measure, the MHRA has issued temporary guidance to the NHS while it conducts an investigation in order to fully understand each case and its causes. Pfizer and BioNTech are supporting the MHRA in the investigation,” the statement said.
“In the pivotal phase 3 clinical trial, this vaccine was generally well tolerated with no serious safety concerns reported by the independent Data Monitoring Committee. The trial has enrolled over 44,000 participants to date, over 42,000 of whom have received a second vaccination.”
Documents released on Tuesday by the US Food and Drug Administration (FDA) said the Pfizer/BioNTech trial data indicated that there were potentially slightly more adverse responses thought to be allergic reactions among the vaccine group compared with the placebo group, at 0.63% compared with 0.51%.
Pfizer’s trial protocol shows that people with a history of severe allergic reaction (e.g., anaphylaxis) “to any component of the study intervention” were not able to take part.
Stephen Evans, professor of pharmacoepidemiology at the London School of Hygiene & Tropical Medicine, told the UK’s Science Media Centre that the increase was only “small” but said there was “a lot of uncertainty around that estimate.”
He said that “some people won’t know if they have hypersensitivity to some constituents of the vaccine.”
He backed the MHRA advice for people who carry an EpiPen to delay having a vaccination until the reason for the allergic reaction has been clarified. But he said the news did not mean the general public should be anxious.
Peter Openshaw, professor of experimental medicine at Imperial College London, said: “As with all food and medications, there is a very small chance of an allergic reaction to any vaccine.
“The fact that we know so soon about these two allergic reactions and that the regulator has acted on this to issue precautionary advice shows that this monitoring system is working well.” | https://medium.com/@dopefala/allergy-warning-for-pfizer-biontech-vaccine-after-uk-health-workers-with-allergy-history-suffer-5f32f0b1da31 | ['Dope Fala'] | 2020-12-09 18:38:28.418000+00:00 | ['Warning', 'Biotechnology', 'Pfizer', 'Allergy Treatment', 'Vaccines'] |
Spring Boot Microservices — Containerization | Spring Boot Microservices — Containerization
In this article, we will understand the advantages of containerizing Spring Boot Microservices. We will also build a sample implementation based on Docker and Spring Boot. This is the 7th part of our series — Spring Boot Microservices — Learning through examples
Photo by Sebastian Herrmann on Unsplash
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
This is very similar to the package we build for Spring Boot Microservices. Our Spring Boot applications are packaged as Jar files, which assume that the runtime environment has Java installed. Containers, on the other hand, are packaged in such a way that they can run directly at the operating-system level. This helps to isolate the bundled software from its environment and ensures that it works uniformly. We will understand this topic in more detail with the help of the following sections —
The background — This part will discuss the evolution of deployment patterns and technologies, resulting in the concept of Containers. I have already covered similar content in the article on Kubernetes. If you have already gone through this, you can skip this part.
How Containers Work? — This part will discuss Containers in more detail, including their core components and functions.
Sample Implementation — We will build a sample implementation based on Docker along with our Spring Boot Microservices. We will create and run containers for the services we have already built.
The background
Age of Physical Machines
In the early era, applications were deployed on physical machines. Deploying one application per machine was very costly in terms of resource utilization. On the other hand, if we tried to optimize this by deploying multiple applications on the same machine, it was not possible to define resource boundaries. A failure in one application could propagate to other systems.
To understand this, let's say our physical machine is hosting two applications — an E-commerce Application and an Inventory Management System. If the latter is eating up all the memory, more often than not our e-commerce system will stop working as well. Virtualization brought the much-needed reform!
Age of Virtual Machines
The technology allowed us to run multiple Virtual Machines (VMs) on a single physical machine, with clear separation of resources, platforms, libraries and security.
This facilitated better utilization of resources. Enterprises invested in shared pools of Virtual Machines, which provided more flexibility and scalability to their software systems.
If the holiday season requires our e-commerce system to run on two machines, we can get the additional (virtual) machine much faster. When the holiday season is over, we can return the additional Virtual Machine, releasing its resources back to the shared pool.
The technology worked great, but soon the enterprises realized this is relatively heavier. It required the whole operating system to be installed, on each of the virtual machines. Additionally, though the procurement time was greatly reduced, but the pace did lag behind in the age of Agile Development and Microservices.
Agile Development & Microservices
At the time we were moving from physical to virtual machines, software development was going through a transformation as well. The development model changed with the introduction of the Agile methodology. We started updating and deploying applications much faster.
It became difficult for monoliths to survive the Agile world. Fast-paced development set the foundation for Microservices. Our e-commerce system was no longer a monolith. It got divided into multiple microservices — Product Catalog, Inventory, Shopping Cart, Order Management and many others. With Microservices, the traditional deployment model did not work, and it posed new challenges.
The number of microservices, each with a different set of technologies and dependencies, needed different environments. The options adopted earlier included "one service one host" or "one service one VM". Again, this was costly, while deploying multiple services per host or VM created challenges similar to those we had with applications.
Applications were broken down into smaller services so that they could evolve independently. Each service's needs were unique in terms of technologies and resources, and each needed its own isolated space. Containers provided the lightweight virtualization to make that possible.
How Containers Work?
Lightweight Containers
Containerization provided the means to partition the underlying system (physical or virtual) into virtual containers. Containers are lightweight compared to virtual machines because they don't need the extra load of a hypervisor, but run directly within the host machine's kernel.
Containers share the operating system, but they can have their own filesystem, memory and CPU. This facilitates flexible and faster deployments, which in turn helps in achieving continuous development and deployment practices.
Containers solved the fundamental problem of resource sharing and isolation with a much more efficient mechanism.
Docker
There are many container technologies available in the market, but Docker is a clear winner. We will be using it for our sample implementation. Before jumping to the exercise, let's get familiar with the core components of this technology.
Containers are realized with the help of images, which are nothing but the build and run instructions for a software application. We need to bundle all the dependencies as part of the container image. In the case of Spring Boot Microservices, this will typically contain the Java environment and the executable Jar. You will soon see a running example as part of the Sample Implementation.
courtesy: docker.com
The Docker CLI (Command Line Interface) provides the options to create and run container images. But the real work is done by the Docker Engine, which runs as a long-running daemon process. Docker provides APIs which programs can use to talk to and instruct the Docker daemon.
Images typically reside in a central repository called a Docker Registry. Any user or group interested in running the container image can pull it from there.
Benefits
Containerization provides many benefits in the development of software systems —
Isolation and security allow us to run multiple containers at the same time on a given host. For instance, we can deploy one or more instances of Product Catalog Service and Inventory Service on the same machine.
The container becomes the unit for distributing the application. This means we can create a container image of the Product Catalog Service and give it to other teams for further validation and deployment steps.
Container images are standardized and portable. We can use the same image of the Product Catalog Service to deploy in the production environment. Be it a local data center or the remote cloud, it works consistently.
Now that we understand containers and their benefits, let's dive into the practical side of them.
Sample Implementation
With the sample implementation, we will try to understand how container technology works with our Spring Boot based Microservices. I have divided the exercise into easy-to-follow steps —
Basic Setup
In the sample implementation we will use Docker, which is the most popular container technology. You can visit https://www.docker.com/get-started to get the appropriate Docker setup for your machine. If you are on Linux you just need a binary, but for Mac or Windows, you need to install Docker Desktop.
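If you want to confirm the setup before moving on, the standard version checks are enough (exact versions will vary):
docker --version
java -version
mvn -v
git --version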
We also need Java, Git and Maven, as we will be containerizing our Spring Boot Microservices. We already created multiple services during our previous exercises in this series — Learning through Examples. Let's get the microservices code from GitHub by running the following command —
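(The embedded command did not survive here; the clone looks like the line below, where the repository URL is a placeholder. Substitute the actual repository used by this series.)
git clone https://github.com/<series-repository>/spring-boot.git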
This will get the code for all the services. We will be creating the container image for our product catalog service. For this, let's browse to the directory spring-boot/product-catalog . All the commands in subsequent sections have to be executed in the context of this directory.
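Concretely, that means changing into the directory and building the jar that the Dockerfile will expect (the mvn install step is the one referenced in the ARG explanation below):
cd spring-boot/product-catalog
mvn install    # produces target/product_catalog-0.0.1-SNAPSHOT.jar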
Creating Container Image
Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. Here is the sample Dockerfile —
FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/product_catalog-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
In this file, we are giving the following instructions:
The FROM command instructs Docker to use adoptopenjdk:11-jre-hotspot as the base image.
The ARG command declares JAR_FILE as a build variable. The default value specified is target/product_catalog-0.0.1-SNAPSHOT.jar . This refers to the executable jar which will be generated once we run the mvn install command.
The ADD command copies the jar file to the image filesystem with the new name app.jar .
The ENTRYPOINT command allows you to configure a container that will run as an executable. In our case, we are instructing it to execute the jar file with the help of the java command.
With all the build instructions written in the Dockerfile, we will create the container image with the help of the docker build command. This will generate the container image and tag it with the name spring-boot-examples/product-catalog
docker build -t spring-boot-examples/product-catalog .
You can check the available images with the docker images command. This will list all the Docker images registered locally. One of them should have the name above. Now that you have the container image ready for the Product Catalog Service, you can start the service with the docker run command
docker run -p 8080:8080 spring-boot-examples/product-catalog
The service will work the same way as it does through mvn spring-boot:run . We can access the create/update/delete and get APIs. Play around with the service to ensure its APIs are working correctly. As a sanity check, I can create a test-product-123 with the following POST request.
{
  "title": "test-product-123",
  "desc": "test product 123",
  "imagePath": "http://test-image-path",
  "unitPrice": 10.00
}
and use the get API to view the product details by accessing http://localhost:8080/product/test-product-123 . If both APIs work, it means you have successfully created the container image of our Spring Boot Microservice.
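If you prefer the command line, the same sanity check with curl looks like this (note that the POST path is my assumption; the article only shows the GET path):
curl -X POST http://localhost:8080/product -H "Content-Type: application/json" -d '{"title":"test-product-123","desc":"test product 123","imagePath":"http://test-image-path","unitPrice":10.00}'
curl http://localhost:8080/product/test-product-123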
We can also create the container image with the help of the Spring Boot build plugin, the Spotify Maven plugin, the Palantir Gradle plugin, or Jib from Google. You can check out the Spring guide on this here. The guide also provides suggestions to optimize the image through multi-stage builds.
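For instance, with the Spring Boot Maven plugin (available from Spring Boot 2.3 onwards) an image can be produced without writing any Dockerfile at all:
mvn spring-boot:build-image -Dspring-boot.build-image.imageName=spring-boot-examples/product-catalog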
We touched upon some basic Dockerfile commands to create the image. To make your images more robust and flexible, you should check the full list of available commands here.
Running the Container
We already did a sample run of our container image in the previous section. The ENTRYPOINT command helps in defining the executable instructions for the image. In this section, we will focus on running the container with some additional options.
Let's say we want to set the maximum heap size for the Product Catalog Service. We can update the Dockerfile by adding this option to the ENTRYPOINT command.
FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/product_catalog-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Xmx500m","-jar","/app.jar"]
This will start the service and ensure that the JVM's maximum memory allocation does not exceed 500 MB. If we want to keep this value flexible, we can reference an environment variable JAVA_OPTS with the sh command as below.
FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/product_catalog-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["sh","-c","java ${JAVA_OPTS} -jar /app.jar"]
Once we build the image, we can pass the value of this environment variable at the time of running the container.
docker run -p 8080:8080 -e "JAVA_OPTS=-Xmx500m" spring-boot-examples/product-catalog
You can define the environment variable in the Dockerfile with the help of the ENV instruction, but then the value gets fixed at build time. With the -e option, you can define the environment variable in the docker run command. We used it to pass arguments to the Java process. The approach is a bit different if we want to pass properties into the Spring context. To pass those values, we must provide the placeholders ${0} and ${@} as below in the Dockerfile
ENTRYPOINT ["sh","-c","java ${JAVA_OPTS} -jar /app.jar ${0} ${@}"]
Once the Dockerfile is updated, build the image again and run the container. We can now pass Spring properties, along with ${JAVA_OPTS}, as part of the run command.
docker run -p 9000:9000 -e "JAVA_OPTS=-Xmx256m" spring-boot-examples/product-catalog --server.port=9000
Docker supports the command in two forms — exec and shell. We already used the exec form in our example. Usage of the shell form looks like this -
ENTRYPOINT command param1 param2
Docker suggests using exec as the preferred form. This gives complete control over the process execution, unlike the shell form. For instance, we can stop the service by pressing CTRL-C with the exec form, which is not possible with the shell form.
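Side by side, the two forms for our image would look like this; the exec form keeps java as PID 1 so it receives signals directly, while the shell form wraps the command in /bin/sh -c:
# exec form (preferred): java runs as PID 1 and receives signals such as CTRL-C
ENTRYPOINT ["java","-jar","/app.jar"]
# shell form: the command runs under /bin/sh -c, which intercepts signals
ENTRYPOINT java -jar /app.jar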
Publishing Container Image
Container images are like any other system file and can be exchanged across users and groups. We learned to create the image in the previous section. To reuse images, we need to publish them to a central repository. Once an image is published, it can be pulled by any authorized user.
Docker Hub is a hosted repository service provided by Docker for finding and sharing container images. Typically, enterprises need a private repository to push and pull the container images for their services and applications. Many hosted providers, including Docker Hub, offer this feature. You can also create the repository in your own data center. Irrespective of where the repository is, let's see how we can publish the image so that it is available to other users and groups in your enterprise.
If we are publishing the image for the first time, we must save the image by passing its container ID. This can be found with the help of the docker ps command. For our Product Catalog Service, we can use the commit command to achieve this.
$ docker commit c16378f943fe product-catalog
Now, push the image to the registry. In this example the registry is on a host named ecommerce-doc-registry and listening on port 5000 . To do this, tag the image with the host name or IP address and the port of the registry
$ docker tag product-catalog ecommerce-doc-registry:5000/spring-boot-examples/product-catalog
$ docker push ecommerce-doc-registry:5000/spring-boot-examples/product-catalog
Once you get the confirmation, other people or groups, such as the testing team, can use the pull command to get the image and use it to start the service as discussed previously.
docker pull ecommerce-doc-registry:5000/spring-boot-examples/product-catalog
Running Multiple Instances
We successfully created and ran the container image of the Product Catalog Service. It's time to see some real benefits of containers. In this section, we will run multiple instances of the service. Let's start two instances of the service as below.
docker run --name instance-1 -p 8081:8080 spring-boot-examples/product-catalog
docker run --name instance-2 -p 8082:8080 spring-boot-examples/product-catalog
This docker run command is a bit different from the one we saw above. Here we are specifying names for each of the running containers — instance-1 & instance-2 . We are also using different host ports for each of these containers. So instance-1 is running at port 8081 on the machine, whereas instance-2 is running at 8082. If you do docker ps , you will see both instances listed.
You can access each instance separately. For instance, you can call http://localhost:8081/product/test-product-123 to get the product details for the product id — test-product-123 . Both instances are completely isolated; each uses its own runtime environment. You can also check the resource consumption with the help of the docker stats command.
Multiple instances of a service demand additional support like Service Discovery, Load Balancing and Container Orchestration. Docker Swarm and Kubernetes provide the required platforms to cater to these needs.
Similar to multiple instances, we can also run multiple services on the same machine. It does not matter what the technology platform of the other service is. For instance, you might develop the Inventory service on Java 14, or you might completely change the technology stack to Node.js. Containers provide isolation to each of these services so that all of them can coexist without impacting each other.
Next Steps
Docker is a very wide topic and touches upon multiple aspects of application deployment. The objective of this article is to give you a high-level view of containerization. This should be good enough from a developer perspective. If you are into software architecture, you should check out more specific Docker topics to develop a holistic approach towards the technology.
You can check Docker Compose, which is a tool for defining and running multi-container Docker applications. With Compose, you can create and start a set of services using a single command.
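As a small taste of it, a minimal compose file for the image built in this article might look like the sketch below (my sketch, not from the original series):
version: "3.8"
services:
  product-catalog:
    image: spring-boot-examples/product-catalog
    ports:
      - "8080:8080"
    environment:
      JAVA_OPTS: "-Xmx500m"
Running docker-compose up would then start the service with the same settings we passed manually earlier.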
You can check Docker Swarm, which provides many features to build Docker clusters along with orchestration capabilities. You can also check Kubernetes, which is one of the most popular container orchestration frameworks. I have already covered this topic in detail; it can be checked out here — Spring Boot Microservices — Kubernetes-ization. | https://medium.com/swlh/spring-boot-microservices-containerization-963d77348fa0 | ['Lal Verma'] | 2020-10-19 13:44:38.279000+00:00 | ['Virtualization', 'Software Engineering', 'Docker', 'Spring Boot', 'Containers'] |
Hear That? It’s the Sound of Millions of Disappointed Warren Supporters Grieving
American women absorb yet another crushing blow to their hopes and dreams for the future
I’m sure she’ll understand one day. Or maybe she’ll burn it all down. Image by ulkas, licensed from Canva Pro
I don’t know about you, but I’ve been walking around for months now in a state of low-grade, chronic anger with intermittent flare-ups of acute frustration and indignation.
To put it in perspective, I’m a sheltered, middle class, cis hetero, white woman. In spite of everything, my family and I are doing just fine. Still, I want this country to be better than it is — for everyone.
I dare not pretend to know the half of what people of color, LGBTQ folks, Muslims, Jews, and other marginalized groups are feeling right now. All I can think is that if I feel the way I do in the relatively secure position I’m in, there must be a great ragey stew a-simmerin’ all over this land and one of these days it will boil over.
I’m sad
Why the fuck can’t an amazingly intelligent, highly competent, qualified, personable woman catch a break in this shit-hole country? Or just have a chance?
Frustrating doesn’t even begin to describe the feeling one gets listening to NPR and reading article after article that barely even mentions Elizabeth Warren. It’s like she’s not even there.
Well, she’s gone now and god dammit, it’s devastating.
It feels like a death.
I thought it was devastating when Hillary lost even though she won. She was a flawed candidate, they told us. But her emails, they claimed.
Elizabeth Warren did one dumb little stunt with a DNA test — for which she sincerely apologized and owned the consequences of her actions. I admired her even more. She is not a flawed candidate and still, she had no chance.
Her debate performances should have won awards. Maybe they still will. She’s brilliant. She’s quick witted. She genuinely cares. She’s got a real, well-thought out plan for everything. Literally everything.
But it’s not enough.
It’s never enough.
And that’s damn sad.
How could we pass up an opportunity to have a leader like Elizabeth Warren in the highest office in the land?
I’m mourning the loss of what could have been.
And I’m not alone. My friends and I are sad.
Yes. For fuck’s sake, yes, I’ll vote for whatever old, crusty, yelling white guy isn’t Donald Trump. Thank god it won’t be Bloomberg — perhaps there are small mercies after all.
I won’t say all, but the large majority of my women friends wanted Warren. And we just found out we can’t have her.
We are crushed.
It feels like a death.
I’m not sure I even let myself truly imagine her getting there, because, well, I’ve been here a while now, and I know what this country is like. But, my god, what a beacon of light she is, was, would have been, if only we would have opened our eyes.
I’m exhausted
Even my husband, whose views are progressive and align very well with mine, keeps telling me there’s no talking to me right now.
Every time he tries to have a political discussion with me, I interrupt him and get angry and ranty. I cannot help it. Who else will hear me?
It’s exhausting trying to make myself understood even to the person who knows me and loves me best and gets it, for the most part.
I’m so tired of worrying about and fighting for equality, justice, healthcare, the environment, the world my children will inherit, the social safety net, everything that garbage human in the White House is doing his damndest to destroy out of pure spite every single day.
But it didn’t start with him. He’s just the rancid cherry on the top of this shit sundae the patriarchy has been serving up for decades.
I’m mad as hell
Remember that roiling growl in the core of my being from the beginning of this article?
I do not, for the life of me, know what to do about it.
The worst is that now I, and many millions like me, have to shove it down and work for a candidate I don’t care about.
I don’t care about Joe Biden. He’s fine. But he’s not what I want. And now I may have to summon all my strength to make sure he gets elected.
I don’t care about Bernie Sanders. He’s fine. I respect him, but I just don’t want him. And now I may have to force myself to work for him to be elected because the alternative is simply unthinkable.
I want to rage against our collective ignorance and resistance to real passion, knowledge, expertise, competence, humanity, especially in the form of a woman.
I’m gonna keep fighting, but…
Joe and Bernie, like I said, are fine. Whatever. I’ll pull myself together, just like those millions of others. I’ll write the post cards and knock the doors.
I’ll be truly glad and relieved if we manage to get Trump out of office, but right now, I’m just so deeply, viscerally, sad.
Thanks for reading.
When I’m in a better mood, I’m out celebrating the publication of my first children’s book, Opossum Opposites. Click here to find out more.
| https://medium.com/the-partnered-pen/hear-that-its-the-sound-of-millions-of-disappointed-warren-supporters-grieving-a799f2771fcd | ['Gina Gallois'] | 2020-03-06 04:35:56.456000+00:00 | ['Equality', 'Women', 'Politics', 'Grief', 'Feminism'] |
Analog Soul — Live From The Chronograph
Stretch & Bobbito — courtesy of Dan Lish
All the poets & the part-time singers always hang inside
Along with the microphone fiends
& spoken wordsmiths
Who usually congregate on the corner of the Neworld Cipher
But on this Saturday night
Are transmitting live
From the Chronograph, Uptown, where overground meets
Underground. & the deejays play till dawn,
Airing their motto:
Time Marches On –
To a sanctuary of bassheads & beatfreaks who interface
With the bass of soul vibrations
Happening everywhere,
Even down here on the ground where wonders & signs
Speak like a child. where consciousness
Is its own revival
& “the only preacher is a poet,” conversing with the collective
Conscience of the universe with words
Born to speak rivers
As Sonia would say…
Remembering what we have learned from history
& all that the world has since forgot
So as not to repeat it, but
Instead be it as we evolve into an everlasting song
Colored by the indigenous magic
Of our own thoughts
& reflections. never forgetting that our greatest
Weapons will always be our words
But actions always speak louder.
2020 MDSHall | https://medium.com/scuzzbucket/analog-soul-live-from-the-chronograph-938a9fed3762 | [] | 2020-12-12 16:03:00.256000+00:00 | ['Turntables', 'Words', 'Music', 'Poetry', 'Chronograph'] |
Is Yuval Noah Harari a cyber Cassandra? | Hacking collective The L0pht at the first Congressional cybersecurity hearing in 1998
The renowned author Yuval Noah Harari is worried that the next global pandemic will be a digitally inspired catastrophe. Is he right to be concerned about ‘cybergeddon’, or is he just the latest cyber doomsayer?
Introduction
According to author Yuval Noah Harari, science and technology have, for the first time in history, enabled mankind to effectively manage a global pandemic. Writing in the Financial Times in February 2021, Harari, who is known for best-selling works such as Sapiens and Homo Deus, makes the case for technology during Covid-19. He cites biotechnology, which has enabled us to sequence the virus genome and develop vaccines to defeat it. Digitisation and automation have enabled trade and agriculture to continue with direct input from very few human beings. Information technology has enabled us to monitor the spread of the virus and to isolate outbreaks. Financial markets have continued to function, and many of us have been able to work remotely. Humanity has thus avoided economic meltdown and the slowdowns that will follow are likely to be recessions rather than depressions. Such tools were not available during the three other pandemics to have occurred since 1918. Previous generations had no choice other than to go to their offices and factories.
Whilst the article is decidedly optimistic, Harari also points out that we are now perilously dependent upon the Internet and the digital infrastructures built upon it. Known for the disturbing accuracy of his predictions, Harari suggests that the prime candidate for the next global catastrophe is an attack on the very technology that has enabled us to manage Covid-19. In making this apocalyptic assertion Harari is entering a crowded market. The pantheon of cyber doomsayers includes hacking collectives, Government officials and tech luminaries, most of whom have been ignored. The dangers were first pointed out to Congress in 1998 by hackers Brian Oblivion, Kingpin, Mudge, Space Rogue, Stefan von Neumann, Tan and Weld Pond. Collectively known as L0pht Heavy Industries, the group famously testified that they would be able to disable the Internet, a network of networks designed to withstand a nuclear war, within thirty minutes. Those predicting cybergeddon — that is to say digitally created chaos, destruction and societal breakdown — are, happily, still waiting for their told-you-so moment. Leon Panetta’s cyber Pearl Harbour has failed to materialise, leading many to conclude that our fears are overblown.
Twenty years on from L0pht
Nearly twenty years after their testimony to Congress, some of the original L0pht founders re-grouped for a seminar hosted by the Congressional Internet Caucus — A Disaster Foretold and Ignored: Revisiting the First-Ever Congressional Cybersecurity Hearing — by which time most of them had either worked at DARPA or Google, or were holding C-level positions at corporations such as IBM, Veracode and Stripe. They concluded that while many of the same issues and risks that existed twenty years ago persisted, progress had been made.
Cybersecurity is now mainstream, and hackers now routinely work with vendors, which was unheard of in 1998 (hence the pseudonyms, which were a means to avoid lawsuits from vendors whose vulnerabilities were being exposed by L0pht). Less positive developments include the proliferation of the nation-state threat, which they cited as the predominant threat facing many organisations. Another issue is the diversification of threat actors and their access to nation state cyber weapons. As if that isn’t enough to worry about, the exponential increase in the attack surface thanks to the advent of the Internet of Things, or IoT, is creating new opportunities for attackers to take control of hardware devices, allowing them to gain access to networks, to hijack them for other purposes, or to simply destroy them.
The L0pht testimony demands that we take Harari and others labelled as Cyber Cassandras seriously, and forces us to ask fundamental questions about the likelihood, impact and mitigations for such an eventuality: is cybergeddon feasible, what would be its consequences, how likely is it, and what can be done to either mitigate or prevent its occurrence?
The Internet
The obvious place to start is the Internet. A sustained attack against the Internet would, if successful and widespread, be catastrophic. Many smart energy grids would fail, as would electronic payments and other financial systems. Cloud storage would be unavailable and remote working would no longer be possible. Social media and mobile apps would be offline, as would most TV. Markets, trade and transportation would cease. Hospitals and other essential services would be unable to function. Without the Internet modern life would grind to a halt. As Harari points out, this is because we would struggle to fall back onto analogue infrastructures, many of which have either been removed completely or, where they still exist, lack the capacity to provide continuity. Organisations entirely dependent upon the Internet for their operation would be forced to close while attempting to make the reverse transition from digital to analogue. If the Internet was likely to be unavailable for any significant length of time this trend would be repeated across the economy. The resultant societal turmoil would potentially be overwhelming.
Thankfully, disabling the Internet would not be so easy today. Whilst the choke points in its architecture were clearly vulnerable to attack in 1998, the Internet has grown up being attacked. Today’s Internet is highly resilient, as Covid-19 has demonstrated. If an actor was intent upon causing cybergeddon there would be easier things to attack and, for as long as the world shares a common Internet, any attack against it would be an attack against us all, including the attackers. An attack against the Internet would therefore be the ultimate act of nihilism.
Cyber warfare
If, therefore, relative to other cybergeddon scenarios, there is less likelihood of the Internet being disabled, a more plausible scenario might be a sustained, large-scale attack against the infrastructures that are built upon it. A shortlist of pertinent historical examples includes: Russian attacks against the energy grid in Ukraine in 2015 and 2016; the unattributed destruction of a German blast furnace in 2016, reported by German media as belonging to a ThyssenKrupp facility in Duisburg; the Iranian-attributed Shamoon attacks of 2012, resulting in damage to Qatar’s Rasgas and the destruction of Saudi Aramco’s corporate network, comprising 33,000 machines; the Russian-attributed NotPetya attacks which crippled Danish shipping giant Maersk in 2017; the Russian-attributed Sunburst/C2 attack against SolarWinds customers in 2020; and the Chinese-attributed attack against Microsoft Exchange customers in 2021.
This list is far from complete. In economic terms these attacks represent trillions of dollars’ worth of damage, largely hidden from view. In media terms they are little more than a string of increasingly unnewsworthy headlines. The consequences of these attacks, were they to arrive simultaneously, are potentially imponderable. Thanks to the network phenomenon by which small inputs can create large outputs, such attacks might not even need to occur simultaneously in order to have disastrous consequences. In 2015 Lloyds of London and Cambridge University published a study to examine the insurance implications of a cyber-attack on the US power grid. In this case the scenario involved shutting down just 50 strategic generators — fewer than were shut down in Ukraine in December of that same year — which would cost the US economy an estimated $243bn, rising to $1trn in the most extreme version of the scenario. This list of attacks is a breadcrumb trail that offers us hints of the calamity that could befall us.
That such attacks have already taken place demonstrates their feasibility. That such attacks have not occurred concurrently as a coordinated large-scale campaign is, as Jason Healey suggests, evidence that there is a de facto norm, albeit an unsatisfactory one, emerging between the great cyber powers: a ‘cyber peace’ based upon mutual vulnerability. Nation states are aware that anything they do in cyberspace can be reciprocated, potentially with devastating consequences for their own digital infrastructures. This perhaps explains why, to date, most cyber powers have restricted their sustained or persistent attacks to information gathering, information operations, espionage and theft. Oxford University Professor Lucas Kello characterises these mid-spectrum activities that sit between traditional definitions of war and peace as Unpeace.
As Healey points out, just because destructive, state-on-state cyber warfare hasn’t happened yet, doesn’t mean that it can’t or won’t happen. It could, and it might. Its impact would potentially be as catastrophic as that of an attack against the Internet. Russia’s claim in 2019 to have tested its ability to disconnect from the Internet, suggests that the Kremlin is taking the threat seriously. If we assume for the time being that cyber deterrence holds, and that as happened during the Cold War with nuclear weapons, an uneasy albeit imperfect peace prevails, does that mean that cybergeddon is off the cards? Possibly not.
Cyber accidents
There is a further scenario that requires consideration. The diversification of actors and the proliferation of cyber weapons cited in 2018 by the L0pht founders, combine to form the most likely scenario, which is that of an accident.
The potential for unintended consequences in cyberspace has been apparent since the Morris worm of 1988, and it was demonstrated again following the 2010 Stuxnet attack against the Iranian nuclear facilities at Natanz. Despite the fact that this was an attack against an air gapped or non-Internet connected network, the code subsequently spread globally, after an Iranian engineer is reported to have connected an infected machine to the Internet. Stuxnet has subsequently caused damage in more than 115 countries. The fact that it has not caused strategic damage to the world economy is probably due to the fact that the code was specifically designed to attack the Iranian nuclear infrastructure. Nonetheless, the warning from Stuxnet is clear: cyber weapons, once deployed, can behave unexpectedly and are, effectively, in the public domain. The theft of cyber weapons also represents a major risk, as witnessed in 2016 by the Shadow Brokers’ apparent theft and sale of US Government zero-day exploits.
NotPetya and WannaCry are examples of the invocation of the law of unintended cyber consequences. WannaCry, which is estimated to have done $4bn worth of damage globally, including £100m of damage to the UK National Health Service, is based upon a stolen NSA cyber weapon, re-purposed by the North Korean hackers and deployed for financial motives. Having probably been sold to the North Koreans by brokers affiliated to the Russian Government, WannaCry significantly impacted Russia including, ironically, the Russian Ministry of the Interior, whose remit includes protection from cyber-crimes. NotPetya, the worm widely attributed to Russian attacks against Ukraine in 2017, and which nearly destroyed Maersk, also did significant damage to Russian energy giant Rosneft. A world in which cyber weapons proliferate amongst actors with either little understanding of, or little care for the potential consequences of their use, runs the risk of catastrophic damage to its digital infrastructures.
Gazing into the cyber crystal ball
If this hierarchy of scenarios is accurate, an attack against the Internet is, for the time being, the least likely to materialise. That could potentially change with bifurcation of the Internet, something that Eric Schmidt has estimated will take place within 10–15 years, and which former NCSC CEO Ciaran Martin has also deemed as probable, albeit not inevitable. At this point, the world would effectively have two competing Internets: one led by the US and the West, and another led by China and non-Western countries. China’s Belt and Road Initiative has in some respects already begun this process, with many of the 60 or so countries involved, trading certain freedoms in return for infrastructure.
The reliability of norms governing the behaviours of nation states in cyberspace is open to question. To date, Thomas Rid’s assertion that Cyber War Will Not Take Place appears to hold true, albeit within narrow and arguably outdated definitions of what constitutes violence and war. Whilst the most capable cyber nations have been willing to engage in irregular cyber conflicts, they have steered clear of conducting full-scale strategic cyber warfare. Where cyber conflicts have occurred in Estonia, Georgia and the Ukraine, attacks have followed geopolitical events or as an element of mainstream military activities.
How long this ‘unpeaceful’ cyber peace will hold is unknowable. Cold War deterrence worked because nuclear weapons possessed certain characteristics that made deterrence theory possible, notably their easily understood effects and our ability to observe their infrastructures. Cyberspace is less accommodating. Capabilities, intentions, attacks and attribution are much less discernible. Outcomes are far from predictable.
Covid-19 has laid bare both the fragility and the resilience of economies and societies. Yuval Noah Harari is right to warn of the opportunity for a digitally inspired, human catastrophe. For as long as the Internet remains resilient and our nascent strategic norms hold, it seems that an accident, brought about by human error or ignorance, poses the greatest threat to the technologies we rely upon for our daily lives.
Avoiding cybergeddon
The starting gun on the race between the Internets of the future has already been fired. Ciaran Martin’s view is that the survival of an Internet run along open ‘Californian’ lines will require alliances across the Five Eyes and their partners to take on a commercial dimension spanning software and hardware manufacturers, difficult though this will undoubtedly be.
As regards the protection of our wider critical national infrastructure, one of the chestnuts perennially offered by the cyber cognoscenti is that there are no silver bullets on offer to solve these issues. Whilst businesses seek simple and quick solutions to problems, the problem of cyber insecurity is too big and too complex to lend itself to simple and quick solutions. Former Facebook CISO and industry veteran Alex Stamos believes that the problem will be solved by software, but not for several decades. Despite this, we have a cybersecurity industry that appears to be intent on attempting to provide silver bullets in the form of eye-catching products and services, many of which, in the opinion of Stripe’s Peter ‘Mudge’ Zatko, actually do more harm than good. In his view, the organisations with the best cybersecurity are those that eschew shiny products in favour of good security design implemented from the ground up. This is also problematic — there are only so many former L0pht members (or folks with similar knowledge and experience) to go around, and they tend to work for the big companies like Stripe.
Whilst Governments typically operate relatively little critical national infrastructure, and therefore own just a small part of the cybersecurity problem, Government intervention currently offers the best means of preventing harm at scale. More action will be required to build resilience by baking cybersecurity into regulatory models, as has been done in the UK by the Bank of England. Energy and telecommunications are likely candidates to follow suit. Banning payments to criminals by companies falling victim to ransomware, 58% of whom in the UK admit to paying ransoms, is another possible step.
Governments are traditionally loath to interfere in markets, but there are many who take the view that the cybersecurity industry is something of a market for lemons. Faced with market failure, it would make sense for Governments to tackle the low-hanging fruit, as the UK NCSC has done, by offering to ‘protect from the centre’ with a range of free-to-use protective services, affording a degree of protection to those typically unable to protect themselves. The Active Cyber Defence (ACD) programme’s pithy aim is to ‘Protect the majority of the people in the UK from the majority of the harm caused by the majority of the cyber-attacks the majority of the time.’ The programme is now in its third year and its initiatives are being emulated internationally.
Until the cybersecurity industry works out how to demonstrate greater efficacy, and/or until industry works out how to do cybersecurity, the world is likely to require more initiatives like ACD. This will attract a different set of debates and challenges related to transparency, censorship and privacy.
The focus on cybergeddon potentially misses another, potentially bigger issue, namely the extent to which non-destructive cyber-attacks that don’t always grab the headlines, are set to challenge our existing notions of power and international order. It is in this regard, in technology’s potentially transformative effects upon geopolitics, that we are in the foothills of our understanding. In his 2014 book, World Order, Henry Kissinger put it thus:
Nuclear weapons… catastrophic as their implications were… could still be analysed in terms of separable cycles of war and peace… cyberspace challenges all historical experience. | https://medium.com/@davidjcarroll/is-yuval-noah-harari-a-cyber-cassandra-a2a77a0b9919 | ['David Carroll'] | 2021-03-18 14:47:09.876000+00:00 | ['Society', 'Cybersecurity', 'Technology', 'Geopolitics', 'Risk'] |
entitled. | There was a humorous viral video of a male who stopped his bike ride to attempt to tear down a pride flag on someone else's property. The homeowners took video of the male's attempts and found it amusing that he took time out of his day to try to destroy something that identified them as members of the LGBTQIA community. The video closed with the owners showing the fixtures that kept the flag in place. The male rode off in a rage, his determination to destroy something he found offensive denied the bully justice he sought in this simple act of property damage.
What is mildly shocking is a case of a member of a left-leaning political party being charged with a hate crime for destroying signs supporting an extreme right-leaning political stance on the supporters' lawn. This case opens a level of discourse similar to the justifications of the attempted coup at the nation's Capitol on January 6th, which was met with mixed criticism wavering from open thanks to outrage depending on the individual's political position. As a country recovering from a deep dive into our systemic racism, one has to come to grips with one's righteous rage against the open racists that plague our nation. There has been a level of coded freedom afforded to the communities that want to reaffirm a state of racial class superiority that has been kept in a shallow grave since the late 60s.
The previous administration allowed America's systemic racism to shuffle its way to the surface in a way its beneficiaries hoped would return them to their racial heyday: openly killing members of another race and having a family outing around the body. At least, that is what the dead-eyed stare of the officer murdering George Floyd mirrored: pictures of white community members taking photos of themselves with a hanged Black man in the background. Photos like these have not circulated in public spaces in a few decades, but members of the white race in Blackface have been met with controversy that this essay hopes to examine today.
Looking at the resurgence of open hostility against people of color could make white people examine themselves, but it only makes them strive for the comforts of a previous period: financial stability and open racial superiority that permitted them to have people of color openly humble themselves in fear for their lives or well-being. It should bother them that this is what they seem to be missing: an open reminder that they should be feared for their white skin and that people of color should be ashamed for not being them. And yet the response to these tests of white privilege (open hostility and unbridled rage against people that fear white skin for being white, in concert with loathing individuals of color for reminding them that their skin is the thing being feared and not the individual) makes the act of murder on their end empty.
Looking at these photos of white men congratulating themselves on banding together as a group to kill one person of color, only to go back home to partners who really feared them the way the Black person already did, why would they bother taking out their bully rage on anyone when they could not stomach the aftermath among their own people? How many of them keep to circles of individuals that participated in the actions, to ensure their actions were justified and honorable? It should come as no surprise that these people rarely leave their communities, ensuring they are comforted by like-minded individuals.
Alas, this social stance is being challenged by every generation. The 'good old days' were based on a level of financial security that no longer exists for the middle class in any capacity. That social status and privilege were blinders for these entitled people while the sources of their wealth were whittled away. As COVID forces these people to huddle in their own communities and ensure no one comes to visit, in a Sodom and Gomorrah homage, their bully-based fury is festering.
Why take the time to observe this population? For fear of using it as an example of what the oppressed will fall into once the tables are turned. The woman facing hate crime charges is a supporter of my personal political agenda; however, I would not condone anyone destroying another's property because it was offensive to me. Nor would I take the time to educate anyone who lived on that property. I would deny them the opportunity to expand beyond a narrow point of view and hope their personal ideals choked them on their unsupported obscurities. That is what Facebook allows with the self-curation of one's feed: suggesting more things that relate to your previous queries. That is what the AIs Google has built showcase: extreme bubbles of unlanced hatred against members of society one may never be exposed to in certain states. That is something Black Americans struggle with as they attempt to reclaim a level of community and culture that is not a mere replacement of a white face with a Black one.
It is an open fear that the organizations supporting the left-leaning political agenda will turn into the enemy they oppose should they obtain the political clout needed to open doors to all of America's peoples. We should not mirror our bullies. And yet, is that not what happened during Woodstock and the Women's March? | https://medium.com/@data-dumping/entitled-c5e958a1d5f2 | ['Data Dumping'] | 2021-08-10 19:13:17.975000+00:00 | ['Minorities', 'Racism', 'People Of Color', 'Bullying'] |
Is NIO a buy stock? Bullish analysis | In my latest post on the $NIO (Nio Inc.) Q3 earnings summary, there were a lot of comments and arguments about which company is best, which will grow more, etc., almost like there is a war between the two companies (and there is not), which leads me to make this in-depth analysis of NIO.
🔋 𝐁𝐚𝐚𝐒 (𝐁𝐚𝐭𝐭𝐞𝐫𝐲 𝐚𝐬 𝐚 𝐒𝐞𝐫𝐯𝐢𝐜𝐞)
NIO pioneered the BaaS model, which stands for Battery as a Service.
Basically, this technology allows customers to drive to an NIO swap station and, in 3 minutes or less, swap their battery for a fully charged one.
Earlier in October, NIO announced that the cumulative number of battery swaps through its EV battery swap service in China had reached 1 million. Since the beginning of the year, NIO also expanded its battery swap network from 131 stations to 155. As NIO is the only company among the new vehicle producers to adopt battery swapping technology, many customers opt for NIO rather than its competitors.
There is more to it. NIO has just introduced a new 100 kWh battery, and for existing NIO customers this means they can upgrade, through the swap station, to a larger battery at an affordable price. Let's say you need extra range for a month or two: you can use the 100 kWh battery as long as necessary, and when you are done, the original battery can be re-installed. 𝗡𝗼 𝗼𝘁𝗵𝗲𝗿 𝗘𝗩 𝗺𝗮𝗻𝘂𝗳𝗮𝗰𝘁𝘂𝗿𝗲𝗿 𝗼𝗳𝗳𝗲𝗿𝘀 𝘀𝘂𝗰𝗵 𝗳𝗹𝗲𝘅𝗶𝗯𝗶𝗹𝗶𝘁𝘆.
These swap stations are a real differentiating factor in China because, due to overpopulation and the lack of land to build new houses, the majority of people in China live in large apartment buildings where there is no infrastructure to support chargers for charging electric cars overnight.
🧐 𝗡𝗜𝗢’𝘀 𝗹𝗼𝗻𝗴-𝘁𝗲𝗿𝗺 𝗮𝗱𝘃𝗮𝗻𝘁𝗮𝗴𝗲 𝗶𝗻 𝗖𝗵𝗶𝗻𝗮.
As you might already know, the Chinese government is well known for censoring the content that is allowed to be viewed in China, by blocking every social media platform outside of China. Not only that, China has a culture where the government knows almost everything about its citizens through technology and data. It’s clear that they don’t want American companies to have much data on the Chinese population.
𝐍𝐨𝐰, 𝐢𝐟 𝐰𝐞 𝐭𝐡𝐢𝐧𝐤 𝐚𝐛𝐨𝐮𝐭 𝐭𝐡𝐢𝐬, 𝐓𝐞𝐬𝐥𝐚 𝐢𝐬 𝐚𝐥𝐬𝐨 𝐚 𝐝𝐚𝐭𝐚 𝐜𝐨𝐦𝐩𝐚𝐧𝐲, where it collects data through its self-driving software, and I don't really believe that China will allow that to happen in its country.
Although $TSLA (Tesla Motors, Inc.) has exported cars to China for years, as NIO and other local companies start to get traction in China, the Chinese government will most likely impose more restrictions on Tesla, giving more power to local companies (NIO and Xpeng, for example).
✌️ 𝗡𝗜𝗢 𝗪𝗲𝗹𝗰𝗼𝗺𝗲𝘀 𝗧𝗲𝘀𝗹𝗮
NIO and Tesla have no war going on; it's quite the opposite. They welcome Tesla.
In 2018, as the Chinese government announced its plans to allow foreign electric carmakers, Lihong Qin, co-founder and president of NIO, said: "We welcome big players and nice companies like Tesla."
Even recently, in their last earnings call, they were asked their opinion on Tesla introducing the Model Y in China to compete with their SUVs, and William Li, Founder and CEO of NIO, simply answered: "We believe that this is actually good for the users because if we have more options for the users, then this can help us to accelerate the popularization of the EV in the market."
This is very smart from NIO: since the EV market is growing, if Tesla grows, they grow; it's as simple as that. We should not forget that over the following years there will be an ongoing transition from ICEs to EVs, and NIO will take a percentage of that market share.
🚗 𝗡𝗜𝗢’𝘀 𝗥𝗮𝗻𝗴𝗲 𝗼𝗳 𝗖𝗮𝗿𝘀
NIO's range of products is limited to SUVs for now: the EC6, ES8 and ES6 models. This is about to change, as they are planning to introduce 2 new sedan models by 2021 in order to compete with the Tesla Model 3.
William Li, Founder and CEO of NIO, stated in the last earnings call: "With the launch of the next two new products (two sedan vehicles), we believe we can complete our product portfolio." This will be critical for NIO's market development next year, as it will probably attract thousands of new customers.
🤓 𝗠𝘆 𝘁𝗵𝗼𝘂𝗴𝗵𝘁𝘀 𝗼𝗻 𝗯𝗼𝘁𝗵
I've no doubt that Tesla will be the biggest brand in the EV market in the next few years. But we can now see that NIO is doing some incredible stuff, solving its customers' pain, and we should not forget that the EV market is still growing, so NIO is very likely to grow as well.
This might be a lifetime opportunity to invest in two such innovative and disruptive companies in the car industry, so I don't want to waste it and will take both trains. Not to mention that by investing in these two, and/or any other EV maker, we are diversifying our portfolio and reducing our investment risk.
𝐀𝐟𝐭𝐞𝐫 𝐫𝐞𝐚𝐝𝐢𝐧𝐠 𝐭𝐡𝐢𝐬 𝐚𝐫𝐭𝐢𝐜𝐥𝐞, 𝐰𝐡𝐚𝐭 𝐚𝐫𝐞 𝐲𝐨𝐮𝐫 𝐨𝐩𝐢𝐧𝐢𝐨𝐧𝐬?
Let me know your thoughts in the comments.
𝐃𝐢𝐬𝐜𝐥𝐨𝐬𝐮𝐫𝐞:
I am completely bullish on Tesla. I'm fully aligned with their vision and happy with the progress they have made so far. If they manage to maintain this pace of growth and innovation, I believe that Tesla will remain the most valuable EV brand in the world in the next 10–20 years. | https://medium.com/@pgsinvest/is-nio-a-buy-stock-bullish-analysis-3ea0affe824d | ['Paulo Sá'] | 2020-12-07 10:58:30.976000+00:00 | ['Tesla', 'Electric Vehicles', 'China', 'Nio'] |
Interactive Data Visualization became much easier with the help of Plotly-express. | Difference between Plotly and Plotly-express (in terms of plotting).
Plotly
Please Note: Plotly has been updated recently. Plotly, as well as any sources about Plotly, updates frequently (as these are newer libraries compared to other libraries in Python).
Through Plotly 3, we had two modes of plotting with Plotly (online and offline).
Plotly online
When plotting online, the plot and data will be saved to your Plotly cloud account. There are two methods to plot online. plotly.plot() — used to return the unique URL and optionally open the URL. plotly.iplot() — used when working in a Jupyter notebook, to display the plot within the notebook. Both of these methods create a unique URL for the plot and save it in your Plotly account, and an internet connection is required to use Plotly online.
Plotly offline
Plotly offline allows you to create plots offline and save them locally (which doesn't require any internet connection). There are two methods to plot offline. plotly.offline.plot() — used to create a standalone HTML file that is saved locally and opened inside your web browser. plotly.offline.iplot() — used when working offline in a Jupyter notebook, to display the plot in the notebook. When we intend to use plotly.offline.iplot(), we need to run an additional step, i.e., plotly.offline.init_notebook_mode(), at the start of each session.
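As a quick sketch of the Plotly 3-era offline pattern described above (fig stands for any figure object you have already built):
import plotly.offline
plotly.offline.init_notebook_mode()  # run once at the start of each notebook session
plotly.offline.iplot(fig)  # displays the plot inside the notebook
plotly.offline.plot(fig, filename="plot.html")  # saves a standalone HTML file and opens it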
From Plotly 4 (the updated and most recent version of Plotly):
Fig-1 : plotly3_vs_4
Plotly 4 made life much easier, as it is completely offline (so there is NO Plotly online from Plotly 4 onwards).
Whoever loves to work with plotly.offline in a Jupyter notebook can now avoid the connection statement in their code (the one that connects Plotly in offline mode to the notebook); they can directly import plotly rather than importing plotly.offline.
plotly.graph_objs — This has several functions, which are useful in generating graph objects. graph_objs — It is a class which contains several structures that are consistent across visualizations made in Python, regardless of type.
From Plotly 3 to Plotly 4, the plotly.graph_objs package has been aliased as plotly.graph_objects "because the latter is much easier to communicate verbally" — according to the official documentation.
Plotly Express
Fig-2: Importing plotly express (careful with versions)
Plotly Express was installed separately using the plotly_express package, but it is now part of Plotly. Plotly should be updated to Plotly 4 before using it, or you will encounter an error, as shown in Fig-2.
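A quick way to check which situation you are in (a sketch of what Fig-2 illustrates):
import plotly
print(plotly.__version__)  # should be 4.x or later
import plotly.express as px  # fails on setups older than Plotly 4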
Comparing Scatter plot with plotly and plotly express
A scatter plot allows the comparison of two variables for a set of data. Depending on the trend of the scatter plot, we could interpret a correlation.
With Plotly:
Fig-3: plotting between sepal_length and sepal_width
Plotly follows a particular syntax, as seen in Fig-3. Initially, a variable is created to hold the plot (Note: the plot type should be given in the form of a list, as shown in the first line in Fig-3). In this case, we named it "data" (the most common notation); the variable name can be of your choice. This "data" variable contains a plot type call. go.Scatter is one among many graph objects; each plot type has its own graph object. These objects typically accept a few parameters. For instance, scatter graph objects take two main parameters (assigning the x-axis and y-axis). go.Layout (this is also one of the graph objects) is used to define the layout for the plot. Then a Figure object from graph objects is created to use both the data and layout variables to plot.
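Since Fig-3 is an image, here is a reconstruction of the pattern it describes; the column names follow the iris dataset discussed below:
import plotly.express as px
import plotly.graph_objects as go

df = px.data.iris()  # the iris dataset described below
data = [go.Scatter(x=df["sepal_length"], y=df["sepal_width"], mode="markers")]
layout = go.Layout(title="sepal_length vs sepal_width")
fig = go.Figure(data=data, layout=layout)
fig.show()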
The dataset I have used is very famous and perhaps the best-known database in the pattern recognition literature. This dataset contains 3 classes (Setosa, Versicolour, Virginica) of 50 instances each, where each class refers to a type of iris plant. In Fig-3, we have plotted all classes to find the relation between sepal_length and sepal_width. But as all the datapoints are represented in the same color, we are unable to draw any conclusions from the plot, because Plotly doesn't give you hue in a plot (which is a parameter in seaborn). So the alternative is that either the data should be plotted using a group-by method or individual traces should be created for each class variable.
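For contrast, Plotly Express gives you the seaborn-style hue behaviour in a single call through its color argument (the group-by and per-trace approaches follow in the next section):
import plotly.express as px

df = px.data.iris()
fig = px.scatter(df, x="sepal_length", y="sepal_width", color="species")
fig.show()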
Grouping by data to plot with variation in each class | https://medium.com/analytics-vidhya/interactive-data-visualization-became-much-easier-with-help-of-plotly-express-64c56e781b53 | ['Chamanth Mvs'] | 2020-10-06 02:07:34.860000+00:00 | ['Plotly Express', 'Plotly', 'Data Visualization', 'Python'] |
Running in Circles
Why Agile Isn’t Working and What We Do Differently
Agile started off as a set of values. Values are subtle and abstract, so as agile spread, what spread wasn’t the values but the practice of working in cycles. Cycles are easy to explain and easy to copy.
People in our industry think they stopped doing waterfall and switched to agile. In reality they just switched to high-frequency waterfall.
Agile became synonymous with speed. Everybody wants more, faster. And one thing most teams aren’t doing fast enough is shipping. So cycles became “sprints” and the metric of success, “velocity.”
But speed isn’t the problem. And cycles alone don’t help you ship. The problems are doing the wrong things, building to specs, and getting distracted.
Claiming there’s a True Agile™ somewhere in the past or future doesn’t help either. When your team struggles with shipping, you need practical steps you can apply here and now. Not just an ideal.
Cycles are good. We work in cycles at Basecamp. But in addition to cycles you need three other practices to ship on time and in good health.
Deliberate resource allocation
Designers and developers can’t make progress if people constantly pull at their attention. It doesn’t matter if support finds a bug or sales needs a new feature. Allocating resources means dedicating resources. Whoever allocates the time and money to build a feature must also protect the team so they can do what was asked. It takes a mandate from above to secure the team’s time and attention. The team is doing this and only this during the cycle.
At Basecamp we start each cycle of work with a team of three: one designer and two programmers. They have nothing to do but this project. If you feel you must fix bugs the moment they arise, then dedicate resources for that. If you have tension between sales and product, make a choice for this cycle. If you don’t have enough people, rotate cycle time among departments.
Only management can protect attention. Telling the team to focus only works if the business is backing them up.
Mutable requirements
If a team works to a spec, there’s no point in iterating. The purpose of working iteratively is to change direction as you go. Defining the project in advance forces the team into a waterfall process. If every detail of the plan must be built, teams have no choice when they discover something is harder than expected, less important than expected, or when reality contradicts the plan.
At Basecamp we kick off each project with a core concept. We do our homework on the strategy side to validate that some version of the idea is meaningfully doable in the time we’re allocating. We’re also sure that less than 100% of the concept will ship. Not everything will make it but the important things will. If we aren’t sure, we’ll slot something else into the cycle and come back when we’ve honed the concept enough.
To start teams off with a concept like this, you have to separate the core from the periphery. Separate the things that are absolutely important from the things that were just "the idea we had for how to do it."
A single UI decision can cause a week of unnecessary work. A programmer could struggle to refactor a bunch of Javascript only to discover that the detail wasn’t core to the concept. The designer just happened to pick that interaction without knowing the cost.
In practice, this means giving the teams power to redefine scope. Some things are essential and other things aren’t. The team must be able to know the difference and make calls accordingly. To reinforce this, we give teams low fidelity hand-drawn sketches when a cycle starts and spend more time on motivation than the specifics of design and implementation.
One of Jason’s sketches for the team that built To-Do Groups. They ended up choosing not to build the “add” buttons below each group.
Uphill strategies
Teams that track “velocity” and “story points” treat development as if it’s linear labor. Software development is not like moving a pile of stones.
If work was like that, you could count the stones, count the time to move one, do the math and be done.
Work that requires problem solving is more like a hill. There’s an uphill phase where you figure out what you’re doing. Then when you get to the top you can see down the other side and what it’ll take to finish.
The uphill phase is full of false steps and circles and dead ends. It’s where you encounter the unexpected. The programmer says “yeah that’ll take two days” but then they start touching the code and the reality is more complex. Or the designer says “this interaction will be perfect” and they test it on the device and it’s not what they hoped.
The most important question for a team isn’t “what is left?” but “what is unknown?” Can you see the edges? Have you gone in there and seen everything that needs to change? The only way to gain certainty is to roll up your sleeves and engage with the reality of the problem.
At Basecamp our teams seek out the areas with the scariest unknowns and work on them first. This uphill work requires strategies. We wrote about these in Getting Real. Open the code, spike something that works, load it with real data and try it. When the whole feature is too big to prototype, factor out the most important pieces and spike them.
Different phases of the uphill and downhill work
The uphill work is where you learn what’s hard and what’s possible and make value judgements. Here’s where you make decisions about those mutable requirements because you’re seeing the real costs and opportunities in the implementation. Learning uphill requires the focus and time given to the teams by deliberately allocated resources.
We’ve done this informally for years, focusing on unknowns and chipping at them first. We recently started formalizing this with the Hill Chart. A question we often ask these days is “where is that on the hill?”
Here’s a snapshot from the Search in Place project that shipped in October.
First reworking search results, then moving them into the nav
And here are some moments in time from the To-Do Groups project.
The three most important pieces went over the hill first
Wrapping up
It takes a whole business to ship
Whether teams work in cycles or not is just one part of the story. An “agile” team isn’t going to get very far if management doesn’t protect their time. And if they don’t have flexibility to change requirements as they learn, late nights and late delivery are guaranteed.
Designers and developers can learn the uphill strategies from Getting Real to gain certainty instead of crossing their fingers. Whoever sets requirements can give teams the room to push back in the uphill phase. And resource allocators can take more responsibility to guard the focus of their teams.
We’ve been doing it for 15 years. Hopefully sharing some of these techniques will help you do it too.
UPDATE:
Since this article was written, we built and released a feature in Basecamp 3 so you can make your own Hill Charts. See all the details here: | https://medium.com/signal-v-noise/running-in-circles-aae73d79ce19 | ['Ryan Singer'] | 2018-06-07 21:31:05.838000+00:00 | ['Product Management', 'Software Development', 'Agile'] |
The Myth of The Polite And Orderly Protest | Last week, the Milwaukee Bucks refused to take the court to play their scheduled NBA playoff game against the Orlando Magic in protest against systemic racism in this country and more specifically to stand in solidarity with protesters in the wake of the murder of Jacob Blake by police. This move promulgated through the sports ecosystem with the Lakers refusing to play, Naomi Osaka refusing to play her Western & Southern Open semi-final match the next day, and teams in Major League Soccer, Major League Baseball, and the WNBA refusing to play their matches that week.
Bucks guard George Hill was the catalyst for all of it, refusing to play on Wednesday, when the rest of his team joined him. He’s quoted by Yahoo Sports as saying, “It’s just sickening. It’s heartless. It’s a f — ed up situation…Like I said, you’re supposed to look at the police to protect and serve. Now, it’s looked at harass or shoot.” Sports has seen a monumental shift when it comes to activism in recent months, largely due to COVID-19, and players across the board are making their voices known. This represents a significant shift since before March, when Colin Kaepernick and Megan Rapinoe were at times very much alone in their continued protests against racism.
Predictably, Jared Kushner, a man who hasn’t worked a single day in his life, was quick to cast aside the sports protest according to Politico saying, “Look, I think that the NBA players are very fortunate that they have the financial position where they’re able to take a night off from work without having to have the consequences to themselves financially.” Callous, dismissive, and condescending, Kushner’s remarks represent the latest in a trend that has rampantly metastasized during the Trump years, the notion that if only the players/activists/protesters/agitators would just protest in the right way, then we can take them seriously.
Colin Kaepernick silently kneeling is disrespecting the flag; protesters blocking streets are inconveniencing drivers; activists confronting police are violent thugs; this athlete is too rich to protest; this actor is too out of touch to protest; if only they could protest over there where I don’t have to see them, then I can listen.
Out of sight, out of mind, and only then will we even begin to consider your request, except even that is a fantasy due to the ever-changing goal posts imposed upon protesters.
The myth of the acceptable protest.
When I hear about protesters blocking roads and stopping traffic, the first instinct I have is to rail at the audacity of the inconvenience. Why are they deciding to inconvenience people who are just trying to get home? It’s an easy thought to have, and on the surface, there is a logic to it. I’m sure most people have thought something similar at some point.
But the thing is that when you analyze and attack those thoughts, you can realize pretty quickly that the whole point of a protest is to make you uncomfortable, to attack the comfort and make it hard to look away, and a surefire way to recognize how effective a protest can be is to measure the remarkable speed with which the system snaps back. In response to the most recent sustained protests against police brutality, the system has snapped back hard. All of a sudden all of these conditions have been imposed and just like that the thrust of the protest is shrouded in a cloud of punditry that tries to dictate the most appropriate form of protest.
“If only the orderly protester could lay out their demands politely,” Tucker bemoans, “then I might consider taking them seriously.”
“If only this global protest movement could perfectly police every single one of its members,” Hannity whines.
Of course, the catch is that this mythical form of polite and orderly protest has never worked for a reason. Once that thing you are protesting puts you in a box, then that’s the game. It’s all over. Once they start erecting walls around what’s acceptable, then you’re done. There’s no incentive for them to listen to you ever again, and as the old saying goes, when has power ever been given up willingly?
So the whole point is to be unruly and loud and in your face because the one thing protesters can wield effectively is human emotion. If they can generate feeling, if they can put it right in front of you, then there’s hope for change. That’s why protesters show up at houses and get in politicians faces. It’s in the hopes of sparking either empathy, shame, or both.
Telling people how to protest is privileged and wrong.
The history of successful protests in this country and around the world has always been about disruption. That’s the only way for it to be effective and if the concern trolls in the media and our government are worried about protests, then two things are true: the protests are working and the only way to stop them is to honestly and thoroughly address their concerns.
You want the protests to end? Attack the issues that Black Lives Matter and others are reacting to. The right to petition government for redress is enshrined within our Constitution. And if they are serious with their chants, there will be no peace until there is justice. If I were the government, an easy first step would be to remove the antagonizing forces that seek out violence and harm against protesters, namely police thuggery, white supremacist groups, and the biggest antagonizer of all: Donald Trump. | https://robertthepotter.medium.com/the-myth-of-the-polite-and-orderly-protest-de7ed812b84 | ['Robert Potter'] | 2020-09-02 10:01:05.340000+00:00 | ['Politics', 'Protest'] |
HSBC PayMe suspected outage: all functions unavailable | A columnist in political development in Greater China region, technology and gadgets, media industry, parenting and other interesting topics. | https://medium.com/@frederickyeung-59743/%E5%8C%AF%E8%B1%90payme%E7%96%91%E6%95%85%E9%9A%9C-%E6%89%80%E6%9C%89%E5%8A%9F%E8%83%BD%E5%9D%87%E6%9C%AA%E8%83%BD%E4%BD%BF-cefdd7493de8 | ['C Y S'] | 2020-12-23 04:20:57.370000+00:00 | ['Hsbc', 'Hong Kong'] |
Introduction to Blockchain With Implementation in Python | Cryptocurrency has revolutionized the way we conduct transactions and this is just scratching the surface.
Photo by André François McKenzie on Unsplash
Introduced in 2009 by an unknown individual (or group) under the name Satoshi Nakamoto, blockchain technology is a breakthrough and has shown immense potential as a fast, safe, and easy-to-use method for transferring and receiving funds. How so, you might ask?
Well, for starters, there is no “central entity” monitoring transactions like a Bank would.
All details are kept private between the recipient and the sender, and no “middleman” is needed. Moreover, the ease of access blockchain offers is amazing, to say the least. For example, there are still countries in the world where conventional payment websites (PayPal, for example) are banned, and this creates a lot of hurdles for people who are trying to receive or send funds to foreign nations.
Cryptocurrency, on the other hand, knows no boundaries. It can be sent from anywhere in the world to any place in the world.
But the biggest benefit of cryptocurrency, in my mind, is the protection it provides against inflation. Yes, blockchain technology is in its developing phases and that’s why it is volatile, but unlike conventional currency, which can be printed to oblivion, cryptocurrency is REALLY HARD to mass-produce.
It takes a lot of resources to “mine” one crypto coin, and that’s where its value comes from. To sum it up, cryptocurrency is a good defense against inflation like gold, but unlike gold, it can be very easily stored and transferred, and is virtually impossible to steal.
How does Blockchain work?
Photo by Clifford Photography on Unsplash
So how does a blockchain work? What makes it so secure? In this article, I will explain how a blockchain works and write, side by side, the code that goes into creating one. So without further ado, let’s get started.
As the name suggests, a blockchain is a “chain of blocks”. Each block contains a “hash”, an “index”, and information about the particular transactions that took place.
All the blocks in the blockchain are linked to each other through the “hash” variable. A “hash” encodes information from the previous block in the chain, and that’s what keeps the entire chain linked and connected.
Basic Blockchain diagram
If any value in any of the blocks is “tampered with”, this will cause the hash to change as well, and it won’t match the hash stored in the block after it, which will alert the network of “tampering” and render the entire chain useless. So for any hacker to successfully “hack” into a chain, he/she would have to change the values not just of one single block but of all the blocks before and after it, which is virtually impossible.
Moving to the Python part, I will first define my sender as a random individual, my Blockchain as a list with a “dummy genesis block”, the transactional information as an empty list, and import a library by the name of “hashlib” (more on that later)
from hashlib import sha256
import json  # needed later by hash_block()

sender = 'Moeed'

# the dummy genesis block must be defined before the chain that contains it
genesis_block = {'previous_hash': 'XYZ',
                 'index': 0,
                 'transactions': [],
                 'proof': 0}

blockchain = [genesis_block]
open_transactions = []
I will then define a function that will ask the sender to enter his details such as the transaction amount to send and the address of the recipient.
def get_transaction():
    recipient = input('Enter your recipient: ')
    amount = float(input('Enter your amount: '))
    return (recipient, amount)
Then I will try to append the transactional informational into the “open_transaction” list defined earlier.
while True:
    I = input('Enter your choice: ')
    if I == '1':
        data = get_transaction()
        recipient, amount = data
        transaction = {'sender': sender,
                       'recipient': recipient,
                       'amount': amount}
        open_transactions.append(transaction)
        print(open_transactions)
Note: The transaction information is stored in a dictionary so we can access the values using the “keys” when we feel the need.
Now I will try to create a function that will create the “Block” and append it to the Blockchain. But before I do that, I would like to go over where a Blockchain gets its security from.
SHA-256
The SHA-256 is a hashing algorithm created by the NSA back in 2001 which takes an input of any size and converts it to an output of fixed size. The beauty of the SHA-256 is that the Output is close to impossible to decode thus ensuring security.
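As a quick illustration of the fixed-size property, the digest is always 64 hex characters no matter how long the input is:

from hashlib import sha256

print(sha256('hello'.encode('utf-8')).hexdigest())
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
print(len(sha256(('hello' * 1000).encode('utf-8')).hexdigest()))  # still 64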
The “hash” part of the Block in the Blockchain is nothing more than the Output of the SHA-256 algorithm whereas the input is the “hash of the previous block” the “index” and the “transaction information” all combined into the string format.
Let's look at the code here:
I have already imported the “hashlib” library.
I will try to create a function which will take the “hash”, “index” and “transaction information” of the previous block and convert it into a hash for the block after it.
def hash_block(last_block):
    # concatenate all of the previous block's values into one string
    previous_hash = ''
    for key in last_block:
        previous_hash = previous_hash + str(last_block[key])
    # encrypt the combined string with SHA-256
    hash = sha256(json.dumps(previous_hash).encode('utf-8')).hexdigest()
    return hash
As our Blockchain starts from the “genesis block” defined earlier, we give it a dummy “previous_hash” value of XYZ.
The “Proof of Work”
Photo by Bermix Studio on Unsplash
We know that the hash is generated using the SHA-256 algorithm by a set of inputs. But that makes adding a Block to the Blockchain a little too “easy”.
Blockchain mining is on purpose hard so that we don’t mass produce coins and the supply is kept in check as per the demand. This is accomplished using the “proof of work” algorithm.
The “proof of work” uses the same SHA-256 algorithm, but instead of simply taking an input and giving an output, a condition is placed on the result; for example, the output hash needs to have its first two characters be zeros.
The hash is calculated using the SHA-256 algorithm with the same inputs (previous_hash, index, transaction information) but with an addition of the iteration number also known as “proof”. So basically, the “proof” will be incremented by one until the condition we have placed ( first two zeros) is satisfied.
Once the condition is satisfied, The resulting hash from the “hash_block” will be added to the Block and then the Block will be added to the Blockchain.
Now let’s look at the code for the “proof of work” function
def proof_of_work():
    previous_hash = ''
    last_block = blockchain[-1]
    for key in last_block:
        previous_hash = previous_hash + str(last_block[key])

    # increment the proof until the resulting hash starts with '00'
    proof = 0
    guess_hash = previous_hash + str(proof)
    hash = sha256(guess_hash.encode('utf-8')).hexdigest()
    while hash[0:2] != '00':
        proof = proof + 1
        guess_hash = previous_hash + str(proof)
        hash = sha256(guess_hash.encode('utf-8')).hexdigest()
    print(hash)
    return proof
As already explained, the above code will perform iterations until the “zero” condition is satisfied. Let’s move forward and look at how a Block is mined.
Mining a Block
Photo by Aleksi Räisä on Unsplash
We have already gone over the proof of work algorithm. So how is a Block mined? Well, Let’s look at the code.
def hash_block(last_block):
    previous_hash = ''
    for key in last_block:
        previous_hash = previous_hash + str(last_block[key])
    hash = sha256(json.dumps(previous_hash).encode('utf-8')).hexdigest()
    return hash

def mine_block():
    last_block = blockchain[-1]
    hashed_block = hash_block(last_block)
    proof = proof_of_work()
    block = {
        'previous_hash': hashed_block,
        'index': len(blockchain),
        'transactions': open_transactions,
        'proof': proof
    }
    blockchain.append(block)
    print(blockchain)
    print(hashed_block)
    print(proof)
    return True
As can be seen from the above code, we first define a “hash_block” function, which starts with an empty string named “previous_hash”, appends the other values of the block (such as the index and transaction information) as strings to create one big string, and returns it hashed with the SHA-256 algorithm.
Then we begin the “mine_block” function which uses the “hash_block” function to return a hash and the “proof of work” to calculate the hash which has two zeros at the start.
The results (proof number and the hash) are then added to the dictionary by the name of “block” which is further appended to the blockchain list.
This creates the first Block. But there’s one thing that is still missing
What if someone were to tamper with the blockchain? Let’s say someone were to change values in a block. How can we stop that from happening? It’s simple: we put a check on the chain which compares the “previous_hash” of each block with a freshly calculated hash of the block before it. Let’s look at the code:
def verify_chain():
    for (index, block) in enumerate(blockchain):
        if index == 0:
            continue
        if block['previous_hash'] != hash_block(blockchain[index - 1]):
            print(block['previous_hash'])
            print(hash_block(blockchain[index - 1]))
            print(block)
            print(blockchain[index - 1])
            return False
    return True
and then, at the end of the main interaction loop, we add a check that uses the function above:
    if not verify_chain():
        print('Invalid block')
        break
If any value inside our blockchain is tampered with, the check above will print “Invalid block” and stop the program.
and Finally, Some code for the user to interact with the chain
while True:
    print('1 to receive transactions')
    print('2 to mine block')
    print('3 to alter block')
    I = input('Enter your choice: ')
    if I == '1':
        data = get_transaction()
        recipient, amount = data
        transaction = {'sender': sender,
                       'recipient': recipient,
                       'amount': amount}
        open_transactions.append(transaction)
        print(open_transactions)
    if I == '2':
        mine_block()
        open_transactions = []
    if I == '3':
        alter_block()
    if not verify_chain():
        print('Invalid block')
        break
The above code is pretty self-explanatory. Input 1 is used to receive transactions from the input. Input 2 is used to mine the block and input 3 is used to “alter the block”.
We use “alter_block” to test whether the “verify_chain” is working or not.
The code for “alter_block” function is:
def alter_block():
    blockchain[0] = {'previous_hash': 'ABC',
                     'index': 0,
                     'transactions': [],
                     'proof': 0}
Note: In the above function, we alter the first block by changing its “previous_hash” from “XYZ” to “ABC”, which will produce a mismatched hash, and hence the “verify_chain” check will come into play.
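A quick sketch of the tamper test in action (assuming at least one block has been mined on top of the genesis block):

alter_block()          # corrupt the genesis block
print(verify_chain())  # False: the stored hash no longer matches the recalculated one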
Conclusion
In this article, I went over the basics of Blockchain Technology with a simple implementation in Python. There’s definitely more to Blockchain than what I covered such as Node Networks, APIs, and OOP.
To sum it up, Blockchain Technology has immense potential and is already showing a promising future. Let’s see what the future holds | https://medium.com/swlh/introduction-to-blockchain-with-implementation-in-python-c12f8478a3c4 | [] | 2020-09-29 19:51:13.040000+00:00 | ['Data Science', 'Machine Learning', 'Technology', 'Finance', 'Blockchain Technology'] |
Every Capital, Australia’s first retail cryptoasset hedge fund | Giving every Australian access to crypto and quality ICOs
We’re announcing the upcoming launch of Every Capital, Australia’s leading retail crypto investment fund.
As a retail fund, our investments are made up of contributions from everyday individuals, not big institutions.
We’re building Every because we believe there’s a better way for Australians to get involved with cryptocurrency.
Let us tell you about Every:
Every is easy
Getting started with crypto is not straightforward, no matter where you are in the world. There’s a steep learning curve when it comes to understanding wallets, exchanges, mining and blockchain technology itself.
In Australia it’s even more difficult, since there’s only a small handful of gateways into cryptocurrency. Local exchanges trade at a significant premium to international sellers, and most international sellers won’t allow Australians to buy in directly — or charge us a premium too.
That’s where Every Capital comes in. You will be able to sign up quickly and invest into the cryptocurrency and ICO fund with Australian Dollars. Every is set up as a Managed Investment Scheme (MIS), investing in a diverse portfolio of crypto-assets and select Initial Coin Offerings (ICOs).
Every is secure
Cryptocurrency turns traditional money on its head. All of a sudden, you become your own banker. You hold the keys to your money — and that means you own the responsibility of looking after it too.
Securing your private keys, avoiding scams, and protecting yourself from hackers are just a few of the challenges that appear once you start investing in cryptocurrency.
Every handles that for you. We utilise cutting edge security protocols and technology to protect your investment. Cold storage, multi-sig wallets — we’ve got your back.
Every has it all
We’re not just a new way to invest in cryptocurrency. Every brings a whole suite of tools to help you manage your money.
Every will get you started with easy investment and technology guides. We also provide tools to track your portfolio and manage your investments.
Finally, we’ve partnered with one of Australia’s leading fund managers to make your financial life easier too. Every will provide you with clear end of year reporting, including tax reporting, so you can make sure you’re getting things right.
Every is for everyone… In Australia, anyway.
We really believe in the potential of blockchain, both as a technology and investment.
Australian companies are already using blockchain technology to do incredible things; disrupting industries as broad as agriculture and renewable energy, and even rethinking fundamental ideas like voting and democracy.
We‘ll give experienced users a new point of entry to crypto that makes good financial sense. We‘ll give interested beginners a better way to get started.
And if you know someone who has heard about cryptocurrency and is looking for a way to get involved, consider pointing them to us — it’ll save you trying to explain public key encryption around the dinner table.
Every will begin rolling out to select customers in Q3 2018.
Reserve your place on our waitlist now at www.every.capital | https://medium.com/everycapital/every-capital-is-australias-first-retail-cryptoasset-hedge-fund-c190e650958b | ['Jack Baldwin'] | 2018-06-06 06:06:05.097000+00:00 | ['Investment', 'Bitcoin', 'Ethereum', 'Australia', 'Cryptocurrency'] |
Dfm Design for Manufacturing | A carbon-carbon factor for creation has become an advance of composite with extreme traits. The making procedure is not affordable with very high temperature and pressure will be demanded. The stabilized fiber will be placed in an inert surrounding and Carbon-carbon Composite manufacturing is emerging drastically. The implementation of the carbon basis factors starts with the development of the carbon threads.
DFM (design for manufacturing) is the practice of designing components, parts, or products for ease of manufacturing, with the end goal of making a better product at a lower cost. It is essential to examine its five principles. They are:
Process:
The manufacturing process selected must be appropriate for the part or product. For example, one should not use a costly, capital-intensive process like injection molding, with its dedicated equipment, to make a low-volume part that could have been produced with a lower-capitalized process like thermoforming.
Design:
Design has great significance. The actual drawing of the part or product has to conform to good manufacturing principles for the process you have chosen.
This includes checking for constant wall thickness, which permits consistent and quick part cooling; texture (allow 1 degree of draft for every 0.001 of texture depth on sidewalls); and ribs (as a rule of thumb, 60 percent of the nominal wall thickness).
Raw Material:
It is critical to select the correct material for the product. Some material attributes to consider during DFM include:
Electrical Attributes:
Does the material need to act as an insulator?
Environment:
The part or product must be designed to withstand the environment it will be subjected to. All the styling in the world will not matter if the part does not perform well under its normal operating conditions.
Testing:
All products must comply with quality and safety standards. Some industries have their own standards, some follow third-party standards, and some have internal, company-specific requirements.
DFM needs to take place early in the design process, and a proper DFM review should involve everyone: engineers, designers, contract manufacturers, mold builders, and material suppliers. The intent of DFM is to look at the design at every level (component, sub-system, system, and holistic) to ensure it is optimized and carries no unnecessary cost. With the support of DFM, design changes can happen quickly, where they are least expensive. There are some questions to keep in mind about DFM and the injection molding process:
Are there features or undercuts that will cause the part to get trapped?
Undercuts are protrusions or recesses in the design that stop the mold from sliding away from the part. They can get caught in the tooling and cause damage.
How uniform is the wall thickness?
Thick sections on plastic parts are usually created to give the part strength, but as noted above, non-uniform walls prevent consistent and quick cooling.
https://alpha-centaurillc.com/our-services/design-for-manufacture/ | https://medium.com/@alphacentaurillc81/dfm-design-for-manufacturing-c898d9e51a7f | ['Alpha Centaurillc'] | 2019-11-15 08:14:02.288000+00:00 | ['Manufacturing'] |
Simplifying Redux with Redux Toolkit | Main features of Redux Tool Kit API?
Redux Toolkit uses the following API functions, which are abstractions over the existing Redux API. These functions do not change the flow of Redux; they only streamline it in a more readable and manageable manner.
configureStore : Creates a Redux store instance like the original createStore from Redux, but accepts a named options object and sets up the Redux DevTools Extension automatically.
: Creates a Redux store instance like the original createStore from Redux, but accepts a named options object and sets up the Redux DevTools Extension automatically. createAction : Accepts an action type string and returns an action creator function that uses that type.
: Accepts an action type string and returns an action creator function that uses that type. createReducer : Accepts an initial state value and a lookup table of action types to reducer functions and creates a reducer that handles all action types.
: Accepts an initial state value and a lookup table of action types to reducer functions and creates a reducer that handles all action types. createSlice: Accepts an initial state and a lookup table with reducer names and functions and automatically generates action creator functions, action type strings, and a reducer function.
You can use the above APIs to simplify the boilerplate code in Redux, especially using the createAction and createReducer methods. However, this can be further simplified using createSlice, which automatically generates action creator and reducer functions.
What is so special about createSlice?
It is a helper function that generates a store slice. It takes the slice’s name, the initial state, and the reducer function to return reducer, action types, and action creators.
First, let's see what reducers and actions look like in traditional React-Redux applications.
Actions
import { GET_USERS, CREATE_USER, DELETE_USER } from "../constant/constants";

export const GetUsers = (data) => (dispatch) => {
  dispatch({
    type: GET_USERS,
    payload: data,
  });
};

export const CreateUser = (data) => (dispatch) => {
  dispatch({
    type: CREATE_USER,
    payload: data,
  });
};

export const DeleteUser = (data) => (dispatch) => {
  dispatch({
    type: DELETE_USER,
    payload: data,
  });
};
Reducers
import { GET_USERS, CREATE_USER, DELETE_USER } from "../constant/constants";

const initialState = {
  errorMessage: "",
  loading: false,
  users: [],
};

const UserReducer = (state = initialState, { type, payload }) => {
  switch (type) {
    case GET_USERS:
      return { ...state, users: payload, loading: false };
    case CREATE_USER:
      return { ...state, users: [payload, ...state.users], loading: false };
    case DELETE_USER:
      return {
        ...state,
        users: state.users.filter((user) => user.id !== payload.id),
        loading: false,
      };
    default:
      return state;
  }
};

export default UserReducer;
Now let's see how we can simplify and achieve the same functionality by using createSlice.
import { createSlice } from '@reduxjs/toolkit';

export const initialState = {
  users: [],
  loading: false,
  error: false,
};

const userSlice = createSlice({
  name: 'user',
  initialState,
  reducers: {
    getUser: (state, action) => {
      state.users = action.payload;
      state.loading = true;
      state.error = false;
    },
    createUser: (state, action) => {
      state.users.unshift(action.payload);
      state.loading = false;
    },
    deleteUser: (state, action) => {
      // filter returns a new array, so assign the result back to state
      state.users = state.users.filter(
        (user) => user.id !== action.payload.id
      );
      state.loading = false;
    },
  },
});

export const { createUser, deleteUser, getUser } = userSlice.actions;

export default userSlice.reducer;
As you can see, all the actions and reducers now live in one place, whereas in a traditional Redux application you need to manage every action and its corresponding case inside the reducer. When using createSlice, you don't need a switch statement to identify the action.
When it comes to mutating state, a typical Redux flow will throw errors, and you will need special JavaScript tactics like the spread operator and Object.assign to overcome them. Since Redux Toolkit uses Immer, you do not have to worry about mutating the state. And since a slice creates the actions and reducers, you can export them and use them in your components and in the store to configure Redux, without needing separate files and directories for actions and reducers, as below.
import { configureStore } from "@reduxjs/toolkit";
import userSlice from "./features/user/userSlice";

export default configureStore({
  reducer: {
    user: userSlice,
  },
});
This store can be directly used from the component through redux APIs using useSelector and useDispatch. Notice that you don’t have to have any constants to identify the action or use any types.
Handling async Redux flows
To handle async actions Redux toolkit provides a special API method called createAsyncThunk which accepts a string identifier and a payload creator callback that performs the actual async logic and returns a promise that will handle the dispatching of the relevant actions based on the promise you return, and action types that you can handle in your reducers.
import axios from "axios";
import { createAsyncThunk } from "@reduxjs/toolkit";

// BASE_URL is assumed to be defined or imported elsewhere in the project
export const GetPosts = createAsyncThunk(
  "post/getPosts",
  async () => await axios.get(`${BASE_URL}/posts`)
);

export const CreatePost = createAsyncThunk(
  "post/createPost",
  async (post) => await axios.post(`${BASE_URL}/post`, post)
);
Unlike traditional data flows, actions handled by createAsyncThunk will be handled by the section extraReducers inside a slice.
import { createSlice } from "@reduxjs/toolkit";
import { GetPosts, CreatePost } from "../../services";

export const initialState = {
  posts: [],
  loading: false,
  error: null,
};

export const postSlice = createSlice({
  name: "post",
  initialState: initialState,
  extraReducers: {
    [GetPosts.fulfilled]: (state, action) => {
      state.posts = action.payload.data;
    },
    [GetPosts.rejected]: (state, action) => {
      state.posts = [];
    },
    [CreatePost.fulfilled]: (state, action) => {
      state.posts.unshift(action.payload.data);
    },
  },
});

export default postSlice.reducer;
Notice that inside extraReducers, you can handle both resolved (fulfilled) and rejected (rejected) states.
Through these code snippets, you can see how well this toolkit simplifies the Redux code. I have created a REST example that leverages Redux Toolkit for your reference.
Final thoughts
Based on my experience, Redux Toolkit is a great option to use when getting started with Redux. It simplifies the code and helps to manage the Redux state by reducing the boilerplate code.
Finally, just like Redux, Redux Toolkit is not built just for React. We can use it with any other frameworks such as Angular.
You can find more information on the Redux Toolkit by referring to their documentation.
Thank you for Reading !!! | https://blog.bitsrc.io/simplifying-redux-with-redux-toolkit-6236c28cdfcb | ['Madushika Perera'] | 2021-08-29 14:58:31.882000+00:00 | ['Redux Toolkit', 'React', 'Redux'] |
Maecenas ART Crowdsale Terms | We are excited to announce the terms for our upcoming crowdsale. You can learn here about the dates, amounts and price as well as how we will use the funds to build our decentralized art gallery democratizing access to fine art.
Maecenas is part of the platform of Cofound.it, a proven leader in supporting companies throughout the process. So our crowdsale will be accessible on https://cofound.it/en/projects/maecenas/ . Make sure you bookmark this URL!
The pre-sale is happening on 5th September and you will need a Priority Pass to participate (more info here). The public sale starts on 7th September and it’s a good idea to join our mailing list to make sure you stay up to date.
Maecenas will issue a total of 100,000,000 ART tokens. ART is an ERC20 token issued on the Ethereum network.
We pronounce our token by its letters “A-R-T”. (Thought you’d like to know!)
A total of 30,000,000 ART tokens will be available with the following schedule:
Starting on 5th September @ 5pm CEST and for 24 hours, 15,000,000 ART will be available to Priority Pass™holders. Individual caps will be announced by Cofound.it a few days before the crowdsale.
Starting on 6th September @ 5pm CEST and for 24 hours, another 15,000,000 ART will be available to Priority Pass™ holders, without any individual caps.
Starting on 7th September @ 5pm CEST and for four weeks, any leftover from the pre-sale will be available to everyone to acquire without caps.
To get more information on how to be part of the pre-sale and become a Priority Pass™ member click here.
Please note that contributions will only be accepted in ETH.
The Crowdsale will end on the 5th of October at 5pm CEST.
Maecenas Targets
The crowdsale seeks to collect ETH equivalent of US$ 3 million (soft cap) which will allow Maecenas to launch the platform and operate in our first market. That is highly likely to be Geneva, Switzerland.
There is a further milestone of US$ 10 million which Maecenas will use to accelerate growth and operate in additional markets in top art cities worldwide (e.g. New York, London, Paris, Luxembourg, Shanghai, Hong Kong, Singapore).
There is a further stretch goal of US$ 20 million (hard cap) which Maecenas will use to facilitate (underwrite) platform acquisition of high-grade artworks to be listed on the platform.
In summary:
Soft cap: US$ 3M (launch platform in one market)
Milestone: US$ 10M (additional markets)
Max cap: US$ 20M (art funding)
This target prices each ART token at US$ 0.66. The actual price in ETH will be set a few days before the sale starts.
Token Allocation
30% sold in the crowdsale
30% kept as reserve liquidity fund to guarantee ART available in auction transactions
20% allocated to incentivise partners and client acquisition (all tokens granted to external parties will have a long vesting period)
20% retained by Maecenas to incentivise existing and future talent, and to fund any future operations if required. Team tokens will have a 24-month vesting periods.
Use of Proceeds
35% Research & Development
20% Sales & Marketing
20% Legal & Compliance
15% Art Collections & Funds Financing
10% Operations
Please note that the percentage allocations are a model based on estimates and may vary depending on the actual amount of ETH raised, its dollar-value equivalent at the time of conversion as well as the wider art market’s opportunities. | https://medium.com/maecenas/crowdsale-sale-terms-1ec4f06c9055 | [] | 2017-08-23 12:00:29.339000+00:00 | ['Investment', 'Crowdsale', 'Ethereum', 'Art', 'Blockchain'] |
Lighting solutions demand 2020 | Emergence of advanced lighting technologies along with advent of smart components to impact the global industry landscape
Source: pixabay
Lighting solutions, an intrinsic part of everyday life, have become even more prominent in recent times. Lighting technology has evolved to a great extent, to the point that it is being used for biological marvels like plant cultivation.
Over the past few years, the lighting market has witnessed profound technological transformations. At present, the rapid emergence of new technologies is facilitating possibly the biggest strides in the advancement of lighting since the advent of the LED bulb.
The past few decades have proved to be an exciting period with respect to lighting technology progression and with the breakneck speed at which technology is evolving, this period is likely to become more fascinating in the years ahead.
Modern lighting technologies have observed great development across the whole spectrum, including fluorescent, metal halide, incandescent and LED, among others.
Solid State Lighting
Lighting systems that leverage LEDs (light emitting diodes), OLEDs (organic light emitting diodes) or light emitting polymers are known as solid-state lighting or SSL. Even though LEDs have been in existence for over 50 years, it was the emergence of solid-state lighting technologies that took them beyond their initial use as indicator lights in electronic devices.
Solid-state lighting is a technology where LEDs replace traditional fluorescent or incandescent lighting systems. These lighting systems are used for general lighting applications. Solid-state lighting systems have numerous advantages over their conventional counterparts, given their low heat emission, low energy dissipation, absence of hazardous components and ability to withstand impact without shattering.
In the current scenario, solid-state lighting market has evolved to such an extent that LEDs are rapidly becoming the preferred lighting solution across myriad applications. Furthermore, the surge of technological innovations over the past two decades had allowed LEDs to transcend their initial purpose and expand their application scope to signal devices like traffic signs, limited lighting applications like flashlights as well as more general lighting solutions like industrial or residential lighting.
The latest development in the SSL market is the incorporation of IoT (internet of things) and 3D printing. For instance, the Lighting Research Center (LRC), which is a part of the Rensselaer Polytechnic Institute, recently collaborated with Eaton Corporation to create a completely 3D printed luminaire with integrated LEDs. This innovation is part of a DoE (Department of Energy) funded project, designed to study the application of 3D printing technology with respect to solid-state lighting.
LED Lighting
Even as multiple industries gain massive traction with regards to new trends and technologies, the LED lighting market is one of the eminent leaders of the pack. This leading role has been characterized by prolific developments in the LED sector over the years, ranging from the emergence of high brightness LEDs, to blue light to the upcoming white light LEDs.
Modern LED systems demonstrate a lifespan that is nearly 2–4 times that of conventional lights, without compromising on the quality of light and efficiency.
The Department of Energy is also taking consistent steps towards developing longer lasting and highly efficient LED lights. These efforts are backed by the Office of Energy Efficiency & Renewable Energy’s claim that the use of LED lights could cut down lighting consumption in the U.S by half, ultimately presenting a possible solution to climate change. Studies suggest that widespread implementation of LED systems could save nearly 348 TWh of electricity by 2027.
The LED industry had undergone great transformations over the past decade, with higher efficiencies, lower costs and a broader application scope throughout. This transformation is in turn presenting various businesses several growth prospects for the years to come.
Leading LED technology innovator Seoul Semiconductor Co. Ltd.’s natural spectrum LED lights, Sunlike Series has recently been chosen by WalaLight™ to be integrated into its Healthy Circadian LED Lighting System. The Sunlike LED Series is known for its ability to emit light that matches the spectrum of natural sunlight, making it ideal for the passive adaptive nature of the Healthy Circadian lighting system. The system is able to deliver the ideal light spectrum with the use of an intelligent Kelvin-changing technology, designed to enhance human health, by replicating the sun’s spectral nuances.
Smart Lighting
The once-foreign idea that homes could not just match but adapt to changing human preferences is now a reality with the advent of the smart homes concept. Smart locks, video doorbells, voice assistants and a plethora of other technologies help homes become smarter and in turn make life easier. One of these systems is smart lighting.
The smart lighting market involves various internet-linked solutions that enable the management, control and monitoring of the home’s lighting systems. These solutions can include smart lamps, smart bulbs as well as smart plugs.
Since installing a complete smart lighting solution is relatively expensive, due to the need for special smart sockets and equipment, manufacturers have developed a more economical solution to give customers access to smart lighting, i.e. smart plugs.
Smart plugs enable customers to turn their regular light fixtures into smart lighting systems and experience the benefits of smart lighting, albeit at a more affordable rate. These plugs also offer customers a way to try out a smart lighting system in their home before they commit to a complete switch.
To illustrate, ConnectSense, which is known for its series of home automation products, dubbed HomeKit, have recently introduced a new Smart In-Wall Outlet. This outlet, compatible with HomeKit solutions, enables the control of plugged-in accessories by Siri voice commands as well as the Home application, effectively turning regular appliances into smart appliances. The product also delivers accurate power supervision, so that homeowners can track the amount of energy being used an appliance, in real-time.
Human Centric Lighting
As more and more benefits of light continue to emerge, mounting evidence indicates that light impacts not just human vision, but also the way they think, behave and feel. With these advantages coming to light, a new lighting technology has emerged — human centric lighting.
Human centric lighting refers to lighting systems that leverage the power of light to enhance the daily lives and health of humans. These systems help tune lights to facilitate the ideal circadian rhythms and emotional requirements to enhance human comfort, health and productivity.
To illustrate, studies have shown that primary school students exposed to 6000K — 100fc average maintained lighting showed a 36% improvement in oral reading fluency, in comparison with the 17% demonstrated by students exposed to control lighting. This is just one of many studies being conducted to unveil the benefits offered by human centric lighting to everyday lifestyles.
Many organizations are already working on testing HCL to gain a competitive edge. A notable example of this is the Seattle Mariners professional baseball team. The team’s locker room installed a human centric, circadian lighting system in 2013, to help alleviate the effects of jet lag and harmonize the energy levels of the team’s players, both pre- and post-game.
The prevalence of major players is bolstering the human centric lighting market outlook, with multiple R&D efforts, investments and product developments taking place.
For example, SCHOTT, a renowned lighting expert, is working in conjunction with jetlite, an on-board solutions provider for jetlag and MRO providers like Lufthansa Technik & Etihad Engineering to explore and develop a new technology for human centric lighting. The system, which will be jointly produced by all involved parties, will leverage SCHOTT’s HelioJet LED cabin illumination technology, to ensure even distribution of light and superior color stability across the entire cabin. | https://medium.com/@groundalerts/lighting-solutions-demand-2020-emergence-of-advanced-lighting-technologies-along-with-advent-of-4128ed2a46f4 | [] | 2020-01-08 09:41:27.107000+00:00 | ['Lighting', 'Internet of Things', 'Oled', 'Led', 'Led Lighting'] |
Content Writing | Content Writing is about expressing your thoughts in text, video, photos, and infographics. Content writing is used by many brands to increase their businesses. Ideally used in digital marketing purposes or showing some unique searches. Content writing includes many things like — blogs, articles, technical content, and other information that can be shown to express thoughts about any product. Content writing has to unique so planning different strategies always helps in grabbing new viewers.
Presenting unique ideas in text and infographics helps readers understand the information. A bit about me: I hold a Master's in Computer Applications, and while completing it I began working as a content writer. I have now been writing content for the last 3 years, mainly for different companies.
Now I am looking for freelance work to show some of my skills. I have mainly done technical content writing, but I am keen to explore different writing styles and topics.
Feel free to contact me, as I am very keen to take on new challenges in content writing. | https://medium.com/@add-rbansal/content-writing-b6aa37551c55 | ['Raghav Bansal'] | 2020-12-10 18:26:15.266000+00:00 | ['Content Writing', 'Content Creation', 'Content Marketing', 'Technical Writing', 'Content Strategy'] |
Solume’s Weekly Crypto Digest #3 | Solume’s Weekly Crypto Digest #3
Jan.22–26, 2018
Hi and thanks for following Solume.io! While the market is recovering from last week's bloodbath, check out this new edition of our Crypto Digest!
Ethereum is The New Bitcoin
“The Weiss rating is what happens when traditional bankers think they understand crypto,” Ran Neuner tweeted. And you can be sure he wasn't alone in this wave of outrage over the recently published first-ever rating of 74 cryptocurrencies from Weiss Ratings.
So what’s the madness behind it?
None of cryptocurrencies received A “excellent” rating.
Bitcoin gets C+ “fair” rating, as well as DASH and ARK.
The only coins rated B “good” are Ethereum and EOS.
Cardano with yet-to-be-released product gets B-, as well as STEEM which enjoys growing adoption of its Steemit platform.
Ripple received “fair” C rank along with Dogecoin that initially was launched as a Joke Coin.
Electroneum, with its plagiarized whitepaper, gets C-, while Monero is rated C.
Weiss based their ranking on four factors: risk, reward, technology, and fundamental aspects of adoption and security. But if Weiss are so smart, they have known they had it coming from the crypto community:
But jokes aside, Charles Hoskinson from IOHK tweeted:
Some of the ranks were surprising, and social volume for featured coins went up 200%. Take EOS for an example here:
The strongest reaction without a doubt came from James Clayton who decided to take matters in his own hands and release his very own version of crypto rating awarding nine coins, including Bitcoin, Ethereum and Monero, with the highest A rating. Very generous of you James!
What are your thoughts? Are you a Weiss’ or a James’? Please comment below.
Bitcoin.org Update — The Latest BTC — BCH Drama
Since Bitcoin first hard forked in 2017, we’ve seen millions of “what-real-bitcoin-is” discussions between Bitcoin Core and Bitcoin Cash holders. Seems that pointing out slow transactions, poor adoption, high fees, lack of decentralization, etc never gets old in crypto.
Source: reddit.com
This week the world has seen another page of the BTC vs BCH saga. On Monday, users spotted a minor update on Bitcoin.org that removed the “fast transactions” and “low fees” parts of BTC's feature list.
Source: reddit.com
Fair enough: Bitcoin's transaction fees have been extremely high, and the strong emphasis the website had placed on “fast and cheap” operation doesn't seem to match reality anymore.
Minor website update sparked major reaction in social media.
And here we go again:
Despite the fact that now description on the website is less misleading, social media voices still have something to complain about:
BTC supporters, however, pin their hopes on the upcoming Lightning Network, proposed as a solution to Bitcoin's scalability problems and said to be capable of millions to billions of transactions per second across the network at a low transaction fee.
Throughout this entire mess, our algorithms successfully spotted a 64% growth in BTC’s social volume:
The Dogecoin’s Story
If you've been around crypto for a while, you've probably wondered what is up with that coin with a dog logo.
Started off as a meme-based joke coin, Doge unsurprisingly got lots of love from the crypto community (and a lot of humor, too). It’s supposed to be a fun and easy to use internet currency. And that’s it. There is a strong philosophy behind the coin: “1 Doge = 1 Doge, so why worry about USD/BTC exchange rates?”. And there is a strong community responsible for bringing much value to the coin. Such wow.
Despite the joke, Doge broke a 2-billion-dollar market cap in early January. Later on, it lost more than half of its value during last week's horror show and is now touching 800M.
Feel like you need some Doges in your crypto pocket? That’s how you get them:
Buy them on one of the exchanges
Mine them
Get tipped in Doges.
According to our data, Dogecoin's social volume reached an amazing 2235 points on January 8, which brought it to our hot TOP 5 for a few days (just check out the chart below).
Thanks for reading! Don’t forget to follow us on Twitter and have a great weekend!
Yours,
Polina
Community Manager at Solume.io | https://medium.com/solume-io/solumes-weekly-crypto-digest-3-1557603bc836 | ['Polina Krasniansky'] | 2018-01-26 10:16:24.406000+00:00 | ['Bitcoin', 'Weiss Rating', 'Ethereum', 'Cryptocurrency', 'Dogecoin'] |
Manually computing the coefficients for an OLS regression using Python | Python Implementation
Now, to the point of the article. To remain consistent with the commonly used packages, we will write two methods: .fit() and .predict(). Our data manipulation will be carried out using the numpy package. If you’re importing your data from another file, e.g. in a .csv format, you may use the pandas library to do so.
Let us import the modules:
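The import gist is embedded as an external snippet in the original post and is not shown here; based on the surrounding text, it would simply be:

import numpy as np
import matplotlib.pyplot as plt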
*The matplotlib import will come in handy later if you decide to visualise the prediction
Next, we will create a class for our Model and create a method that fits an OLS regression to the given x and y variables — those must be passed in as numpy arrays. The coefficients are obtained according to the vector form derivation performed earlier (np.linalg.inv() is a numpy function for matrix inversion and @ notation represents vector multiplication):
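The .fit() gist is likewise not shown here; a minimal sketch that matches the description (normal-equation betas via np.linalg.inv() and @, an optional intercept that prepends a column of 1's, and coefficients stored in self.betas) might be:

class Model:
    def fit(self, x, y, intercept=False):
        if intercept:
            # prepend a vector of 1's so the first beta acts as the intercept
            x = np.column_stack((np.ones(len(x)), x))
        self.x = x
        self.y = y
        # betas = (X'X)^-1 X'y
        self.betas = np.linalg.inv(x.T @ x) @ x.T @ y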
Our .fit() method stores the computed coefficients in the self.betas attribute to allow them to be accessed later by other methods. Note that we have added an optional parameter intercept; if we decide to fit our model with an intercept, the method will add a vector of 1’s to the array of the independent variables.
We now create the .predict() method:
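A sketch of the missing gist, continuing the Model class:

    # (inside the Model class)
    def predict(self):
        # y_hat = X @ betas, via vector multiplication
        self.y_hat = self.x @ self.betas
        self.plot_predictions()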
*Note the extra indentation due to the fact that this method is part of the Model class
Our (very) simple method makes use of vector multiplication to obtain the “predicted” values — y_hat.
As an additional (and optional) touch, we can add a method to visually output the prediction (Note: in the current form, it will only work for a univariate regression with an intercept):
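A sketch of what the plotting gist likely contains (the exact styling is an assumption):

    # (inside the Model class)
    def plot_predictions(self):
        # column 0 of self.x is the vector of 1's, column 1 the regressor
        plt.scatter(self.x[:, 1], self.y, label='observed')
        plt.plot(self.x[:, 1], self.y_hat, color='red', label='predicted')
        plt.legend()
        plt.show()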
Finally, let us execute the methods we created using sample data (if you don’t want to generate a graph, delete the call to plot_predictions() in line 7 and instead, add the return self.y_hat line at the end of our .predict() method):
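The driver gist is not reproduced here either; the sample data below is purely illustrative:

np.random.seed(42)
x = np.random.uniform(0, 10, 50)
y = 2.5 * x + 4 + np.random.normal(0, 2, 50)  # noisy linear data

model = Model()
model.fit(x, y, intercept=True)
print(model.betas)  # should land near [4, 2.5]
model.predict()     # computes y_hat and draws the plot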
If you followed everything along — you should have successfully computed the coefficients and have generated something similar to this:
If you’d like to play around with the code, the full version is available as a GitHub repository here.
Needless to say, this is a very basic exercise, which, nonetheless, efficiently illustrates where the OLS betas come from and what their (mathematical) significance is. Figuring this out has been immensely helpful in my further studies. Lastly, the statistical packages I have referred to earlier, in addition to the computed coefficients, typically calculate a variety of other measures — statistical significance, confidence intervals, R squared and so on. These can all be calculated numerically, however I would advise to rely on the (well-tested) libraries once you’ve understood the underlying concepts. | https://towardsdatascience.com/manually-computing-coefficients-for-an-ols-regression-using-python-50d8e413de | ['Roman Shemet'] | 2020-06-08 15:44:00.504000+00:00 | ['Python', 'Regression', 'Numpy', 'Ols', 'Data Science'] |
A/B Testing and its Pillars | A/B testing is a data-driven method for finding out which of A and B wins. It's a testing strategy where we can test two different hypotheses and learn which one performs best at improving conversion rates. This testing method is one of the tools a company needs to make better decisions and accelerate its growth. But there are multiple pitfalls that tag along if we don't have enough data, don't use proper statistics, or don't know when to declare the winner of the test.
The biggest promise of A/B testing experimentation is that it puts effectiveness first. Whether the result is positive or negative, measuring the impact of what we've proposed is what matters. The CXL institute course helps us understand the most important pillars of an A/B testing method.
The pillars of A/B testing are:
Introduction
Planning Pillar
Execution Pillar
Results Pillar
Outro
Bonus
Each of these pillars is further classified into multiple sections. Let’s dive into the gist of all the most important factors that revolve around an A/B test.
Intro
The Intro covers what we need to keep in mind before jumping into A/B testing mastery. It's really important to understand when and why marketers started adopting A/B testing methods. It's like reverse psychology: some of us know what an A/B test is and why we're using it, but knowing when and why the method was adopted deepens our understanding of when we should choose to run a test. So, the introduction section mainly includes 3 parts, and they are:
History of A/B testing
The Value of A/B testing
When do we have to use it?
Starting with the history: it was in 2010 that VWO and Optimizely launched their A/B testing software, and all of a sudden every digital enthusiast could quite easily launch their own digital experiments. Thereafter, the industry emerged with new and progressive techniques. When it comes to the value of A/B testing, the real value of the test is that it helps you make better, more trustworthy decisions. And before we decide when to use the test, it's really important to know whether we have enough data to conduct the A/B test. "Data rules above all". One other thing to keep in mind while using the test is to make sure that your deployment does not have a negative impact on the Key Performance Indicators you're measuring.
Planning pillar
The planning pillar is all about hypotheses, KPIs, research, power and significance. The effective model adopted in the planning pillar is the ROAR Model, which signifies Risk, Optimization, Automation and Re-think. The ROAR model provides information about how much data we should have before proceeding with an A/B test.
For example: just forget about running an A/B test if you're below 1,000 conversions per month. It would be really difficult to fish a winner out of a pond that small.
First of all, the most important part of the planning pillar is the Hypothesis Setting. If there isn’t a hypothesis, there isn’t a valuable test. A hypothesis gets everyone aligned to describe the problem, have a proposed solution and predict the outcome. This can also save time on having discussions during and after the experiment. The hypothesis makes sure that you’re going in the right direction doing research, experimentation and coming up with general theories.
Next up: knowing which kind of KPI to pick, and when, is something to keep an eye on. Do we know whether it's clicks, transactions or lifetime value that should determine the winner? What about the overall evaluation criterion? A KPI is a goal metric and is not generally used directly in A/B testing; in a test, we're simply trying to change behaviour to get more clicks, transactions and so on.
So, let’s see how the KPI cone looks like.
The least important is "Clicks" — it's not a big deal to change clicking behaviour. "Potential Lifetime Value" is the golden metric, which gives you an indication of a customer's lifetime value.
Once we finish analysing the KPIs, it's important to do some research to improve the quality and raise the winning percentage of our A/B tests. This includes the ultimate "6V Research Model", which is used to generate user behaviour insights. The 6 V's are:
View of the Customer — The Analytics behaviour Voice of the Customer — Talking to the customer service Versus — Finding out the competitors Validated — Getting insights of previous tests Verified — Searching through scientific pieces of literature Value — Knowing the company
I’ll dive into a detailed version of the 6V Model in a different chapter. So, what is the role of Power and Significance here?
Statistical power is the likelihood that an experiment will detect an effect when there is an effect to be detected. If we've created something that really has an effect, isn't it logical to make sure that effect actually gets detected? The planning pillar thus gives us a success formula: relevant location of the test × relevant hypothesis × chance of effect.
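To make power concrete: before launching, you can estimate how many users each variation needs. This sketch uses statsmodels; the numbers — a 5% baseline conversion rate and a hoped-for lift to 6% — are illustrative assumptions, not figures from the course:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# assumed baseline: 5% conversion; smallest lift we care about: 6%
effect = proportion_effectsize(0.05, 0.06)

# sample size per variation for 80% power at a 5% significance level
n = NormalIndPower().solve_power(effect_size=effect, power=0.8, alpha=0.05)
print(round(n))  # roughly 8,000 users per variation
```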
Execution Pillar
Once we have the planning done, there are certain things to keep in mind to execute the experiment, and they are:
Designing an A/B test
Developing an A/B test
Quality-assuring an A/B test
Configuring the A/B test in your tool
Calculating the length of your A/B test
Monitoring the A/B test
The change you’re making should be visible, scannable and usable. Do not use “What you see is what you get “ code editor because it’s an experiment. If it works, it works. So, why measure in your own analytics tool? It’s because its the best implementation analytics solution and you have the ultimate control over your experiment.
Results Pillar
The results pillar helps you understand when your A/B test is a winner and what to do when it is not. Before starting to analyse the outcome, we need to know the following details:
The test duration
How to isolate the test population — does the test affect the whole population or only a subset of users?
The test goals, and how to isolate those users
Once we start the analysis,
Analyse in the analytics tool and not in the test tool
Avoid sampling
Analyse users and not sessions
Analyse both users who have converted and total conversions
Check that the population of users that have seen the test is about the same per variation.
When we have the analysed report, we need to focus on which information is valuable to present to which group of people. We should be able to create an A/B test outcome template that leads to action — which means the test should have had an impact.
Outro
As we near the end, one of the key aspects to keep in mind is maintaining a balance between quantity and quality while scaling up A/B testing — both are central to a centre of excellence. We need to prioritize what to work on, and understand when to increase the number of tests and what optimization we need to apply to each test. Optimization is all about effectiveness, while an organization focuses on efficiency. It's our responsibility to combine effectiveness and efficiency to put the best out there.
Bonus
This section is about what happens if we have more data to use. If there's a 20% increase in the number of users, would it affect the test?
Based on some analytics, it's estimated that if there's an increase in the number of users, the conversion rate might well be uplifted, and there would be more noise among the returning users. (Returning users are people who visit your website for the first time, leave after performing a particular action, and then return to the same website later.) | https://niksas.medium.com/a-b-testing-and-its-pillars-f17b13e16c0b | ['Nikitha Sasi'] | 2020-12-16 06:44:42.553000+00:00 | ['Growth Marketing', 'Marketing', 'Ab Test', 'Testing', 'Strategy']
Tables, not spreadsheets | From the very first napkin sketches of Coda (and yes, there were actual napkin sketches passed around at coffee shops, bars, or whatever office we happened to be squatting in), two things were abundantly clear: (1) we were obsessed with the idea of building a doc as powerful as an app and (2) at the center of that design had to be a table.
And when I say “table”, I don’t mean a collection of adjacent cells in a Google Sheet or Excel file, nor the “tables” you might find in Microsoft Word or Google Docs that allow you to align columns of text vertically or horizontally. We knew that real applications needed real tables ー tables where rows and columns are distinctly different. We needed the relational database table ー one that has the ability to reliably join against other tables at scale through primary keys, foreign keys, and indices.
Benedict Evans wrote a piece last summer about Machine Learning that articulated some of the relevant history around relational databases well:
Why relational databases? They were a new fundamental enabling layer that changed what computing could do. Before relational databases appeared in the late 1970s, if you wanted your database to show you, say, 'all customers who bought this product and live in this city', that would generally need a custom engineering project. Databases were not built with structure such that any arbitrary cross-referenced query was an easy, routine thing to do. If you wanted to ask a question, someone would have to build it. Databases were record-keeping systems; relational databases turned them into business intelligence systems. This changed what databases could be used for in important ways, and so created new use cases and new billion dollar companies. Relational databases gave us Oracle, but they also gave us SAP, and SAP and its peers gave us global just-in-time supply chains - they gave us Apple and Starbucks. By the 1990s, pretty much all enterprise software was a relational database - PeopleSoft and CRM and SuccessFactors and dozens more all ran on relational databases. No-one looked at SuccessFactors or Salesforce and said "that will never work because Oracle has all the databases" - rather, this technology became an enabling layer that was part of everything.
As Benedict identified, relational databases became the center of nearly all applications. So if we were going to build docs that could grow into apps, a solid foundation of relational data was a must. The primary question then was how were we going to do that. Did we have to abandon the spreadsheet model entirely, or could we keep aspects of it while introducing table-like concepts? Even subtle differences in these options would end up having pretty big implications on the formula language, on our ability to create a compelling mobile experience, and on the familiarity of the surface itself and the backwards compatibility with other tools.
A primer on spreadsheets
To really put the choice in context, you have to understand how spreadsheets actually work. At Coda, we’ve always been humbled by the power and flexibility of a spreadsheet ー regularly making the case that spreadsheet formulas are the most popular programming language in the world. However, most spreadsheet users don’t realize that there’s an alternate to the common A1:B14 style of referencing ー known as R1C1. You can read a bit more about the details of it on excel champs, but R1C1 is kind of like the metric system for spreadsheet formulas. By using R1C1 notation, it becomes easier to see how spreadsheets actually work ー based on the relative position of a cell, not just its absolute position (e.g. in B2). Said differently, when you write a spreadsheet formula of =B2 in cell C3, the underlying formula is actually an offset: =R[-1]C[-1] → “give me the value one row and one column before me”. This is why when you copy and paste the C3 cell to D3, the formula remaps to =C2 ー you’re actually pasting the same underlying value of the formula (offset by one column and one row), it just has a different display name.
Some simple spreadsheet formulas using R1C1 notation
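(The screenshot itself isn't reproduced here; as a rough sketch of what it showed, here are a few A1-style formulas next to their R1C1 equivalents:)

```
in cell C3:   =B2            →  =R[-1]C[-1]            (one row up, one column left)
in cell D3:   =C2            →  =R[-1]C[-1]            (the same underlying formula)
in cell A11:  =SUM(A1:A10)   →  =SUM(R[-10]C:R[-1]C)
```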
To build docs as powerful as apps, we knew we’d need a powerful, yet flexible formula language to work over your data — like in a spreadsheet. But could we do that if we deviated too far from the spreadsheet grid? Ultimately, the options boiled down to two primary questions related to how we’d make that formula language work:
Where do formulas live? Do formulas live in cells like they do in a spreadsheet, or are they defined at the column or row level? Would we support formulas on the canvas?
What is the reference model? Do formulas use an object name (e.g. the name of a table or column) or refer to a location (e.g. the position of a cell ー is it geometric)? Is there a workable hybrid where you point to a location, but pretty print with the names of the object in that spot (a la Apple's Numbers)?
The original sketch of the options for formulas in tables vs spreadsheets
In working through the options, one of the biggest problems that comes up with geometric references is that you end up with some mucky looking formulas. Paraphrasing Rob Collie, an Excel muse we talked with after starting Coda, imagine you told a software developer that they weren’t allowed to name their variables ー they had to use system-generated variable names (e.g. A4). That would seem crazy. But in spreadsheets, we oddly accept it. So if you were trying to calculate the total cost for a list of inventory, rather than write something legible like = Price * Quantity, we’re okay with = B2 * C2.
There have been a few attempts to circumvent that awkwardness (e.g. named ranges, or the behavior found in Apple’s Numbers), but because the underlying substrate of a spreadsheet is a geometric grid, most users still rely on their system-generated variables like B2 or A14.
But by leaning into tables, rather than the spreadsheet grid, we’re able to unlock the real magic of relational tables that is not possible in geometric-based systems.
Layouts and Views
One of the primary benefits of a relational database powering your app is writable views ー the ability to see the same data in difference places, across multiple layouts, and still be able to edit the values. With tables, Coda brings that ability into your doc surface ー like having a table of names and dates and being able to edit the date in a table, or by dragging and dropping in a calendar view and having all the values stay up to date.
A simple table with a chart view and a calendar view
Humane Formulas
Another example is what we’re able to do with formulas now that we’re not bound to a grid. We can put formulas anywhere ー in the table, in the writing surface itself, in configuration panels ー all using the same namespace. That means from anywhere, you can refer back to your tables and data in more legible ways. In Coda, the formula is easy to read aloud from left to right ー ‘Take the tasks table, and count if the done column is equal to true: =Tasks.CountIf(Done = True). In spreadsheets, there are multiple levels of indirection ー you have to read from inside out and know that A:A represents a data set with some meaning. =CountIf(A:A, ‘TRUE’)
A canvas formula that counts the number of open tasks in the table
Further, it also means our formula builder auto-complete can be much smarter since we know the type of data being used and which formulas can go with that data.
Column types enable our formula autocomplete to make type specific recommendations
Relationships and the Performance of a Database
Finally, one of the most visceral benefits of tables over spreadsheets goes back to the point made by Benedict Evans in the article I referenced earlier ー it unlocks the abilities of relational database tables in your document. Gone are the days of VLOOKUP(…) or INDEX(MATCH(…)) and the embarrassment that ensues when you realize you didn’t specify an exact match in your formula.
In Coda, a lookup column defines a relationship with another table ー and then you’re able to simply project the fields of the underlying row (rather than count the number of columns to the right of your VLOOKUP range).
VLOOKUP in a spreadsheet
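(That screenshot isn't shown here either; with hypothetical table and column names, the contrast looks roughly like this:)

```
Spreadsheet:  =VLOOKUP(B2, $F$2:$H$20, 3, FALSE)    <- count columns, remember the exact-match flag
Coda:         =thisRow.Team.Owner                   <- project a field of the row the lookup points at
```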
Say you want to pull in the person responsible for a given team into a task list. In a spreadsheet, you’d have two “tables” at arbitrary locations, and would need to create the connection between the two with a VLOOKUP. If you add a new task, be sure to copy the formula down. If someone reorders the columns, you’re toast. | https://blog.coda.io/tables-not-spreadsheets-c3b727ae79ae | ['Matt Hudson'] | 2019-08-02 18:04:38.264000+00:00 | ['Productivity', 'Product Management', 'Coda', 'Excel', 'Tech'] |
WATCH 2020-Leicester City vs Manchester United: Premier League — live! | Welcome To Watch Soccer 2020 Live Streaming Online Full HD Coverage On ESPN, CBS, FOX, SKY, TNT, NBC SN, TV, TBS Or Any Tv Show Channels Online, Here You Can Easily Watch Your All Favorite Soccer Match 2020 Live TV Show Free Online On Any Device as Desktop, Laptop, notepad, tab, smart phone, Mobile, iPhone, iPad, iPod, Apple, Mac Book, And all others.
::::Schedule::::
Soccer Live Streaming 2020
Date: Today
Live/Repeat: Live
Watch this Leicester City vs Manchester United soccer match live stream online with free 2020 coverage on ESPN, ESPN3, SONY SIX, FOX SPORTS, STAR SPORTS, HBO, ABC, NBC, ESPN2, TBS or any TV channel online. Here you can easily watch the Leicester City vs Manchester United match live stream on any device: desktop, laptop, notepad, tablet, smartphone, mobile, iPhone, iPad, iPod, Apple and all others. So keep watching and enjoy your time. | https://medium.com/@smsoykothasanpab9982/welcome-to-watch-soccer-2020-live-streaming-online-full-hd-coverage-on-espn-cbs-fox-sky-tnt-8af11370bd9d | [] | 2020-12-26 13:43:08.097000+00:00 | ['Soccer']
Umbilical cord care Do's and don'ts for parents | An infant's umbilical cord stump typically falls off within about two weeks after birth. In the meantime, treat your baby's umbilical cord stump gently.
Can’t help thinking about how to really focus on your infant’s umbilical rope stump? Follow these tips to advance recuperating.
Why your infant has an umbilical cord stump
During pregnancy, the umbilical cord supplies nutrients and oxygen to your developing baby. After birth, the umbilical cord is no longer needed — so it's clamped and cut. This leaves behind a short stump.
Dealing with the stump
Your infant’s umbilical line stump gets out and at last falls dry — typically inside one to three weeks after birth. Meanwhile, treat the zone tenderly:
Keep the stump dry. Parents were once instructed to swab the stump with rubbing alcohol after each diaper change. Experts now say this might kill bacteria that can help the cord dry and separate. Instead, expose the stump to air to help dry out the base. Keep the front of your infant's diaper folded down to avoid covering the stump.
Stick with sponge baths. While there's no harm in getting the stump wet, sponge baths may make it easier to keep the stump dry.
Let the stump fall off on its own. Resist the temptation to pull the stump off yourself.
Signs of a problem
During the healing process, it's normal to see a little blood near the stump. Much like a scab, the cord stump might bleed a little when it falls off.
However, contact your baby's doctor if the umbilical area oozes pus, the surrounding skin becomes red and swollen, or the area develops a pink moist bump. These could be signs of an umbilical cord infection. Prompt treatment is needed to stop the infection from spreading.
Also, talk to your baby's doctor if the stump still hasn't separated after three weeks. This might be a sign of an underlying problem, such as an infection or an immune system disorder. | https://medium.com/@babycare7171/umbilical-cord-care-dos-and-don-ts-for-parents-721b261d14ab | [] | 2021-04-09 19:56:40.923000+00:00 | ['Baby Care', 'Umbilical Cord', 'Baby', 'Baby Boomers']
A Retrospective | As I review the years, the common theme is a push and pull between better care of myself (2016, workouts, meditation, less alcohol; 2017, more water), personal life balance (2017, double-book less; 2019, being outdoors), kindness to others (2016, less judgement), appreciation of the little things (2019, new music), stopping to smell the roses, etc. and a desire to achieve and accomplish more (2017, lead 3 investments, higher conviction). I wonder if I’ll ever find equilibrium here. I worry that I may not, but I’m getting more comfortable accepting that.
I see over the years a desire to give myself space and time to be creative (2017, writing; 2019, writing, live music) in addition to traditionally productive, as well as to be static and enable recovery (2019, solitude). I see the outdoors, music, and poetry as common denominators. I also see a desire to give more to others outside of the daily professional sphere within which I live (2017, volunteering; 2019, creating playlists), as well as to learn more about them (2017, world religions).
These desires are compounding and getting stronger and stronger with each passing year. Each year passes faster than the last and the resolutions feel like they come and go, but I’m starting to take note of the trend lines.
Perhaps this is why I didn’t want to write a 2020 commitments post, I felt more prepared to write something that felt like a ‘lifetime’ commitments post. A series of trends that I realize are becoming my values. An aggregation of small choices that make up the way I choose to live. Long-term choices that I hope to ingrain forever in my lifestyle, rather than short-term challenges against myself.
As a nod to the specifics, I’m proud of myself for taking to action on some of these ideals, for what better reward than having your resolutions become your habits. Some of these things include: more long form content, more frequent short workouts, less alcohol, more water, learning more about world religions, and being outdoors. I have fallen short on a morning routine, meditation, sharing my poetry, cycling, and finding mentorship — none of these things have become habit for me yet. And I fall somewhere in between on professional accomplishments, volunteering, finding community, and releasing judgement.
Here’s to a new decade and another iteration around the ☀️ :)
****************************************************************
If you found this interesting, please gimme some applause on Medium and sign up here to get my posts in your inbox! | https://arteenin.la/a-retrospective-aba390a91303 | ['Arteen Arabshahi'] | 2020-11-30 02:03:36.291000+00:00 | ['Productivity', 'New Years Resolutions'] |
Data Transfer Pricing: A Surprising Item on Your AWS Bill | AWS’s Data Transfer costs and how they can impact you
If you’ve used AWS, then Amazon must have already charged you for Data Transfer. These costs are what AWS charges to transfer data into or out of AWS. Data transfer could be between AWS and the Internet and within the AWS cloud. If you fail to consider these costs right from the beginning, then you might find yourself with an unpleasant surprise when the bill pops up. In an interview with TechRepublic, “Cloud Economist” Corey Quinn refers to AWS data transfer prices as “cloud’s Achilles’ heel”. According to him, we are still paying 1998 prices for data transfer.
AWS data transfer prices vary for different services and regions. AWS charges you for transferring data into or out of one AWS service to another one. There are charges for transferring data across services within the same region and for transferring it outside the region. So if you were to transfer data across different regions, the cost would be extremely high.
Now you must be wondering how much could these costs even be to be writing an entire article about it? Well, that thought is justified. Large-scale multinational companies end up paying a million dollars a month. But that’s not it. There are relatively small-scale startups that suffer fees close to a million. There are data transfers to and from AWS services that cost more than others. So, the cost of transferring data, say, OUT from EC2 to the Internet, ends up accumulating substantially.
Let’s take an example of data transfer fees for the AWS storage platform S3. For S3 buckets situated in the US West (Oregon) region, the first GB/month incurs no cost and the next 9.999 TB/month costs $0.09 per GB. But if the S3 buckets are in the South America (São Paolo) region, the first GB/month is still free, but the next 9.999 TB/month costs $0.25 per GB.
Tips on how to reduce data transfer costs
While you might not want to focus entirely on completely cutting down your AWS transfer costs, but you might want to reduce them sensibly by following the steps below:
Control data volumes by limiting the size of data transfers and by using storage efficiencies with a data management platform.
Keep all traffic within a region, if you can. And in case of traffic leaving the region, pick the lowest transfer rate region depending on your business’s needs.
All traffic is free within the same AZ and the same VPC, using AWS Private IPs. So, make sure your resources are in the same AZ and the same VPC, using private IPs when possible.
Dedicated NAT devices incur a rate per GB on top of the applicable data transfer out rates, so try avoiding these. Utilize the VPC Internet Gateway NAT functionality instead and assign these instances public IPs. You can also utilize VPC endpoints.
Across the board, there are higher data transfer costs with public IP or Elastic IP addresses as compared to a private address. Using private IP addresses frequently reduces costs drastically.
Opt for billing alarms by subscribing to an AWS bill monitoring service. These include Billgist, Metricly, and Yotascale. These monitoring services let you keep an eye on your AWS billing expenses.
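Alongside those third-party services, AWS's own CloudWatch billing alarms cover the basics. A boto3 sketch — the alarm name, threshold and SNS topic ARN below are placeholders, and the account must have "Receive Billing Alerts" enabled first:

```python
import boto3

# billing metrics are only published in us-east-1
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='monthly-bill-over-100-usd',   # placeholder name
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,                            # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,                         # alert once the month's bill passes $100
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts'],  # placeholder SNS topic
)
```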
Choose Amazon CloudFront to transfer data out to Internet users. Up to 50 TB of data transferred out to the Internet will be less expensive through Amazon CloudFront than transferring out from AWS regions directly, and with less latency.
There are some distinguishing pricing principles for the following services, so you need to ensure that you double-check these: Amazon Neptune, Amazon MSK (Managed Kafka), Amazon CloudSearch, Amazon ElastiCache, Amazon ElasticSearch.
Data transferring fees might end up being an unpleasant surprise on your AWS bill. Amazon might charge you differently for transferring data into and out of your AWS database, depending on the source and destination. So, it’s a good idea to follow the tips mentioned above to ensure you aren’t spending, in terms of resources and money, more than you require.
You can read more on how to spot a surge in overall AWS cost in How to Monitor AWS Costs. | https://medium.com/billgist/data-transfer-pricing-a-surprising-item-on-your-aws-bill-637743b3d2ae | ['Nada Gul'] | 2020-12-28 15:58:26.988000+00:00 | ['AWS', 'Tips', 'DevOps', 'Web Development', 'Cloud'] |
Brand identity explained | Symbolism is usually the first thing that comes to mind when people think about a brand's visual identity — e.g., the swoosh, the red cross, the golden arches, and the bitten apple. It's also just one small piece of the picture. And while memories of a brand are driven by the quality of a product or service, that quality is frequently backed up by a range of designed elements that are relevant to what's offered.
Brand identity is tangible and appeals to the senses. You can see it, touch it, hold it, hear it, watch it move. Brand identity fuels recognition, increases differentiation and makes big ideas and meaning attainable. Brand identity takes various elements and unifies them into whole systems.
“Design plays an essential role in creating and building brands. Design differentiates and embodies the intangibles–emotion, context, and essence — that matter most to consumers.” Moira Cullen — Senior Director, Global Design The Hershey Company
We’ve all heard that on an average day consumers are exposed to 6,000 advertisements and each year to more than 25,000 new products. Thankfully brands help consumers cut through the increase of choices available in every product and service category.
My process of designing brand identity in 4 parts:
Discovery & Research — Together with the client, we run collaborative workshop sessions where we gain a deep understanding of the business and users, which helps develop a successful brand experience. This approach accelerates the strategic process and brings clarity throughout the entire project.
Developing Brand Strategy — All the information gathered during the workshop is synthesized. After achieving agreement about what’s been done already we develop a brand statement, key messages, brainstorm THE BIG IDEA, write a creative brief and visualize the future using mood boards. The verbal side of a brand is as much important as the visual.
Designing Visual Identity — My design process always starts with designing a logo. After a few weeks of intense explorations, I choose two concepts that work the best and move on to designing brand identity, exploring applications, finalizing brand architecture and then presenting the visual strategy to a client. After achieving agreement at this stage we move forward.
Finalizing the design — After the chosen route has been refined, we do the due diligence to assure that any designed symbols don’t infringe upon existing copyrights. If everything is fine, we move onto the guidelines phase. The guidelines are there for two reasons — consistency & creativity. Consistency is about showing the core toolkit elements such as logo, color codes, fonts, whereas creativity is about showing how those elements come together, how they’re used.
Reasons to invest in brand identity
1. Make it easy for the customer to buy. Compelling brand identity presents any company, any size, anywhere with an immediately recognizable, distinctive professional image that positions it for success. An identity helps manage the perception of a company and differentiates it from its competitors. A smart design system conveys respect and delights the customer by making it easy to understand features & benefits. It creates loyalty and, most importantly, builds trust.
2. Make it easy for the sales team to sell. Whether it is the CEO of a global conglomerate communicating a new vision to the board, a first-time entrepreneur pitching to venture capital firms or a financial advisor creating a need for investment products, everyone is selling. Strategic brand identity works across diverse audiences and cultures to build awareness and understanding of a company and its strengths. An effective identity seeks to clearly communicate a company's unique value proposition. The consistency of communications across various media sends a strong, trustworthy signal to the customer about the laser-like focus of a company.
3. Make it easy to build brand equity. The goal of all public companies is to increase shareholder value. A brand or a company's reputation is known to be one of the most valuable company assets. Small companies and nonprofits also need to build brand equity. Their future success depends on building public awareness, defending their reputations and sustaining their value. Strong brand identity will help build brand equity through increased recognition, awareness and customer loyalty, which in turn helps make a company more successful. Managers who take every opportunity to communicate their company's brand value and what the brand stands for are building a precious asset.
How long will it take?
Every business has a sense of urgency, regardless of the size and nature of the company. There are no shortcuts to the process, and reducing steps may be harmful to achieving long-term goals. Developing an effective and sustainable identity takes time. Every project is unique, so there is no universal answer to how long it will take. From my personal experience working with SMEs, brand identity development takes somewhere between 2–3 months, whereas a rebrand of a big corporation could take up to a year to complete.
Measuring the impact
Brand identity systems are a long-term investment of time, human resources, and capital. Each positive experience with a brand helps build its brand equity and increases the possibility of repeat purchasing and lifelong customer relationships. A return on investment is achieved by making it easier for the sales team to sell and more appealing for the customer to buy. Clarity about the brand drives success.
Decision-makers frequently ask, “Why should we make this investment? Can you prove to me that it has a return?”. It’s difficult to quantify the impact of a new logo, a better brand architecture or an integrated marketing communications system. Companies must develop their own measures of success.
Those who don’t expect instant results and think in the cumulative long term, understand the real value of incremental change and focus.
Brand Strategy + Design
[email protected] | https://medium.com/@kasparas.sip/brand-identity-explained-e2037fb87669 | ['Kasparas Sipavičius'] | 2019-11-21 17:57:45.970000+00:00 | ['Process', 'Design', 'Brand Identity', 'Branding', 'Brand Strategy'] |