3 Simple Steps to a Harmonious Household
Thankfully, we’ve escaped the days when women were confined to housekeeping duties while men got to endure boring meetings in cold rooms. Let’s face it, no one really wins there. Still, everyone deserves the chance at a fulfilling career, which means the house often becomes a neglected home, waiting patiently while everyone works to pay its costs. If houses were sentient, they’d likely feel a little forgotten, longing for the days when someone was there to enjoy and take care of them. Instead, the residents hustle all day. Even the kids are busy with school and extracurriculars. At best, the house might enjoy the presence of a sleeping cat who coats it in shed hair, or a plucky betta who fights with his reflection. Homemaking seems to be a lost art, and that’s not feminism’s fault. Most of us, of all genders, are simply too busy to spend time decorating, cooking, cleaning, and so on. When we need to relax, we simply shove the laundry to the side of the couch and ponder the impending apocalypse. I’m sure I sound like an angry boomer right now, but last year, I gained a new perspective on what homemaking entails and why it matters. That’s because I started working from home. I quickly became frustrated with the perpetual chaos, born of my hectic schedule and my husband’s backbreaking job. Neither of us had the time or energy to devote to homemaking. Still, the house and its many needs were a point of contention. My husband didn’t quite see the need to wipe down the backsplash or dust the blinds, and I couldn’t seem to understand that cleaning doesn’t need to happen at midnight. I wanted a clean, comfortable home, even though I never made time to relax in it anyway. Clearly, my priorities were out of alignment. When my husband and I moved to Orlando, we had to trade our spacious home for a tiny apartment that cost twice as much in rent. We downsized a lot but quickly discovered that even the basic living essentials had to fight for room. With so little space, the mess compounded. At times, our floor resembled an obstacle course. I had to make a change. And so we spent a couple of days cleaning and rearranging. It looked so nice when we were done that I pledged to never let it become chaotic again. Along the way, we made some decisions that ultimately worked out for the better. Here’s what we did.
https://rachelwayne.medium.com/3-simple-steps-to-a-harmonious-household-d632fa6d5a24
['Rachel Wayne']
2020-01-17 20:34:38.442000+00:00
['Organization', 'House', 'Relationships', 'Lifestyle', 'Home']
Akuntabilitas dalam Team Work (Accountability in Teamwork)
https://medium.com/@iniakutama/akuntabilitas-dalam-team-work-eccbeb68ea6b
['Edwin Al Pratama']
2020-12-17 02:07:30.473000+00:00
['Team Building', 'Kpi', 'Teamwork', 'Team Collaboration', 'Team']
A Path to Alberta’s Autonomous Valley
The new decade marks the cusp of a systemic change: the transition to a digital economy. Alberta is facing a difficult period of economic instability, compounded by the global COVID-19 pandemic. Yet during difficult times, opportunities can arise, such as the emergence of a more robust technology industry. Alberta is currently strengthening its foundation through its innovation corridor, which spans from Edmonton to Calgary. That corridor could hold the path to building a stronger Alberta on the global stage and to embracing these opportunities. The vision: Alberta is already starting its transition to becoming a clean-technology hub. Our province should embrace the possibilities opened up by rural internet expansion. The next ten years will be part of the fourth industrial revolution, the beginning of a period of rapid digital transformation. To build an Alberta that uses technology and innovation as enablers of prosperity in new industries and existing ones, there needs to be collaboration among the people of Alberta. Whether you’re in the heart of the cities or in a rural community, technology can improve your quality of life and, more specifically, bring economic opportunity and prosperity to Albertans of all ages. We can pioneer new sustainability models in agriculture, energy, entrepreneurship, public services, natural resources, and privacy. It’s a possibility we can start working towards today, and here’s how. Strengthening the data in our cities: Smart cities use IoT (Internet of Things) technologies to help improve standards of living and to use city resources efficiently. Data is the currency of the digital economy. Data such as map resources, business venues, neighbourhood parking, and anonymized transit ridership could help build better communities. Introducing open data initiatives in Edmonton and Calgary could spur innovation and create new enterprises that use this data to create value. Social enterprises adopting new technologies could create incredible social value that benefits our communities and Alberta. We also depend heavily on free-market initiatives and values; consortiums of organizations should drive public-private partnerships that engage the government and the residents of Alberta. The expansion of digital government identification and confidential health records could help extend medical services throughout the province. We can start initiatives to build a digital environment for residents with digital ID, healthcare cards, and public services. However, as we’ll discuss below, such digitalization requires strong privacy laws and the decentralization of individual data. Keeping our rights of freedom, our right to privacy, and our democratic foundations will be crucial to how we structure our smart cities. Utilizing a new type of worker: During our ten-year timeline, we will see the advancement of artificial intelligence in both digital and physical environments. Our current model of capitalism creates value in the form of profit through private and public enterprises. These models of contemporary society will become unsustainable under an exponential rise in artificial labour. The current model buys work at market wages without any standard for equity opportunity, and the divide between wage earners and equity earners will naturally be exacerbated by automation. As individuals under capitalism, our incentive is to obtain and grow wealth. However, Alberta is in a position to change the status quo.
For the first time, we can create value using AI, which has no incentives of its own to fulfill; artificial intelligence will be one of the foundations of the digital economy. For our capitalist model to adjust, there needs to be slow, systemic change in our markets and governance, and Alberta should lead that charge. We should prepare by taking advantage of our existing and emerging industries to help build a diverse economy. Technology will serve as the foundation for providing business services to data-driven organizations. As Adam Legge put it in April: “Technology is not necessarily a sector on its own – it’s an enabler of all other sectors. Jurisdictions that are able to grow and develop their technology and innovation capacity are going to be the ones that are high-growth jurisdictions.” We should develop a technology ecosystem that enables not just existing industries but enterprises as a whole. Automating operations ranging from financial services and customer relationship management to strategy and sustainability will complement many of our jobs and increase productivity. The rise of data usage and the growing availability of digital infrastructure will also spur the slow collapse of many of the barriers to venture creation. Whether you are an individual looking to start a small business or an entrepreneur looking to solve a global challenge, technology makes it easier to create that organization, and Alberta should further pioneer those technologies and organizations. Engaging in systemic change: Our models of society are the foundations of our ecosystems and of the individual models through which we view the world, and we are capable of changing these systems. Some systemic models benefit or hinder economic, social, and environmental factors, and we have recently become more aware of our systems through advances in knowledge and technology. As we advance our society, there will need to be slow, moderate systemic change that allows for economic growth. Social expectations of how we do business and understand organizations are changing, and we can contribute new initiatives that enable social innovation. A digital era means the steady adoption of new regulations to help sustain economic diversification. Environmental and social governance is an already emerging market force, and over the next decade there will be many more economic pressures, such as accounting for all of an organization’s stakeholders: employees, customers, and the economic, social, and environmental systems surrounding its operations. Sustainability should be our priority, and we’re already making strides in the energy industry. Systemic change is crucial over the decade to ensure the capability and prosperity of our free market, and to ensure that once automation gains mass adoption, our society is ready to take steps onto a sustainable path of growth and more opportunity for individuals. Keeping our young people and their talent is going to be crucial to realizing our goals. Our youth are driven not only by economic factors but also by social and environmental ones. To attract and retain young talent, we need to embrace change in our mental models, advancing regulations that are good for our technology industry and for the sustainable conservation of our natural landscapes. So how do we change our regulations? First, focusing on privacy, we need to slowly adapt our privacy laws to better protect consumers’ data.
We could even go as far as enabling a data-driven consumer economy, one in which private-public partnerships ensure that data is decentralized back to individuals. Exploiting consumer data, collected from individuals at no cost, is unsustainable in a healthy market economy. The market demand for data is growing, and ensuring a fair deal for consumers is critical to creating a sustainable market environment. The second focus is on better social enterprises. Social enterprises have grown out of the urge to solve social issues while adapting market flexibility to earn profit, scale operations, and sustain self-sufficiency, sprouting from the itch for further social development within economies globally. Alberta is in a suitable position to increase social innovation while transitioning to a digital economy. Artificial intelligence will further enable new solutions that provide value to our economy, enterprises, and individuals throughout Alberta. Regulation that establishes a provincial jurisdiction for social enterprises, together with supporting initiatives, could help these organizations. Market-based social enterprises could complement our public services with sustainable operations and tackle new challenges efficiently. Increasing business investment and confidence is a foundational piece of obtaining progress, so keeping a competitive business environment is critical to attracting entrepreneurs, startups, and established enterprises. Our province needs to construct social enterprise regulation, one that tells our communities, the rest of Canada, and the world that Alberta is ready to lead in market-based social innovation. Conclusions: Whether we’re ready or not, the digital economy is coming, and an enthusiastic approach could put us in a position to develop tomorrow’s technological capabilities. The digital transformation will shift mental models, systems, and institutions. Our situation can only get better if we collaborate to build and sustain a new age of diversified economic expansion.
https://medium.com/@darrylhuet/a-path-to-albertas-autonomous-valley-d20e8055c909
['Darryl Huet']
2020-09-02 21:23:09.606000+00:00
['Alberta', 'Smart Cities', 'AI', 'Entrepreneurship', 'Blockchain']
How Haryana is working with cab aggregators to solve the unemployment puzzle
Since July 2018, more than 24,000 youth in Haryana have been onboarded as drivers by two cab aggregators, Ola and Uber. This is the result of a collaboration between the Government of Haryana and these aggregators to increase employment through a focused strategy. The essence of this partnership is that the government and the private sector can collectively generate employment under a framework that ensures mutual benefit and no risk. In this case, the government wanted to increase employment, while Ola and Uber were looking to expand the market for their services and hire more people. While the objectives of both entities are intuitive enough to comprehend, what was missing was making this linkage and bringing them to the same table. Samagra’s Saksham Haryana Skills and Employment Cell, set up in the office of the Chief Minister of Haryana, conceptualized this partnership. The team was instrumental in forging this link and helping to make this initiative, Saksham Saarthi, fruitful. Government of Haryana’s approach: Unemployment is a concern across states. In general, unemployment manifests in three ways: 1) there is no skilled manpower for the existing jobs in the market; 2) both skilled manpower and jobs are available, but there are no linkages between the unemployed and job providers; 3) there is skilled manpower but no jobs in the market. The government often intervenes in scenario 3 by creating demand. However, this is a long-drawn-out process that can’t address the unemployment challenge with immediacy. Saksham Haryana focuses on scenarios 1 (more on this in another blog post) and 2. The partnership with Ola and Uber specifically addresses scenario 2 by creating systemic linkages between job seekers and employers. These interventions show immediate results and also have smaller financial implications for the government. Why this collaboration is unique: More often than not, partnerships between the government and private entities tend to give the former more control than the latter. Either the private entity is relegated to the position of a vendor who does the government’s bidding, or it is seen as a source of funding and nothing more. In the case of an NGO, the government believes it is letting the organization fulfil its mandate rather than working collaboratively on a development issue. In all these cases, the partnership is far from equal and ends up giving more control to one partner. This prevents both entities from working together, solving problems, and continuously iterating on how best to achieve a common objective. In this sense, the Government of Haryana’s partnership with Ola and Uber is unique. They signed a non-financial MoU, i.e. the government didn’t pay the companies or vice versa. The government also didn’t give Ola or Uber any target in terms of the number of youth to be employed. Instead, it asked both companies to commit to targets they thought would be feasible. If Ola or Uber failed to meet their commitments, there was no financial penalty involved. The partnership works on good faith and the recognition that one can help the other. Neither Ola nor Uber could try to become a monopoly player in Haryana, nor could the government demand that the firms hire only residents of the state. The idea behind the partnership was creating jobs and increasing market access, and all entities worked collaboratively to achieve this end. What does implementation look like from Samagra’s end?
Samagra’s Saksham Haryana Skills and Employment team has been working with the Government of Haryana to streamline policies and procedures, solve problems with the aggregators, facilitate data and information exchange, and help organize exclusive job fairs where Ola and Uber can hire interested candidates. All of this requires a continuous flow of communication from the aggregators to the government and vice versa, so the Saksham Haryana team has set up a robust communication channel between the two parties. For instance, if Ola or Uber wants to hire drivers from a particular district, they convey this to the nodal officer in the government. The nodal officer then directs that district’s employment officer to collate a database of interested candidates who meet the eligibility criteria and send the information in the required format. The nodal officer then shares this database of potential candidates with the aggregators. In another case, where the aggregators faced inordinate delays in getting commercial licenses processed at the regional transport office, Samagra’s Saksham Haryana team was able to work with the Department of Transport to reduce the processing time from 4 weeks to 2. This helped not only Ola and Uber but also other drivers and businesses. As part of the Saksham Saarthi initiative, the Government of Haryana agreed to organize exclusive job fairs for Ola and Uber in districts of their choice, which significantly helps the aggregators ramp up recruitment. Organizing these fairs is primarily the responsibility of the District Employment Officer (DEO), and the Samagra team supports the DEO with their planning and management. The job fairs target candidates from rural areas. Since September 2018, 14 job fairs have been organized across districts, with a footfall of nearly 3,000 candidates and more than 600 drivers onboarded. [Image: a job fair organized by the District Employment Office, Sonipat, for Ola & Uber.] What makes this partnership successful: The straightforward criterion for assessing the success of this partnership is the number of jobs generated in the state. Since July 2018, more than 24,000 youth in the state have been onboarded by Ola and Uber. The aggregators have expanded their services into four districts of the state and are starting new services as well. But beyond the numbers, this collaborative partnership between two equal stakeholders has also resulted in meaningful policy changes, changes that improve the ease of doing business for all operators in the market, not just Ola or Uber. To understand their modalities and impact, these policy changes were first rolled out in one district of the state, Gurugram, and will soon be implemented state-wide. Samagra worked with the Department of Transport to allow non-local residents to apply for commercial vehicle licenses. Processing applications used to take an inordinate amount of time, causing delays; the procedure was streamlined by reducing the number of steps involved, and as a result, the processing time came down from 4 weeks to 2. To ensure there is no variation across districts in the procedure and documentation required for processing commercial licenses for two-wheelers, an SOP is being developed at the state level. The introduction of bike taxi services by the aggregators, along with the streamlining of the licensing procedure, incentivized more applications.
Between January 2019 and March 2019, at least 150–200 commercial bike licenses were processed at the Gurugram Regional Transport Office, apart from those for Ola and Uber. This points to a gradual expansion of the market for commercial bike services in the state. The engagement is now approaching its one-year anniversary, and our focus is to make it a sustainable collaboration. The Saksham Haryana team has created a detailed SOP for the Department of Employment so as to build capacity within the government to manage this innovative partnership; department officials are currently being oriented to this SOP. Based on the success of and learnings from Saksham Saarthi, the Government of Haryana is considering engaging with more aggregators in other sectors.
https://medium.com/@mugulur/how-haryana-is-working-with-cab-aggregators-to-solve-the-unemployment-puzzle-eab21e84da1c
['Aneesh Mugulur']
2019-05-20 09:55:05.265000+00:00
['Ola', 'Unemployment', 'Saksham Haryana', 'Transforming Governance', 'Haryana']
Core Values for Web Development Teams
As a webdev team, we have several core values that help guide our decisions day-to-day. Here are a few examples: Tests or it didn’t happen. A familiar refrain in internet culture, when someone tells an outlandish story, is “Pics or it didn’t happen.” We say the same thing here, except for tests. Tests provide 1) runnable documentation of business rules, 2) a great practical way to ensure devs ponder and cover edge cases, and 3) a guard against regression. We all know this, though. Ask any developer anywhere if they think tests are good and necessary and they’ll say, “of course!” Then ask them if they have greater than 90% test coverage on their projects. Most of them will balk at that. Here, however, we enforce it via automation, which touches on another of our core values, “automate all the things.” Automate all the things (especially rules). This is how you do much with few people and few errors. Automate build pipelines, test runs, deployments, license checks, everything. Spending time up front to set up a CI/CD script will save you countless hours of tedium. More importantly, it will boost your speed to market, a.k.a. value delivery (see “It doesn’t count until it’s released”). Rules deserve special mention here. Development teams usually come up with rules around things like test coverage percentages, code style/formatting, branch naming patterns, and more. Developers are often under pressure and, being human, are wont to cut corners sometimes. It is natural. However, keeping compliance and quality consistent over time requires constant enforcement of the team rules. Enforcing these rules manually is like herding cats and is doomed to failure. Instead, automate them. Use linters, code formatters, coverage thresholds, SCM push rules, and the like to make enforcement automatic, constant, and immune to whatever “fires” are giving the humans, including the tech leads, stress enough to want to cheat. Rules: if they’re not automated, they’re not enforced (a minimal sketch of such an automated gate appears at the end of this piece). It doesn’t count until it’s released. When we build features and push them to develop, the issue is closed and the developer considers it “done”. They’ve created new value in the product. However, like potential energy, created value is not doing anything productive until it is delivered to a user. And, of course, delivering value to users is the name of the game in the software business. Merges to develop and sprint demos are not the goal. Releases are the goal. Deleted code is tested code. Old, stale code can be a source of bugs and security vulnerabilities. Get rid of dead code, and if you can refactor 50 lines down to 5, do it. In the quest for code coverage, deleted code is tested code, i.e. you get that code coverage for free. If you happen to need to refer to that code later, find it in your git history. There’s no need to keep it in the file commented out “for reference”. Work should be “edged”. With advances in communication technology, we can read emails on the go. This gave rise to so-called “edgeless work”, where you’re never really off. We don’t like that. We work hard during our work time and play hard during our play time. Our team respects people’s fun time, family time, exercise time, …, off time. Sure, some developers like to code as a hobby. That is, of course, up to them. But they should be able to work on their own projects during their own time. Write code for humans, not computers. Computers are fast, and we have code minifiers anyway.
Code that is a bit more verbose, with longer, more descriptive variable names, will help you and other developers return later and maintain that area of the codebase more quickly, with less stress and fewer new bugs. Remember, head-scratching time costs money too. We don’t have 1 team of 7, we have 7 1-person teams. We strive to train all teammates to do all the things: dev, QA, ops, scrum master, release manager, etc. This is to mitigate key-person risk and to ensure that anyone can go on vacation without being bothered, because there are multiple people who can perform any one task. The only way to “have time” is to “take time”. This is a great quote from the Matrix movies. We understand that, left to their own devices, product managers would have the dev teams work only on business projects from the roadmap and not so much on internal projects like process automation, refactoring for maintainability, and the like. Many dev teams intend to get to those when they “have time”. Instead, we choose to “take time” each sprint to tackle a “balanced diet” of issues covering business projects, internal projects, bug fixes, refactorings, and automated testing. This would never happen without conscientiously adding these other types of issues to each sprint backlog. Coding time is sacred. Coders are paid to code, and it is difficult to get back into the zone once distracted. All developers are encouraged to protect their coding time from extraneous meetings and other distractions. If it is becoming a problem for a developer, it behooves them to alert the manager, whose job it is to help protect the developers’ time. We may decrease grade, but we’ll never decrease quality. Sometimes there are external pressures and deadlines we cannot avoid. In these cases, we may have to reduce the scope of our work to fit within the time box. This is fine. Releasing features at a reduced “grade” may be necessary from time to time. However, although we reduce the grade of the feature set, we will not reduce the quality or test coverage. Too much of a good thing is bad. How long should a feature request write-up be? As long as it needs to be. If the developers, QA, etc. understand the intent of the feature, there is no need for the developer/manager/product owner to spend time writing out a novella in the issue description field. Some people look down on “incomplete issue cards”. We look down on wasted time and effort. No matter how far you go down the wrong road, turn back. Things evolve, lessons get learned, technologies fluctuate. We do not do things a certain way because they’ve always been done a certain way. We are always open to improving and refining our team processes and technologies.
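To make “automate all the things (especially rules)” concrete, here is a minimal sketch in Python of the kind of automated gate described above, runnable locally or as a CI step. The specific tools (pytest with the pytest-cov plugin, black), the 90% threshold, and the script name are illustrative assumptions, not this team’s actual stack:

```python
#!/usr/bin/env python3
"""ci_gate.py - hypothetical gate that enforces team rules automatically.

Assumes pytest, pytest-cov, and black are installed; the 90% coverage
threshold is an illustrative choice, not this team's documented number.
"""
import subprocess
import sys

CHECKS = [
    # Fail the build if test coverage drops below 90% (pytest-cov flag).
    ["pytest", "--cov=.", "--cov-fail-under=90"],
    # Fail the build if any file violates the agreed formatting.
    ["black", "--check", "."],
]

def main() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Rule violated by '{' '.join(cmd)}'; failing the build.")
            return result.returncode
    print("All automated rules passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Because the same script runs on a laptop and in CI, enforcement stays constant no matter how many “fires” the team is fighting that week.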
https://zachary-keeton.medium.com/core-values-for-web-dev-teams-21eeb5d2c0db
['Zachary Keeton']
2020-11-30 10:32:02.164000+00:00
['Software Development', 'Leadership', 'Web Development', 'Tech Leadership']
Top 10 trading challenges
From an external point of view, trading has all the cards to seem very easy: all you need is a computer, and you can get rich while sitting comfortably on your sofa. Unfortunately, the reality doesn’t quite correspond to this ideal scenario. Trading involves a lot of daily struggles and pains, and we have summarized the 10 major ones for you. 1. Trading needs a lot of preparation before getting started. The truth is, you can’t just sit on your couch and wait for the money to arrive in your wallet. Before you start seeing any return, you need to make a few investments, not only of money but mainly of your time and effort. You need to carefully study the markets, choose your target companies, analyze trends in the historical data, and try to forecast what the future holds. 2. The amount of information to collect and digest is enormous. Your analysis can’t be limited to your company, though; it needs to extend to the whole market. This means you also need to study your company’s competitors, the established ones as well as the newcomers to the sector. 3. The market needs to be monitored constantly, and sometimes your strategies might require major changes. The market can change as we speak, and we might be required to change our plans on the spot. Preliminary research is extremely important but, unfortunately, it is not enough. The information you collect has to be kept up to date, and above all you have to keep your eyes everywhere to stay aware of what is going on. 4. Lots of patience is needed to wait for the right signals. You might want to follow your instinct, and you might do so at the beginning, as it seems the easiest way. But what happens once you realize that, if you base your strategy on faith about the future, your trading career won’t last long? 5. The fear of losing can greatly affect your decisions. Losses are hard to accept and harder to overcome. How do you make decisions freely when the mistakes of your past are stuck in your mind and affect the way you read signals in the future? 6. Uncertainty about the future is the feeling that most affects a trader’s mind. In trading, every move you make might be either the best or the worst decision of your life. Every transaction could bring you either a great amount of money or a terrible loss. Trading successfully requires the calmest nerves, yet puts a lot of pressure on your shoulders. You are starting to realize it’s not exactly as easy as you thought, right? 7. You might think about asking a third party, like a broker, for help. But which one to choose? You might find a lot of people offering their assistance, but how do you choose the right person? And is this choice necessarily easier than picking the right company to invest in? 8. Scams, fake gurus, and inexperienced brokers. Someone might lie to you and take all your money. Someone else might have good intentions, but not equally good expertise in the matter. 9. Consecutive losses can have negative psychological effects. You might feel stressed and nervous about all the money and time you have lost. Once again, how do you keep your nerves calm, knowing that you might lose again? 10. Admitting you were wrong and moving on is probably the hardest challenge of them all. A good trader needs, with time, to learn how to master his feelings and move on. He needs to forget his mistakes but also learn from them, in order not to make them again.
This last point, in particular, represents a great challenge for our human nature, and it can actually be even harder than acquiring all the knowledge and technical skills of a trader.
https://medium.com/aitrading/top-10-trading-challenges-8ca91ae728a1
[]
2018-06-05 10:51:09.418000+00:00
['Personal Finance', 'Exchange', 'Bitcoin', 'Fun', 'Trading']
Importance of Skin Care: Ever wondered just what exactly our skin does?
The importance of skin care cannot be overstated. Did you know, for example, that the skin is the largest organ of the human body? Mostly we take our skin very much for granted and don’t spend much time thinking about its functions and their importance until some problem occurs or we injure ourselves. The following article provides some insight into why good skin care is important and suggests a simple skin care regime anyone can follow to help your skin perform at its best. Our skin’s functions are too many to go through here in detail; however, it protects our ‘insides’ from the external environment, acting as both a barrier and a filter between ‘outside’ and ‘inside’ our bodies. The skin helps regulate our body’s temperature: when we have a fever or we’re physically working hard, we tend to sweat, which is the body’s way of attempting to lower the temperature. The skin also protects us from harmful substances entering our body, and it eliminates many toxins. This takes workload off our liver and kidneys, which filter out by-products of our body’s metabolism. The skin also breathes! These are just some of the important functions of our skin, and as you can see, looking after your skin is vital, not just for your outer beauty but for maintaining your inner health. Now, there is skin care and there is skin care… well, we all (hopefully) wash our body (skin) every day, we may even rub some body cream on, and that is pretty much that. Most women go one step further when they make up their face, often using a cleanser and then a moisturiser before applying make-up. But is that really skin care? I think not. I would consider it an attack on the skin rather than caring for it. You see, most of the products marketed to women are full of artificial colours, stabilisers, emulsifiers and other chemicals, which are supposed to help in achieving a ‘beautiful complexion’. Some products advertise hormones, which are supposed to make your skin regain that youthful (no wrinkles) look, but more often than not these ‘hormones’ are artificial or synthetic and may well cause problems with the hormonal balance of the body. This is NOT skin care; this is plain old manipulation and marketing. Real skin care is much more than that and is more than just skin deep. Your skin is a living, breathing organ of your body. As such, just like every other organ in our body, it needs to be fed from the inside; it requires nutrients. There are 4 basic ingredients to feeding the inside. Good nutrition: keep it simple, fresh and unprocessed. That is the best nutritional advice I can give. The simpler the food, the less processing, and the fresher your food is, the better it is for you. Fresh fruit and vegetables contain so many of the nutrients we need to maintain our health. Sure, have the odd processed, high-fat meal when you’re enjoying a meal out or have to attend a luncheon or whatever. But make sure you have more natural foods than not. Keep the diet varied; don’t eat the same old, same old… risk a new veggie, one you haven’t tried before. You might like it. Adequate rest and relaxation: don’t work yourself to an early grave; it’s not worth it. Make sure you get the sleep you need. Did you know that a study in England showed that your IQ (intelligence) drops if you do not have 8 hours of sleep per night? Think about it: do you get more work done if you feel well rested? Can you concentrate better if you’re not tired? I bet you can.
Well, why not invest some additional time in rest and relaxation so that you gain an increase in energy and concentration? I’m sure you will find you get more work done in less time if you’ve had sufficient rest. Sufficient water intake: that’s a biggie. Most people (irrespective of where they live) will utilise around 3 litres of water per day. Hey, don’t believe me, all the medical texts say so. Our body simply needs water to function. If you do not drink at least this amount, your body will either not function well (on some level) or it will take water from wherever it can. That is called dehydration. You know: dry lips, dry flaky skin, parched mouth, cracks on your tongue, premature wrinkles… the list goes on. So, drink up (water, mind you) or shrivel up; it’s up to you. Fresh air and sunshine: well, what can I say? Take a deep breath of air and tell me it doesn’t feel great. Oxygen is the stuff of life. Fill your lungs with it. Here I could go into how most of us do not know how to breathe properly, but I’ll save that for another article. So what does all this have to do with skin care? Well, what is the point of putting expensive beauty products on your skin when you do not give it the stuff of life from the inside? The cells that make up your skin need the right nutrients for proper development, growth and all that. You can help your skin by using good quality natural skin care products, but you have to support this from the inside as well. Only in that way can you expect to get good results from proper skin care. So what’s proper skin care? Well, for starters there are 3 basic steps: 1. Cleanse and condition. 2. Hydrate and tone. 3. Moisturise and revive. OK. Cleansing the skin seems obvious, and I know, you do know how to use soap. Wrong: this is one sure way to make your skin dry out quicker. Most soaps remove the natural oils of the skin, change its natural pH levels and do nothing to remove the dead layers of skin, which can block your pores and lead to blackheads. And, oh no, not pimples! The skin produces oils and acids to help it function, to protect it from excessive moisture loss, to form a barrier… etc. So please do not use soap or detergents unless it is necessary. Using a loofah or a gentle ‘scrub’ will remove the dead skin cells, and this in turn will promote better blood circulation and help your skin to breathe. The next step is to hydrate and tone the skin. Say what? Well, you’ve just removed the dead skin layers by rubbing the skin with a loofah and/or a specially formulated cleanser; now it’s time to remove the residue, soothe the skin and prepare it for a good feed of nutrients from the moisturiser. Preparing the skin before putting on the moisturiser is not dissimilar to preparing a surface for a new coat of paint. You wouldn’t just paint over a wall that hasn’t been cleaned and prepared for the new paint, would you? It would be a waste of time and money. Well, good skin care is the same: you first get rid of the old layer of paint, then you give it a primer and finally the top coat. Ah, you already use a moisturiser… Great, at least that’s a step in the right direction. But have you looked at the ingredients? Are they natural, or are there numbers and words you don’t recognise on the label? If so, then consider that your body absorbs these substances, and if they are not useful (preferably of a natural kind), then the body has to eliminate them, and that’s more work and not necessary.
In some cases, the body actually can’t eliminate these substances and has to store them. This is a potential problem and could cause health issues down the track. Pure essential oils or herbal extracts are usually good ingredients to have in your skin care products. So there you have it: the importance of good skin care. These steps, if you follow them, will help you to achieve the results you want, and your body will thank you too.
https://medium.com/@libesh-services/importance-of-skin-care-ever-wondered-just-what-exactly-our-skin-does-4afc49156430
['Libesh Services']
2020-12-19 12:07:19.147000+00:00
['Womens Health', 'Sexual Health', 'Anti Aging', 'Skin Care Products', 'Skin Care Tips']
The Pain of Humanity’s Coming Realization
The web of lies from the fetid core of the highest offices of power continues to unravel at what feels like an exponential pace. As such, I strongly suspect that many more disturbing facts will come into focus in the coming days and months, facts that, had they been proposed as hypotheses mere months ago, would have been considered laughable within the context of any serious discussion. And yet it turns out that the goal of the Great & Powerful People’s Club (i.e. Bankster Lobbyist Warmonger Fantasy Camp; hereafter the “Special Club”) was not to help you or solve your problems, but to help themselves at your expense and to instigate at least enough problems that you wouldn’t notice their elaborate graft. And just as obvious as the conspiracy has become, so too has the cover-up. Power-hungry people, a few of whom seem to lack any regard for kindness towards their fellow man, not only reached their hands into your pockets but stuck their greasy fingers into your brains and the brains of your children, instilling in you an artificial instinct to laugh off any mere suggestion of such a deception. Consider this mental exercise: if one were to suggest that the Sun revolves around the Earth instead of the other way around, wouldn’t such a suggestion seem quite absurd to most people of sound mind living in America today? Yet in the old days, before the hypothesis of heliocentricity had been discovered, discussed, vetted, and widely disseminated, one would expect that most people would have had no idea how the solar system works. But now, after Galileo, and with a small bit of research and consideration, the suggestion that the Sun revolves around the Earth is easily dismissed as illogical and without merit, and there’s ample evidence to back that dismissal up. As for the complete picture of our current situation, imagine a jigsaw puzzle for which you had no box or knowledge of the image printed upon it. At first, the edge pieces were hard to come by, and though their tone was clearly dreary, the picture they outlined remained unintelligible. After a bit more picking and sorting, a few more pieces came into place; parts of the picture became clear, but others required more pieces to become apparent. And after a good deal of effort and a few more Wikileaks-shaped puzzle pieces, the puzzle’s grim picture becomes suddenly obvious: it’s a picture of a boot, stamping on a human face, forever. And while this metaphorical puzzle isn’t yet totally complete, it’s only a matter of time until it is. Instead of 1,000 pieces and no road map, we now have only a few handfuls of pieces left, but little wonder as to what landscape they’ll reveal. As such, the picture on this puzzle is quickly becoming far too detailed and vivid for one to credibly deny the reality of what’s happening. The scams are becoming obvious, as are the poignant arguments against those who deny their existence. That said, I suspect that many humans will continue to deny even such a clear reality, insisting that the sky is green and the grass purple. Those who have had the greatest magnitudes of deception put upon them have, in essence, been turned into various forms of weaponized humans, using belligerence, anger, verbal trickery, and sometimes violence towards their fellow man, all in an effort to suppress the truth.
But most tragically, those deceived have been turned into weapons against their very selves. Whatever survival purposes it may have served in the past, the human mind has evolved a great and cunning ability to self-deceive. That ability and its power are not lost on those inside the Special Club; instead, it has been used to convince the deceived victim, beyond a shadow of a doubt, that any reasonable explanation for how the world works other than *the company line* is total anathema and SHALL NOT be considered. And as a result, it’s only natural that, more often than not, such self-deceptions have been carried on for generations in spite of the facts, given that the facts were mostly (and cleverly) hidden from view. Because knowledge (and especially self-knowledge) is power, and a lack thereof inevitably leads to a greater likelihood of misery and regrets, the realization that one has been living under a false view of how one’s self relates to the world is a painful one; in this case, the avoidance of reality is really just the avoidance of very intense pain. As mentioned previously, because this phenomenon is now becoming blindingly obvious, the distorted worldviews of many of those who are currently still deceived will soon, and quite suddenly, morph into ones based much more in reality. The available evidence, pushed past a tipping point, will force the deceived to catch themselves in their own self-deception. While I suspect this increased awareness of the truth and decrease in overall self-deception will, in the long run, be a great boon to humanity, I certainly don’t point out the coming awakening of those who have been deceived with relish. For those people, the process will be abjectly painful. Consider a time when you realized you had made a horrible mistake long after it had caused you significant harm, often even beyond the point of remedy. It’s an extremely painful process of realization and subsequent coping. I wouldn’t wish such pain upon my most conniving opponent. Seeing this picture of Huma Abedin and Hillary shortly after learning about the new FBI probe gives me genuine pangs of empathy. I see in their eyes the intense pain of regret and realization. To me, it’s haunting.
https://medium.com/@rationalinfo/the-pain-of-humanitys-coming-realization-2da0e77f431a
['C. R. Mason']
2016-11-03 20:33:17.492000+00:00
['Politics', 'Future', 'Donald Trump', 'Hillary Clinton', '2016 Election']
Dirty Laundry
If there is one house “chore” I hate doing, it’s laundry. During the week, as you put your clothes in the bin, you can easily forget about them. The first few days, it’s just a t-shirt, some workout clothes, and some underwear. But as the days go by, the pile starts getting bigger. If you let it get too big, you start to realize that you don’t have any clean clothes to wear. Or you want to wear that cute t-shirt, but then you remember it’s in the dirty pile. You groan, because it means you have to face the obstacle: doing laundry. We all have dirty laundry we try to put off for later. We pile it away in the back of the closet, and sometimes we just try to hide our empty drawers by buying new clothes, living a new lifestyle without facing the pile that keeps growing in the back of the closet. Sometimes I don’t clean out my inner “dirty laundry”: the struggles I’ve gone through during the week, the fear that has grown and grown. I leave it hanging in the back of my mind, hoping that the new week, new month, or even new year will cover up the pile of clothes I have yet to face. Do you have “dirty laundry” you refuse to face? It’s never too late to take some time to dig through it, throw out what you don’t need, but also find that nice t-shirt you forgot about, clean it up, and hang it right back in your closet. Starting on a clean slate means cleaning up the messiness that’s on top of it.
https://medium.com/@alexa-ruiz0109/dirty-laundry-58af59db6a02
['Alexandra Ruiz']
2020-12-23 16:38:46.562000+00:00
['Resolutions', 'New Year', 'Motivation']
Face Mask 😷 Detection Using Deep Neural Networks (PyTorch)
Photo by Anastasiia Chepinska on Unsplash. ABOUT THE PROJECT Face masks play a vital role in protecting individuals’ health against respiratory diseases, and they are one of the few precautions available for COVID-19 in the absence of a vaccine; wearing a mask is now compulsory almost everywhere. So here I created a model that detects whether a person in an image is wearing a face mask or not. In this article I will explain my approach to this project and how I created it from scratch. ABOUT THE DATASET The dataset contains around 12,000 images of different sizes belonging to 2 classes (with mask and without mask). The dataset is available on Kaggle; you can find it here. Training set size: 10,000 images. Validation set size: 800 images. Test set size: 992 images. Let’s get started. PREPARING THE DATA We begin by importing all the required libraries and functions. We import os and tarfile for extracting data from the dataset directories, ImageFolder to load the data from different folders as PyTorch tensors, random_split and DataLoader to load data into batches for training and validation, matplotlib and make_grid for data visualization, and the torch.nn package from PyTorch, which contains utility classes for building neural networks. Let’s take a look at the data directories and classes and load the data as PyTorch tensors. Training, validation and test datasets: As our dataset contains images of different sizes, we apply some transformations to the training, validation and test sets to make all the images the same size (most computer vision applications need same-size images). For that we use PyTorch’s built-in functions to resize all the images to 224x224 pixels and then convert them into PyTorch tensors. Dataloaders: Now we create dataloaders to split the data into batches for training and validation. A dataloader returns the dataset batch by batch with a predefined batch size; instead of loading the whole dataset at once, we load it batch by batch so that we don’t run out of memory and our training doesn’t slow down. Here we use a batch size of 128. Let’s take a look at a batch of data from the training dataset; for that we create a helper function, show_batch. As our dataset is huge, we need a GPU to train our model within a reasonable amount of time, so we define a helper function for selecting the available device (CPU or GPU). This function checks whether a GPU is available with the required NVIDIA CUDA drivers installed and returns the available device. Next, we define a few helper functions and a class to move our training, validation and test data to the available device. DEFINING THE MODEL To include some additional functionality in our model, we define a custom model by extending the nn.Module class from PyTorch; nn.Module is the base class for all neural network modules. Before defining our model, let’s create some helper functions. Here we have the class Dnn, which contains a training_step function that calculates the loss on the training data, a validation_step function that calculates the loss and accuracy on the validation data, and validation_epoch_end and epoch_end functions that calculate and print the validation loss and accuracy after every epoch (iteration). Note that we use cross-entropy as the loss function, which performs well on this kind of problem; you can read more about it here. Now we will extend the Dnn class to complete the model definition.
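Since the article’s code cells live only in the linked notebook, here is a minimal sketch, reconstructed from the description above, of what the Dnn base class and the model that extends it might look like. The class name MaskClassifier, the use of nn.Sequential, and the flattened 3 x 224 x 224 input size are assumptions consistent with the text, not the notebook’s actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def accuracy(outputs, labels):
    # Fraction of predictions matching the labels in a batch
    # (the article introduces this helper in the training section).
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))

class Dnn(nn.Module):
    """Base class with the training/validation helpers described above."""

    def training_step(self, batch):
        images, labels = batch
        out = self(images)
        return F.cross_entropy(out, labels)  # loss on a training batch

    def validation_step(self, batch):
        images, labels = batch
        out = self(images)
        return {'val_loss': F.cross_entropy(out, labels).detach(),
                'val_acc': accuracy(out, labels)}

    def validation_epoch_end(self, outputs):
        # Aggregate per-batch validation losses/accuracies for the epoch.
        return {'val_loss': torch.stack([x['val_loss'] for x in outputs]).mean().item(),
                'val_acc': torch.stack([x['val_acc'] for x in outputs]).mean().item()}

    def epoch_end(self, epoch, result):
        print(f"Epoch [{epoch}], val_loss: {result['val_loss']:.4f}, "
              f"val_acc: {result['val_acc']:.4f}")

class MaskClassifier(Dnn):
    """Input layer, two hidden layers (128 and 256 units), and an output layer."""

    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.Flatten(),                   # 3 x 224 x 224 image -> 150528 features
            nn.Linear(3 * 224 * 224, 128),  # input -> first hidden layer
            nn.ReLU(),
            nn.Linear(128, 256),            # first -> second hidden layer
            nn.ReLU(),
            nn.Linear(256, 2),              # output: with mask / without mask
        )

    def forward(self, xb):
        return self.network(xb)
```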
Here we have created a neural network with 4 layers: one input layer, two hidden layers, and an output layer. For each layer, we use PyTorch’s built-in nn.Linear module, which takes two arguments: the input size (number of features) and the output size (number of classes or output labels). The hidden layers have sizes 128 and 256 respectively, and the activation function used here is ReLU (rectified linear unit). It has a simple formula, relu(x) = max(0, x): if an element is negative, we replace it with 0; otherwise we leave it unchanged. Introducing hidden layers and an activation function allows the model to learn more complex, multi-layered and non-linear relationships between the inputs and the targets (outputs or labels). TRAINING THE MODEL Before we actually begin the training process, we need to define some helper functions for model training and evaluation. We define an accuracy function, which calculates the overall accuracy of the model on an entire batch of outputs, so that we can use it as a metric in the fit function; an evaluate function for calculating the loss and accuracy on the validation data after every epoch; and the most important utility function, fit, which trains the model for a given number of epochs. It basically performs the following operations: 1) generates predictions, 2) calculates the loss, 3) computes gradients w.r.t. the weights and biases, 4) adjusts the weights by subtracting a small quantity proportional to the gradient, and 5) resets the gradients to zero. At the end of every epoch, it evaluates the model on the validation data and prints the loss and accuracy (a sketch of these utilities appears at the end of this article). Note that the optimization algorithm used here is stochastic gradient descent (optim.SGD); you can learn more about it here. Before we train the model, we need to ensure that the data and the model’s parameters (weights and biases) are on the same device (CPU or GPU). We can reuse the to_device function to move the model’s parameters to the right device. Let’s initialize our model and move it to the available device. Now we are ready to train our model; let’s train it for 10 epochs with an initial learning rate of 0.01. After this, I trained the model for a few more epochs with learning rates of 0.001 and 0.0001, and after evaluating it on the test data, I was able to achieve an accuracy of 95.37%. Now let’s test our model on the predefined test data and see how it performs. Well, you can see we got a test accuracy of 95.37%, which is pretty good; our model is performing quite well. Check out the entire notebook of this project here. If you are reading this, I hope this article helped you gain some knowledge about creating an end-to-end model using neural networks and encouraged you to learn more. Thank you so much for taking the time to read this! I hope you enjoyed it. You can connect with me on LinkedIn and Twitter. REFERENCE LINKS Check out the playlist of the course “Deep Learning With PyTorch” here. Check out jovian.ml, a sharing and collaboration platform for data science projects and Jupyter notebooks. Check out my other projects here.
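For completeness, here is a sketch of the evaluate and fit utilities as the article describes them, with the five operations of fit marked in comments. The exact epoch counts for the follow-up runs at the lower learning rates are assumptions, since the article only gives the learning rates:

```python
import torch

def evaluate(model, val_loader):
    # Run validation_step on every batch and aggregate loss/accuracy.
    model.eval()
    with torch.no_grad():
        outputs = [model.validation_step(batch) for batch in val_loader]
    return model.validation_epoch_end(outputs)

def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
    history = []
    optimizer = opt_func(model.parameters(), lr)
    for epoch in range(epochs):
        model.train()
        for batch in train_loader:
            loss = model.training_step(batch)  # 1) generate predictions, 2) compute loss
            loss.backward()                    # 3) compute gradients w.r.t. weights/biases
            optimizer.step()                   # 4) adjust weights proportionally to gradients
            optimizer.zero_grad()              # 5) reset gradients to zero
        result = evaluate(model, val_loader)   # validate at the end of every epoch
        model.epoch_end(epoch, result)
        history.append(result)
    return history

# Training schedule described in the article (epoch counts for the lower
# learning rates are assumed; the linked notebook has the real ones):
# model = to_device(MaskClassifier(), device)
# history  = fit(10, 0.01,   model, train_loader, val_loader)
# history += fit(5,  0.001,  model, train_loader, val_loader)
# history += fit(5,  0.0001, model, train_loader, val_loader)
```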
https://medium.com/dsc-dypcoe/face-mask-detection-using-deep-neural-networks-pytorch-af448f78a8b6
['Saurabh Palhade']
2020-06-30 12:35:54.548000+00:00
['Deep Learning', 'Computer Vision', 'Machine Learning', 'Software Engineering', 'Neural Networks']
You Will Probably Never Have a Chance to Climb Kilimanjaro Like This Again!
During a largely forgettable year, some decided to make it memorable despite the additional challenges they faced. Here is how they fared, and why it might be a surprisingly good time to make that Kilimanjaro dream come true. If you’ve climbed Kilimanjaro during high season, you know it’s busy; it can be crowded, even noisy. If you arrive late to camp, no matter how hard your team works to secure the ideal spot on the mountainside, they may not have a great location for you. Climbing under Covid-19: The title of this piece is taken from a comment by Sianna S., who summited the mountain seven years after I did. Here’s what she had to say about her journey: “2020. A year of restrictions, cancellations and postponements. But if you are thinking about climbing Kilimanjaro, this is not the moment to hesitate! For now, due to the lack of many tourists, it is the time where you can fully enjoy the remoteness and tranquility of the mountain,” said Sianna S., who recently climbed with Just Kilimanjaro. She went on to explain that she and her 57-year-old father saw that “after many months without any tours, you could feel the whole crew was highly motivated and so happy to be back on the mountain. I can definitely recommend trekking up Kilimanjaro as a safe holiday alternative far from many people, where it is easy to forget about Covid for a week.” [Photo: Annie Leroy] Okay. Let’s talk about this, particularly if you’ve put your plans on hold this year. While it might be very tempting for armchair Covid quarterbacks to shame folks who have taken the plunge to put this climb on their agenda during a year when so many of us are grounded, let’s please take a moment to consider. First, the airlines are doing everything they can to ensure safe passage, perhaps more so right now than they ever have. In addition, many airlines have made cancelling or changing plans free of charge, which, given what these two women had to say, is hugely helpful: “We had some challenges with flights due to Covid 19, but, fortunately, we were able to make it! From day one our company had our best interests in mind and cared for us every step of the way. The crew wore masks, kept social distance, and sanitized regularly to keep us feeling safe,” said Julie S., who climbed Kili with Pristine Trails, joined by her daughter to celebrate her 56th birthday. The airlines want you to fly. They have made sweeping changes not just to how they sweep the plane’s surfaces for germs: the expensive, daunting costs of sometimes inevitable last-minute plan shifts are gone. GONE. That could be hundreds of dollars in fees saved, all to make sure you get where you need to go. That said, if you do choose to book, always leave a few extra days’ grace just in case. Tanzania opened its borders on June 1st, 2020 and welcomed tourists back into the country. In May, the Ministry of Tourism established its National Standard Operating Procedures (SOP) for tourism-related companies. The Kilimanjaro National Park also defined its Handling Procedures of Tourists during the Outbreak of COVID-19. Furthermore, the Kilimanjaro Porters Assistance Project (KPAP) and its Partner companies created a special SOP committee and outlined additional criteria for the Covid safety and education of the mountain crew. The KPAP Partner companies were provided with handouts, in Swahili (the predominant language of Tanzania), on mask application, proper handwashing, Covid education and protection.
Due to its commitment to safety, KPAP added these SOPs to its regular monitoring of the Partner companies’ treatment standards towards their crews. [Photo: Kevin Meier] Here’s what one of the Partner companies had to say: “To combat the pandemic, we implemented all of the requirements set out by the government to safeguard the health of every client. We also worked very hard with KPAP to develop new standard operating procedures to protect our Kilimanjaro crews. This gave our management team the confidence to continue with our climbing operations instead of shutting down. Our clients were welcome to come on their scheduled trips or, if they wanted to postpone, they could do so with no penalty. We offered total flexibility. As a result, we conducted group climbs with just a few people, sometimes at a financial loss or breakeven. But we felt it was crucial to keep the staff working to provide income during these difficult times.” (Tumaini Anatoly, Peak Planet Operations Manager) Training all year and bummed because your butt is on the couch? Maybe you don’t have to be. A number of people turned this time of low tourist turnout into an opportunity to enjoy the mountain without the usual crowds: “Our team was professional and prepared for every situation. We made it to the summit! But we also left with friendships and memories that run deeper than just trekking up Mount Kilimanjaro. These memories include people, relationships, laughter, and sincere heartfelt kindness and warmth between human beings during a time when COVID says you cannot do this,” explained Ben B. when speaking about his climb with Summit Expeditions & Nomadic Experiences. [Photo: Dianna Snape] Six of the climbers who contributed to this article are parent/child duos, which is a very special opportunity. Not only did these people get a chance to bond, but the porters and guides are immensely grateful for the work. The some twenty thousand porters, whose livelihoods support up to one hundred extended family members each, count on the income from these climbs. This past climbing season, May to November, most porters didn’t get regular work. With volunteer efforts, generous donations and collaboration with the Partner companies, KPAP launched a series of efforts to provide interim support for the porter community. Here’s what one climber revealed about what his adventure taught him about the local community: “I’m writing this in October 2020, deep into the coronavirus pandemic. Traveling is severely restricted and, for the most part, very few people have summited the mountain in the past 6 months. Climbing Kilimanjaro sustains the local community. I don’t think I fully appreciated this point before I was there. There are tons of people that rely on this as their main source of income — the chefs, porters, guides, trekking agencies and all of the local hotels, restaurants, food producers, merchants etc. The people were so grateful, not only to be able to return to the work that they love but also to be able to have their industry and livelihood supported financially,” wrote Kallum L. on his travel blog, after having climbed with 360 Expeditions. [Photo: Kallum L.] Ultimately, if you really want to immerse yourself in one of the finest adventures to be had, it’s hard to beat Kilimanjaro.
Not only are the companies completely dedicated to getting you to the base of the mountain safely and securely, they are also dedicated to getting you as close to the top and back down, not only in one piece but with the peace that can only be had when you get away from the world's insanity for a while. To that point:

"In the most interesting and memorable year of our lives, an October 2020 visit to Tanzania and climb of Kilimanjaro provided an incredible escape, both in our hearts and minds. Our porters and staff were as caring for our well-being as we were for theirs. It will be hard to not think of the year 2020 as a forgettable one for sure," wrote Jon and Nyla V. about their climb with Duma Explorer.

If you're energized by these folks' stories, please consider climbing Kilimanjaro with a KPAP Partner company. Please see:

They are dedicated to the safety and welfare of the porter community. As you can see from the above, when the porters are healthy, safe and happy, so will you be. The immense joy of standing under the famed sign is triple-underscored by the folks who helped get you there: well-fed, well-protected, and in excellent company.

Interested? Please consider reviewing this before you make your plans:

With airlines taking extra precautions and the climbing companies eager to do the right thing by you and by their crews, this is one place where you might well be safer than many others. The windswept, breathtaking slopes of this great mountain are incredibly quiet these days. Which is why, as someone who knows what it's like to feel almost alone on this amazing journey, I'll repeat what Sianna S. says above: this is not the moment to hesitate!

See you at the top.
https://medium.com/age-of-awareness/you-will-probably-never-have-a-chance-to-climb-kilimanjaro-like-this-again-94835672dc8c
['Julia E Hubbel']
2020-12-22 22:06:28.110000+00:00
['Adventure', 'Travel', 'Covid 19', 'Kilimanjaro', 'Africa']
RightMesh Hires Five New Members to the Development Team
About David

Our new QA Lead, David Sheen, joins RightMesh with over 18 years of experience in software and systems testing. He has worked with both large organizations, such as Ericsson and Telus, and scale-up organizations, such as Function Point, to implement best-in-class processes and procedures. He has led multiple successful teams, and we're looking forward to what he will bring to the table.

I am looking forward to working on a team that can deliver such an important project as RightMesh. Being able to bring connectivity to rural regions is vitally needed. My being able to support the project by identifying issues and helping improve the level of quality and stability gets me really excited. - David Sheen, QA Lead

About Peter

Peter Dang joins RightMesh as our newest Software Engineer. With experience as both an Android mobile developer and a web developer, he is a double threat and a great asset to the organization. Peter's previous roles have included working in fintech and integrating payment systems.

The Internet is currently only accessible to 35 per cent of people in emerging markets. Connectivity will help these nations accelerate progress, bridge the digital divide, and develop knowledge societies. I am looking forward to being a part of finding a solution. - Peter Dang, Software Engineer, Android

About Xuan

Our new Mitacs intern, Xuan Luo, joins RightMesh through our research project in partnership with Mitacs and the University of Guelph. Xuan is currently pursuing her Master's degree, with a specialization in blockchain technologies, payment channels, and machine learning, at the University of British Columbia. Throughout her internship, Xuan will be working closely with Dr. David Wang, Chief Micropayments Scientist at RightMesh, on payment channels. Prior to pursuing her Master's degree, Xuan worked as a software developer for six years at notable companies including SAP.

About Penjani

Penjani is a third-year Computer Engineering student at the University of British Columbia and is joining the RightMesh team as a Software Engineer Co-op. Prior to working at RightMesh, Penjani did an internship with Microsoft as a Software Engineer and worked as an Emerging Media (VR) Developer at the UBC Emerging Media Lab. In his spare time, Penjani also works as an Autonomous Navigation and Communication Developer with UBC Voyage.

I am really excited to work on technology that could empower millions of people through connectivity. Mesh-enabled applications that promote education, such as the sharing of resources, can make a big impact. It could be a wonderful platform for learning. - Penjani Chavula, Software Engineer Co-op

About Matthew

Matthew is a Software Engineering student at the University of Guelph and joins our team as a Software Engineer Co-op. Matthew first learned about RightMesh through the research work we're doing in partnership with his school and Mitacs to improve connectivity in northern Canada. Prior to working at RightMesh, Matthew did two internships with BlackBerry on enterprise solutions and automation testing.
https://medium.com/rightmesh/rightmesh-hires-five-new-members-to-the-development-team-c747edbf44cb
['Amber Mclennan']
2019-02-13 20:42:38.780000+00:00
['Payment Channels', 'Blockchain', 'Mesh Networking', 'Company Culture', 'Mesh Networks']
WHY SUICIDE
Suicide, a seven-letter word, is one of the most dangerous things in the world, worse even than death and murder. Suicide is the act of killing oneself, a kind of murder, but the most threatening part is this: the moment any individual even carries the thought of murdering himself, he becomes the most dreaded beast in the world. It is a fact that no one in the world can love others more than himself, so it is nearly impossible to understand the psyche of a person lingering on the thought of suicide; if he can harm himself, he can harm others to any extent.

But why? If we know that every problem carries a solution with it, then how do some people embrace death in their hard times? We must understand a few things about how we can prevent self-killing and save others.

The first and foremost is the story, the idea, that has already enslaved us: that suicide is a solution to any problem in the world, from the trivial to the worst. But suicide always demands courage and risk, which we don't have the first time we think about it. The reasons can be many: a problem looks big from the outside, but inside we know it doesn't carry enough gravity for us to give our life for it. Soon we find the problem has been solved, or we have become used to living with it. This process can go on for a long time; sometimes alcohol, weed, or outside support like family members and friends push the idea out of our minds. The most disturbing part, though, is that the space this idea occupies in our minds keeps growing, until for every problem we think, at least once, of suicide.

Because of this, we hear even small children say, several times, "I am in a problem, I am in depression, and I want to commit suicide." In reality, they have neither seen the stage of depression nor are they in it; they talk about the idea and go on enjoying their lives. Movies, songs, news, and storybooks are also responsible, because when children watch or read such stories, it hardens their belief that suicide is a solution to all the problems of the world.

As we all know, no two lives run parallel, and we all go through unique experiences. Some people, disappointingly, never give themselves a minute to correct their thoughts, and they embrace suicide when they have truly become slaves of their emotions, of the pain of their problems, of anything that blocks their happiness in the long run. They embrace suicide when the problem really does look the biggest to them. It has made them too rigid to believe that there is no problem in the world which does not offer the option of a solution. Everything in life comes with two faces: victory or defeat.

So I want to conclude with what we must do whenever this thought comes into our hearts or minds. Say clearly and loudly: there is no such thing in the world as suicide. I cannot harm myself; I don't even have the right to do this. If the thought still comes, then say to yourself: if I have to kill myself, why am I in a hurry? I will take some years to do it. Sooner or later, you will find that one day you laugh at the idea, and it only embarrasses you. A human life is short anyway, and I am not going to live forever, so why am I thinking of being killed by my own hand? Let me see the next generations. Remind yourself how you forgot all the bad experiences of the past, and you will forget this one too. And at last say: I am a lion, and a lion can't kill himself.
This comes from my own experience, and soon I will share what once made me think suicide could be the solution. So keep in touch with me. Thank you for your valuable time, and please comment if you have any thoughts on this post. It would be appreciated.
https://medium.com/@rahul8791gupta/why-suicide-778785a224ae
['Ragu Politics']
2020-12-04 13:32:59.154000+00:00
['Mindfulness', 'Depression', 'Suicide', 'Emotions', 'Sadness']
Selecting A Reliable Gift Delivery Tokyo — Why Should You Go Online
Birthdays come once every year, and naturally you want to give the best birthday gift to your loved one. The best place to shop for birthday gifts, or gifts for any occasion, is the internet. There are numerous virtual stores which cater to your needs and deliver your gifts where and when you need them to.

The online stores have birthday gifts for everyone and every age. Birthday gifts for women include perfumes, cosmetics, jewelry, totes, and more. A bath and body basket is a good choice, as most women like to be wrapped in a fresh, appealing scent. When picking jewelry, you don't need to go overboard and get her expensive precious stones; birthstones are in these days, and her birthstone set in a ring will really please her. Or you can give her cosmetics or totes endorsed by her favorite celebrity through an online gift delivery service in Tokyo. If she loves reading, why not get her the latest paperback by her favorite writer? Mother will love precious stone show pieces or an imprinted mug, while lingerie makes an intimate gift for your wife.

Children adore toys; the kind of toy you select depends on the child's age. While younger children love trains, dolls, stuffed toys, and the like, remote-controlled toys make the older ones happy. Many other types of toys are also available to send as birthday gifts to Japan.

Gone are the days when men accepted socks, lighters, gloves, and other utility items on their birthdays. Men today demand to be pampered with gifts; electronic gadgets are now preferred by men, and tickets to the latest games in the area are another thing they like.

If you are still unsure about what to choose, why not send flowers and cakes on that special day? Flowers make a unique and loving gift. A bouquet of fresh, beautiful flowers speaks eloquently of your love and fondness for the special person. Cakes are the perfect gift for children. Imagine their happiness when they cut the cake you have sent among friends and relatives.

The online stores not only offer you a wide range of gifts to choose from but also see that the gift reaches the person on time and in good condition. The online gift delivery stores in Tokyo provide speedy birthday gift delivery and offer different delivery services, such as same-day delivery, in case you couldn't choose a gift earlier, or the exclusive midnight delivery. Midnight gift delivery is very popular these days, and many people send birthday gifts using this service. With midnight delivery, your loved one receives the gift just as they enter the new year of their lives. There is also a next-day delivery service which delivers gifts the next day.

We want to make the birthdays of our loved ones as special as possible even if we are living far away, and online stores help us make occasions memorable with their wide selection of gifts and exclusive services which make sending birthday gifts to loved ones easier. Online shopping is so convenient that you can also send funeral flowers to Japan with just a click.
https://medium.com/@florajapan/selecting-a-reliable-gift-delivery-tokyo-why-should-you-go-online-558d020b3f4
['Flora Japan']
2019-10-16 06:00:28.677000+00:00
['Flowers']
A Statement Regarding Allegations Our Candles Are Unholy Monstrosities That Defy The Laws Of Nature
Auntie Jo's Candle Company began three years ago when two sisters from Vermont decided to sell quality hand-poured candles. However, over the past month, we have received hundreds of complaints about our organic soy wax candles, and we'd like to set the record straight.

Each one of our candles is lovingly hand-poured and requires daily affection from its owner. Neglected candles will burn more aggressively. If your candle won't start burning, try holding the candle at an angle so your match hits more of the wick. If your candle won't stop burning, we have no solution, but water makes it worse.

While there is a map scrawled on the back of every label on our candles, we do not know who put it there or where it leads. We sent our nephew Marcus to find out. We have not heard from him in months.

Burning our candles an hour a day will give you the ability to tap dance at an intermediate level. Pouring the wax from our candles into a bowl of water does reveal the number of times you threw something in the trash when the recycling was right there. Our 100% organic candles are edible. Eating them causes vivid hallucinations where you run into your high school bully at a Walgreens, but they don't recognize you, which is somehow worse than outright cruelty. We do not know who wrote Auntie Jo's Candle Cookbook nor how it became a New York Times bestseller.

Certain scents produce unadvertised effects, and we have listed them below:

Maple Honey: The smoke spells out the day you will fall into an irreversible coma. On the bottom of the jar is the date your family will pull the plug.

Lavender Garden: You gain the ability to breathe underwater, but you have to stay there.

Clean Cotton: Anyone who smells it loses their right to vote. (These are on backorder.)

Many people report seeing our candles in famous images such as the Tiananmen Square protests or the moon landing. We assumed this was a prank until we saw Washington Crossing The Delaware now featured our first president holding our Lemon Verbena three-wick. Our candles no longer obey the laws of physics, but burn time remains over 70 hours. There is no secret fragrance so powerful the smeller's soul immediately reaches Nirvana.

Our candles were used for the vigil of the missing 15-year-old, Cassidy Goorman. We did not know that burning our Gardenia and Passion Fruit candles (Cassidy's favorite scents) at the same time would create a rift in the universe causing everyone at the vigil to disappear. Nor did we know the second vigil mourning these victims would burn our Honeysuckle candle, which causes mile-wide sinkholes. This has caused a chain reaction of vigils and disappearances that has claimed thousands of lives. A vigil for all victims will be held next Friday. If you value your life, do not attend.

Our nephew Marcus has returned. He has aged a hundred years. He says he went to The Wax Place and saw Cassidy Goorman, along with the thousands of other vigil victims there, suspended in a wax prison. So, mystery solved.

A raven in the night delivered our return policy unto us. If you are not satisfied, you have thirty days to scale the nearest peak, bury your candle in the earth, and wait to be struck by lightning, which means the return has been accepted. You will find a new candle where you least expect it.

Auntie Jo's Candles apologizes for any hurt our candles have caused. As a gift to our valued customers, all future orders will include our Sugar Cookie apology candle.
Once lit, this limited-edition scent evokes freshly baked cookies and prevents the smeller from participating in class action lawsuits.
https://medium.com/slackjaw/a-statement-regarding-allegations-our-candles-are-unholy-monstrosities-that-defy-the-laws-of-nature-bc769f99eed6
[]
2019-12-19 13:46:01.216000+00:00
['Fire', 'Humor', 'Home', 'Candles', 'Handmade']
How to Set Up Redux-Thunk in a React Project
Today I will be discussing how to set up Redux in a React project. Yes, you have probably heard a lot of fuss about how hard it is to set up in a React project, all the boilerplate that comes with it, and so on. Yet Redux is very popular today. Why? Because it allows us to manage the global state within one application, where several components can access the same piece of data within the global state.

Let's talk about the installation steps.

Step 1: If you haven't installed Node already, run the following command in your terminal:

curl "https://nodejs.org/dist/latest/node-${VERSION:-$(wget -qO- https://nodejs.org/dist/latest/ | sed -nE 's|.*>node-(.*)\.pkg</a>.*|\1|p')}.pkg" > "$HOME/Downloads/node-latest.pkg" && sudo installer -store -pkg "$HOME/Downloads/node-latest.pkg" -target "/"

Step 2: Create a new React project with the following command, with redux-project as your project name:

npx create-react-app redux-project

Step 3: Modify your index.js and replace it with the required content (the embedded gists are not reproduced here; a consolidated sketch of these files appears at the end of this article).

Step 4: To install Redux, you need to install several Redux packages by running the following command:

npm install react-redux redux redux-thunk --save

Step 5: Modify your app.js. Here, I imported Provider and wrapped it around our application. In this case, that is ActionComponent, even though normally it would be a set of routes that route to different pages. What this means is that any component wrapped by the Provider HOC (higher-order component) will have access to the global Redux store. configureStore is a function that hooks the application to the root reducer (which connects all the reducers).

Step 6: Create store.js. Here, we call createStore from Redux, which gives us a default store to work with for the entire application. It takes in three arguments, one being the rootReducer, which contains all the reducers combined. The reducer is where you hold your global store data, as well as the logic that parses incoming commands which modify the existing state. There is another interesting piece to this: composeEnhancers. Even if you did not know what it is, just by looking at it you could guess that it is related to the Redux DevTools. That's right! Redux debugging tools become available in the browser once you include composeEnhancers in the createStore call. Here is how the Redux DevTools look in the browser:

Step 7: Create a folder called reducers under the src directory and create rootReducer.js. In a regular application, you can expect to have different reducers here, each with its own local state and data. Here we export one root reducer with local state from simpleReducer.

Step 8: In the reducers folder, create a file called simpleReducer.js. Here, simpleReducer contains result, fruitOne, and fruitTwo: global state data that will be modified and used as the state of the application changes. It takes different commands, such as "SET_FRUIT_ONE", and modifies the state depending on what action.payload holds as a value.

Where are all these commands coming from? They all come from an actions file, which contains a list of functions that send commands to the reducer to modify the state.

Step 9: Under the src folder, create a new directory called actions.
In the actions folder, create a file called simpleActions.js. Here, we call the dispatch function with a type and a payload. The "type" is the command that the reducer picks up to decide how to modify the global state. The payload is the data that will be put into the global state, depending on how the reducer handles it.

Now our Redux is set up! The question is: how do we modify the Redux store and get access to the store's data in our application? Remember that in app.js we have an action component wrapped by the Provider?

Step 10: Create a components folder under the src directory. Inside the components folder, create ActionComponent.js. Here, we use the connect function from react-redux, which allows us to wrap ActionComponent and connect it to the Redux store. connect, in general, takes two arguments:

mapStateToProps: this allows us to access global state variables
mapDispatchToProps: this gives us access to functions that can modify the global state

In this component, we connect the functions that update the global state through mapDispatchToProps. Afterward, we access them as attributes of props and call them. Each call dispatches a specific action (for example, set fruit one) with its own type (command) and payload. The action then goes to the reducer, which modifies the state. If you look at the Redux DevTools in Chrome, you can track all the commands and the status of the global state, as seen in one of the screenshots above.

How about getting access to variables from the state?

Step 11: Inside the components folder, create a component called FruitComponent.js. Similar to mapDispatchToProps, you expose the fields that you want to read from a particular reducer, and those variables become accessible as props. Do note that any change to the global state will cause mapStateToProps to run again and fetch fruitOne and fruitTwo. That's not ideal, because we only want to read fruitOne/fruitTwo when the page loads and when these two variables actually change in the global state.

For now, this is enough to guide you through setting up Redux. In future articles, I will talk about tools to memoize the data we read from the Redux store.
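Since the embedded gists from the original post are not reproduced in this text, below is a minimal, consolidated sketch of the files the steps describe. It follows the article's own names (configureStore, rootReducer, simpleReducer, SET_FRUIT_ONE, FruitComponent), but the exact shape (for example, the "simple" key under combineReducers) is an illustrative assumption rather than the author's exact code.

// store.js
import { createStore, applyMiddleware, compose } from 'redux';
import thunk from 'redux-thunk';
import rootReducer from './reducers/rootReducer';

// Fall back to plain compose when the Redux DevTools extension is absent
const composeEnhancers =
  window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose;

export default function configureStore() {
  return createStore(rootReducer, composeEnhancers(applyMiddleware(thunk)));
}

// reducers/rootReducer.js
import { combineReducers } from 'redux';
import simpleReducer from './simpleReducer';

export default combineReducers({ simple: simpleReducer });

// reducers/simpleReducer.js
const initialState = { result: '', fruitOne: '', fruitTwo: '' };

export default function simpleReducer(state = initialState, action) {
  switch (action.type) {
    case 'SET_FRUIT_ONE':
      return { ...state, fruitOne: action.payload };
    case 'SET_FRUIT_TWO':
      return { ...state, fruitTwo: action.payload };
    default:
      return state;
  }
}

// actions/simpleActions.js
// Each action creator returns a thunk that dispatches a command (type)
// and a payload for the reducer to process
export const setFruitOne = (fruit) => (dispatch) =>
  dispatch({ type: 'SET_FRUIT_ONE', payload: fruit });

export const setFruitTwo = (fruit) => (dispatch) =>
  dispatch({ type: 'SET_FRUIT_TWO', payload: fruit });

// components/FruitComponent.js
import React from 'react';
import { connect } from 'react-redux';

// Reads fruitOne and fruitTwo from the global state via mapStateToProps
const FruitComponent = ({ fruitOne, fruitTwo }) => (
  <div>{fruitOne} and {fruitTwo}</div>
);

const mapStateToProps = (state) => ({
  fruitOne: state.simple.fruitOne,
  fruitTwo: state.simple.fruitTwo,
});

export default connect(mapStateToProps)(FruitComponent);

With this wiring, index.js renders the app inside <Provider store={configureStore()}>, and any connected component can read from or dispatch against the shared store.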
https://medium.com/javascript-in-plain-english/how-to-set-up-redux-thunk-on-a-react-project-79b0c29c96db
['Michael Tong']
2020-12-26 09:19:15.230000+00:00
['Redux', 'Redux Thunk', 'Front End Development', 'React', 'JavaScript']
Dear Parents, Get Off Your High Horse
This is a story about my parents. Oh! What disgusting, stupid idiots they are! Oops, sorry, I can't talk about them "like that." Well, at least our religion, and our family dynamics, forbid it. It has to be that way, you know, because they are "god" in all forms, for they have provided for us, brought us up (also loved us, apparently). But let's look at the things they haven't done, or even tried to do.

Consider our perspective: Have any of you ever seen them listen to us? No, like really listen to us. Not just to the fact that we did something, but to why we did it. That there may be some underlying condition, or a valid reason, behind it. And even if there isn't, it's our time to make mistakes, learn, adapt. And just be happy. No, dear parents, don't get me wrong, I'm not saying you can't shout at us; just that we wish you would actually think, rather than merely feel, and then shout. You know why children turn out to be liars? To be sick brats hiding coke and wine in the closet? Well, of course you don't know why! If you did, I wouldn't be writing this right now.

Accept their mistakes: They're always right, always correct, always perfect. We're imperfect, right? We're the ones who do all the shit, make stupid decisions for whatever reasons that aren't important, and then get blamed anyway. We're put down in public, treated like kids, expected to act like adults; oh, look, the saying actually fits! Whether it's an educational decision or a healthcare one, they have the final say. Well, seems proper, doesn't it? They're the elders, they have more experience, and by the way they treat us, we practically "belong" to them, so why not? To the parents reading this, I say: we have our own bodies, our own minds. And while your parents didn't even include you in the conversation, just take a look at how you turned out. Really turned out.

I don't want to write more; it's like I don't have the energy to. Now I'm just sad. Thinking about my future, my life, which, I realise tonight, is simply in my parents' hands. They can crush it, or mould it into something that fits their own desires. I decide crushing it would be less painful.
https://medium.com/@anonymous-writes/dear-parents-get-off-your-high-horse-fa9cae36a8d0
[]
2021-06-01 03:13:08.405000+00:00
['Perspective', 'Parents', 'Teenage', 'Parenting']
BlockBank FAQ
What you need to know about the company building an AI-powered traditional mobile banking experience with seamless crypto integration.

BlockBank is building and developing the infrastructure and decentralized financial services for "Banking 3.0". Here you will find the frequently asked questions about the AI-powered financial app.

1. What is BlockBank? BlockBank is on a mission to offer an AI-powered, traditional mobile banking experience with seamless crypto integration. We aim to combine the best of the DeFi and CeFi worlds in one place, incorporating AI technology to bring financial empowerment to clients in a very simplified manner.

2. What is BlockBank's vision? BlockBank wants to simplify the user experience without compromising security, privacy, or decentralization. The complex nature of DeFi platforms requires elegant trade execution and earning strategies to avoid high gas costs and slow speeds. We optimize multi-step DeFi purchasing processes and add a powerful AI assistant to guide our users to make intelligent and informed decisions.

3. What is BlockBank's value proposition? BlockBank stands out because we are bridging DeFi, CeFi, and banking, all while adding AI technology on top. This is revolutionary in many ways and will be a game-changer for our users, as they will be able to make better decisions advised by our intelligent advisor.

4. Can I download BlockBank's application? Yes, you can. We launched our beta version in late 2019 and have been adding features based on our community's feedback. At the moment we have around 25k users and 100M TVL. Our users can currently enjoy a simple and convenient non-custodial wallet with a credit top-up function and multiple gift cards to cash out. The latter function is heavily used by our users in countries with many restrictions on the use of crypto. We are building out the next evolution of the app, which will bring in the full banking experience and bridge the DeFi gap. Along with that, our users will get access to a sophisticated robo-advisor to help them navigate this space. The developer team and AI team are working hard to bring as many features as possible into the V2 of the app.

5. BlockBank has 3 highlights: blockchain, DeFi, and artificial intelligence. How do these 3 complement each other? Blockchain, DeFi, and AI are indeed the building blocks of the infrastructure. DeFi is the underlying base for everything in crypto; it offers transparency, security, and freedom. CeFi brings us convenience and payments, and makes all processes easier and simpler. AI is what makes every user more qualified to do everyday transactions and make decisions. So these 3 verticals are being integrated into one to offer a more complete and integral experience. And let's not forget about banking. This is the missing piece for almost all users, who face major hardships trying to use their assets in everyday life.

6. What are the security measures implemented by BlockBank? Security is something that we take very seriously. We have an experienced internal team working around the clock to make sure all security measures are implemented, and everything will be audited by a third party before deployment (Zokyo or Certik). Additionally, our AI will be monitoring the smart contracts for known vulnerabilities; it is in the process of being tested.

7. What is the current status of BlockBank's application?
We are in the process of releasing an update of the current beta version to include Web3 support, enabling access to DeFi dApps in the wallet using funds stored in the non-custodial wallet. Our V2 application is being developed in parallel; it is an absolute rebuild from the ground up. This was necessary because we are integrating true banking into the platform and have many regulatory and compliance features to develop in order to be accepted by the central banks who will allow us to access the banking infrastructure. Our AI advisor is also in development in conjunction with our application, as it needs to be interconnected with each aspect of the platform to ensure it functions most efficiently. We are also adding the CeFi aspect to our application: a custodial wallet, fiat on- and off-ramps, and some other exciting features we will announce in the future.

8. What are the future plans for BlockBank? Payments and crypto licenses are already in progress. In the near future, you will see BlockBank launching the new version of our app and signing on strategic partners such as banks, wallets, and other projects.

9. Who are BlockBank's partners? Here are some of our amazing partners, and there are more to come.

BlockBank and Artificial Intelligence

1. Why do we need AI in financial services? Our AI assistant's patented technology brings analytical power equivalent to hundreds of highly skilled financial analysts to the palm of our users' hands, leveling the playing field between retail and institutional investors. It will monitor, gather, and provide data in real time based on investment preferences to maximize gains and minimize unnecessary losses. This kind of feature has been used extensively in traditional finance over the last few years, especially by companies investing in ETFs and index funds. Bringing this to crypto, and especially to a non-custodial wallet, will be game-changing.

2. What company is providing the AI technology? BlockBank has partnered with a company called Skael. They have true AI technology being used by Fortune 100 companies, including one of the top American banks. BlockBank has received not only the exclusive rights for crypto and blockchain but also the distribution rights for the technology in these sectors. Skael has over 70 employees and a technology team dedicated to BlockBank, which will be scaled up as needed. We are extremely excited that we will be able to offer AI services in any fashion a company would like to utilize the technology.

3. The AI assistant brings analytical power equivalent to hundreds of highly skilled financial analysts. How is this possible? This patented AI technology is being used in Fortune 100 companies, as well as one of the largest American banks. Its core function is to accumulate, analyze, learn from, and compile data in order to provide insights on market trends and investment strategies. It will be expanded over time to more interactive functions like trade execution or APY chasing; however, that will take more time, as the technology must learn how it interacts with each user, including their risk profile, current strategies, and the returns they wish to achieve.

4. The robo-advisor provides real-time data that will help maximize profits and minimize unnecessary losses, but is that data reliable and accurate during volatile and indecipherable markets?
Being in the crypto space for quite some time, you start to realize that from an analytical perspective the markets aren't as indecipherable as one might think. Volatile, yes, but due to the oracles, data sets, and other variables we are incorporating into the robo-advisor, we believe it will become the most trusted unbiased tool in our market. The processing power of our tool is immense, and because the data is being pulled in real time, it can effectively quantify all data to provide accurate results. This technology is already being used by Fortune 100 companies in the real world today and has proven to be up to 99.8% accurate.

BlockBank and Banking

1. What is the relationship between BlockBank and the banking platform? The services provided by banks, such as a fiat gateway, cards, and bank accounts, are essential for user convenience and everyday usability. BlockBank has two banking partners that will allow fiat top-ups to accounts and withdrawals. We are also negotiating with other banking institutions, as we have ambitious objectives for expansion. We are starting the process of obtaining our own licenses, which will allow us to provide lower fees and add extra use cases for $BANK tokens.

2. How does BlockBank enable the banking platform through the blockchain economy? The blockchain economy is growing rapidly: more institutional investors, hedge funds, VCs, and banks are exploring the space, and some have already joined. Simultaneously, the number of retail users is growing at record levels. In all this booming crypto-economy, we still do not have enough competition and tools that empower retail users. It is much easier for large institutions to get all the banking, custodial, and trading services, while retail users face challenges and pay higher fees. BlockBank is addressing this by providing a level playing field through an application that gives every user top-quality services, an AI assistant, and access to banking, all in one. Blockchain-based services also reduce our costs when it comes to KYC/AML, reporting, and audits (thanks to blockchain transparency!). This allows BlockBank to merge the best of two worlds into one convenient application.

3. BlockBank is building and developing the infrastructure and decentralized financial services for "Banking 3.0". Why is it so important to focus on a seamless user experience? A seamless experience is key for mass adoption. The biggest barrier for crypto is that it has never been simple; users who might be interested quickly get turned off by the level of complexity. BlockBank aims to integrate the best of the DeFi and CeFi worlds while keeping things simple for the average user. Our app will bridge both of these worlds, and everything will happen in the back end. Our application was launched back in 2019, and now we are coming back with all the feedback collected from our early adopters. We understand all those pain points and unnecessary complexities, which will be removed, creating an all-in-one application that will offer better UX and UI. Making it simple and coupling it with our AI will only give our users the upper hand.

BlockBank and DeFi

1. What is the relationship between BlockBank and DeFi users? BlockBank is on a mission to simplify the user experience without compromising security, privacy, or decentralization. The complex nature of DeFi platforms requires elegant trade execution and earning strategies to avoid high gas costs and slow speeds. We optimize multi-step DeFi purchasing processes and add a powerful AI assistant.
We are not discriminating against our users: they can continue using pure DeFi services without KYC/AML via our Web3 browser and non-custodial wallet. DeFi market share is growing, innovation is blossoming, and we support DeFi values; we have planned partnerships with other DeFi projects to provide a great set of tools, such as decentralized insurance or lending, to all users. Our patented AI technology will be greatly beneficial to every DeFi user on the BlockBank platform, providing information, advice, and better APYs.

2. How does BlockBank change the relationship between DeFi users and the banking platform through blockchain technology? Our goal is to remove the barriers between these two financial services. The DeFi economy and its users are only growing, and they need better options to use "traditional" banking if they want to. This is why we are developing a technological solution whereby our DeFi users, after going through internal KYC, will get access to all the perks and benefits of the banking features.

About the $BANK token

1. Why do we need the $BANK token and what are the benefits of holding? Being a $BANK token holder unlocks many different features in the application: different levels of information from the robo-advisor, exclusive offers, cashback on purchases using the card, insurance, third-party rewards, different banking tiers without a monthly fee, and reduced fees. On top of that, staking $BANK alone can earn you up to 20% APY.

2. $BANK can increase the APY from 10% to 30%. How can BlockBank achieve this high percentage? Staking the $BANK token alone can earn up to 20%. We are giving our staking users an incentive to stake their tokens with us, and we are happy to offer such a high percentage, as they are effectively taking these tokens out of circulation and should be rewarded. The additional 10% APY in $BANK comes from users who stake other assets on our platform for specific periods of time; the longer they stake, the more bonus $BANK rewards they can receive. We also have a buy-back strategy to ensure the reserves always stay at a specific level, so we consistently have enough $BANK to reward users moving forward.

3. When is the Initial DEX Offering (IDO)? BlockBank Public Sale IDO on Ignition: April 28, 2021, 11am UTC. Find out the details here. BlockBank Public Sale IDO on BSCpad: April 28, 2021, 8am UTC. Find out the details here. You can follow our progress in our Telegram chat here.

4. When will BANK be listed on big exchanges?
In terms of big exchanges, we are actively in discussions with them; however, our immediate listings upon release will be on Uniswap and PancakeSwap at the same time, as we are excited to be having an IDO on both ETH and BSC.
https://blog.blockbank.ai/blockbank-faq-b8b5d8ba5e59
[]
2021-05-31 06:43:04.281000+00:00
['Financial Services', 'AI', 'Blockchain', 'Blockbank', 'Crypto']
Hi Abbie!
Hi Abbie! I am so happy to hear from you. I can't remember where you live, but if you are anywhere around SF, let me know. On your topic, I agree wholeheartedly, and I haven't spoken about it much, since it is rather personal in some cases and revolutionary to the point of heresy in others. In the spirit of inquiry, I'll share my perceptions to add to the universe of said things.
https://medium.com/@therealphil/hi-abbie-c04390be4036
[]
2020-10-20 18:02:07.918000+00:00
['Genderfluidity', 'Gender Roles', 'Gender', 'Gender Identity']
What is Data Science?
The term Data Science comes with a lot of different interpretations these days, so it might be easier to start with what Data Science is not. Data Science is not about writing code or awesome visualizations. It's not about making complicated models either.

Data Science is about using data to create as much impact as possible for your company. Impact can be driven in multiple ways: it could be insights and visibility into the data, or data products which allow prediction and classification. To build such things, we need tools like statistical models, visualizations, and code. Data Science aims to solve real company problems using data. What tools do we use? It depends.

What's popular vs. industry needs

Some topics are more fun to talk about than others, so there is a misalignment between what's popular in the media and what is needed in the industry. The rise of big data sparked the rise of data science to support the need of businesses to draw insights from their massive unstructured data sets. As the term became more popular, the Journal of Data Science described it as: "[…] almost everything that has something to do with data: Collecting, analyzing, modelling… yet the most important part is its applications — all sorts of applications."

With the newfound abundance of data, companies are shifting from a knowledge-driven approach to a data-driven approach. The ideas in theoretical papers written decades ago about neural networks and support vector machines can now be implemented, supported by the availability of both big data and the hardware necessary to build machine learning models. As we have seen, the promise of machine learning and Industry 4.0 concepts has been in the news a lot lately.

Machine Learning and AI have dominated the media and overshadowed all other aspects of Data Science, like exploratory analysis, experimentation, and the skills we traditionally call business intelligence. This gives the impression that Data Science is research focused on Machine Learning and AI. Technically, having a cutting-edge machine learning model is part of the job, but companies have so many low-hanging fruits that in most cases they don't require machine learning models any more advanced than what has already been researched. This tells us that at the heart of Data Science is finding the right problems and solving them through data-driven insights. The process involves financial stakeholders and is guided by business domain specialists, who are in turn offered visibility into the data.

Data Science is more about:
Driving impact
Solving problems
Building strategies

And less about:
Advanced models
Data crunching

In the case of Merino, the architecture enabling data availability is vast and of high quality. I hope this article gives you a good idea of what capabilities we have and the future directions we could pursue. Feel free to send me any feedback and be part of the process :) Thanks for reading!
https://medium.com/merino-services-analytics-blog/what-is-data-science-23fd3511b730
['Aman Prasad']
2019-11-14 14:38:09.699000+00:00
['Data Science']
How to Benefit from Grails 4 Upgrade
If you are not familiar with Micronaut, then you may not understand how important the upgrade to Grails 4 is. But if you have already discovered the productivity of Micronaut, then you know there is a whole new world of tools and best practices to take advantage of. Let's start by mentioning some of the issues which may occur during the upgrade, and then I will share some of the ways to benefit from having a Micronaut parent context.

Grails 4 Upgrade Gotchas

There are detailed upgrade notes for Grails and GORM which you should read first. Beyond these, I would like to mention some particular issues we were facing while upgrading our applications. This is not an exhaustive list of all problems, but I will try to keep it up to date if we face other issues:

Transactions Now Required for all Operations

This one is actually already mentioned in the GORM for Hibernate upgrade notes. If you have any GORM operation happening outside of services annotated with @Transactional, then you will see TransactionRequiredException start to pop up in your console. There is quite a simple fix for this issue: just visit all your services which work with the database and annotate them with @Transactional (a minimal sketch appears below). If you have previously set Hibernate configurations such as hibernate.hibernateDirtyChecking: true or hibernate.flush.mode: AUTO, you may consider finally fixing the causes for these overrides and removing the settings. Otherwise, you may face some corner cases where transactional behaviour can't be guaranteed or where a property update is not propagated to the database. In that case, you should edit your application.yml as follows:

# XXX: only required if you have no choice to remove the overrides
# especially hibernateDirtyChecking seems to cause problems
# with propagating updates into the database
# (see above)
hibernate:
  # ...
  # To keep default behavior
  hibernateDirtyChecking: true
  flush:
    # To keep default behavior after Grails 3.2.4+ upgrade
    mode: AUTO
  # https://github.com/grails/grails-core/issues/11376
  allow_update_outside_transaction: true
grails:
  gorm:
    flushMode: MANUAL

Empty Strings are Automatically Converted to Null

If you have nullable String properties which you set to an empty string to avoid null, then your code will break, as empty strings will be set to null during property binding. You can revert to the old behaviour by using the following configuration:

grails:
  databinding:
    convert-empty-strings-to-null: false

Old Version of Micronaut and Missing Micronaut Dependencies' Versions

There is a limited number of Micronaut dependencies bundled in the Grails BOM (bill of materials, a dependency versions' blueprint), but it points to an old version of Micronaut (1.1.4 at the time of writing). If you want to take advantage of the latest version of Micronaut, as well as being able to declare Micronaut dependencies without a version, then you also have to add the Micronaut BOM to your Gradle build:

dependencyManagement {
    imports { mavenBom "org.grails:grails-bom:$grailsVersion" }
    imports { mavenBom "io.micronaut:micronaut-bom:$micronautVersion" }
    applyMavenExclusions false
}

The Micronaut BOM should come after the Grails BOM to override the versions from the Grails BOM. The micronautVersion property is declared alongside grailsVersion in gradle.properties. We are currently running with the latest version 1.2.6 without any issues.
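To make the transactions fix above concrete, here is a minimal Groovy sketch; BookService and Book are hypothetical names used only for illustration, not code from our applications:

import grails.gorm.transactions.Transactional

@Transactional
class BookService {

    // All GORM reads and writes in this service now run inside a transaction,
    // so they no longer trigger TransactionRequiredException
    Book createBook(String title) {
        new Book(title: title).save(failOnError: true)
    }
}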
Closure DSL inside application.groovy Configuration File

There is one side effect caused by having the Micronaut context initialised during the Grails application startup. Both Grails and Micronaut independently read the configuration files, including the script called application.groovy. The problem is that Micronaut adds the @CompileStatic annotation and some AST transformations to be able to compile the script statically. Therefore, if your application.groovy file contains references to properties not known to Micronaut, then the compilation fails and the application will not start. These could be appName and appVersion, which are injected into the script's binding by Grails automatically. You can replace these with calls to the Metadata class, such as Metadata.current.getApplicationName(). The other source of compilation errors are closures referring to some DSL unknown to Micronaut, typically GORM default mappings:

grails.gorm.default.mapping = {
    version false
}

One way to work around this issue is to move the closure definition into a method annotated with @CompileDynamic:

grails.gorm.default.mapping = defaultMapping()

@groovy.transform.CompileDynamic
private static Closure defaultMapping() {
    return {
        version false
    }
}

Development Configuration File

A similar problem arises if you were used to placing your application-development.yml or similar in the root folder of the project. Micronaut does not recognize configuration files placed outside source folders, so you need to move the file into grails-app/conf or src/main/resources.

Controller Actions Returning "null"

Controller actions should always either return a model or call render, respond, or a similar method as the last line. Returning null might produce cast exceptions:

class AwesomeController {
    def myAction(String q) {
        // do some work
        // this no longer works
        // return null
        render status: HttpStatus.OK
    }
}

Using Micronaut Beans

This is not an upgrade issue, but just a remember-me note for further Micronaut integration. Grails 4 still injects beans into services by name. On the other hand, Micronaut usually names beans with the fully qualified name of the class. You must annotate any bean you want to inject from Micronaut with @Inject:

class AwesomeController {

    @Inject
    AwesomeMicronautService ams // any name

    SomeGrailsService someGrailsService // strict naming convention

    // ...
}

As a side effect, you can now give the injected bean any property name, as the injection happens by type.

Building the WAR

There was a war task for bundling the WAR for Tomcat deployments in Grails 3.x. Thanks to the upgraded Spring Boot plugin, the bundling now happens using a bootWar task, so if your delivery pipeline relies on running the war task explicitly, then you need to update your build commands. Also, any Gradle customisation using the war task directly will stop working, as the task is disabled by default:

war {
    from('src/main/extra') {
        into('extra')
    }
}

One way to fix it is to enable the war task again manually and disable the bootWar task:

war.enabled = true
bootWar.enabled = false

Another way is to change your configuration to use bootWar instead:

bootWar {
    from('src/main/extra') {
        into('extra')
    }
}

Taking Advantage of Micronaut in Grails 4

The upgrade itself is generally pretty smooth, but you should not be satisfied with just the upgrade. You should consider some of the following steps to gain the most from the Micronaut integration in Grails 4.
Migrate from Grails Plugins to Micronaut Configurations

We at Agorapulse are continuously trying to migrate our internal and open-source plugins to Micronaut configurations. The reason is obvious: we want to share the code between the Grails applications and Micronaut functions. For example, we have migrated the Facebook SDK plugin into the Micronaut Facebook SDK library and the AWS SDK plugin into the Micronaut AWS SDK library.

Another opportunity is to migrate to libraries and capabilities provided by Micronaut:

Micronaut Redis can be a good replacement for the Grails Redis plugin
Micronaut Data can replace some simpler use cases of using either bare JDBC or GORM
Micronaut declarative clients can be used instead of RestBuilder

Switching from plugins to configurations actually took most of the time we spent on the migration.

Prefer Micronaut Services to Grails Services

There are several reasons why you should prefer writing services as Micronaut services instead of Grails ones:

Micronaut services avoid reflection, which should have a positive impact on startup time and memory footprint
Micronaut context resolution provides much richer capabilities to create conditional beans
Micronaut provides support for configuration properties beans
Micronaut provides a reflection-free validation framework usable in any bean
Micronaut services' names do not have to end with the Service suffix
You may extract the service code into a library shared with other Micronaut applications and functions

Also, if you migrate your services to Micronaut, it will be easier for you to migrate your whole application to Micronaut if you find that the Grails framework no longer meets your needs. In the case of Grails plugins, it will allow you to migrate them to configurations.
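To illustrate the Micronaut-service style described above, here is a minimal Groovy sketch; GreetingService is a hypothetical bean, and the annotation comes from javax.inject as used by Micronaut 1.x (exact bean discovery may depend on your package scanning setup):

import javax.inject.Singleton

@Singleton
class GreetingService {

    // No Service suffix required and no reflection involved;
    // the bean is resolved by type wherever @Inject is used
    String greet(String name) {
        "Hello, $name"
    }
}

A class like this can be injected into a Grails artefact by type with @Inject, as in the AwesomeController snippet earlier, and the same code can be extracted into a library and reused from a Micronaut application or function without changes.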
https://medium.com/agorapulse-stories/how-to-benefit-from-grails-4-upgrade-e4f4aae4a9d
['Vladimír Oraný']
2020-08-27 08:23:57.255000+00:00
['Groovy', 'Grails', 'Micronaut', 'Tech']
Case Study: Enjoy your VACCAYtion without crowd
This is my second case study article. This time I'm working on an application project for the Innovative Tourism Project Pitching 2021, Type 2: Digital Technology Innovation. I did the project with my college friends, Jeen and Plathong! The brief was to create an innovation that promotes Thai local tourism businesses to consumers. This project is supported by DASTA.

At that time, we were still in the COVID-19 situation, so we had to work together online. The tools that we mainly used were Google Slides, as a tool to brainstorm and do all the planning, and Figma, as a design and prototype tool. We started by listing all the tasks we had to do, with fixed deadlines. We were working as interns during the day, so we only had time together at night. I think that we did well on time management: we talked almost every day for 1-2 hours, so it didn't feel that exhausting, and we still managed to finish everything in time. We also agreed on the goal of this project, which was to build a project for our portfolios, not to win the prize.

DISCOVER

Problem Scope / HMW

First, we researched and brainstormed about Thai tourism and its problems in order to know what we should focus on. Over-tourism was the one we were most interested in, since we thought about what tourism would look like in the post-COVID situation and believed that people would still be afraid of crowded places. So here's our HMW: How Might We reduce tourist congestion and promote local businesses?

We came up with an idea about tracking the crowdedness of each location, using the same technology as Google Maps, which can detect location data from users and display it as traffic information. By knowing the level of crowdedness in advance, users can avoid places full of people. We also wanted to promote local businesses, so we planned to include a "Rating and Review" feature as well.

Online Survey

We created an online survey to gather general information about tourists in Thailand. We asked them general demographic questions as well as questions about their trips. We also asked about how they search for information when travelling, as well as which other travel applications they use.

Survey result

We didn't conduct an interview session as planned, due to some reasons, which I regret a bit. There are lots of things that could be improved; for example, the question asking respondents to rate their feelings toward tourist congestion on a number scale of 1-5 could be changed to asking them to describe their experience when encountering tourist congestion instead, as a number scale does not really communicate feeling, and different respondents may define each number differently. In addition, the problem I always encounter when doing user research is that most of the respondents are people related to me in some way, like friends on my social media or people in the same community, so the information I get usually contains bias.

Survey Result

Anyway, we learned that the majority of respondents don't like crowdedness and would find somewhere else to go.

DEFINE

Persona — UX Storyboard

Persona

We created a persona based on the information we got from the survey. We focused on Generation Z, which is the group most familiar with technology.

UX Storyboard

Then we created a UX storyboard to show the scenario of a user using our application. We moved to Figma at this step.
Lean UX Canvas — Competitor Analysis

Lean UX Canvas

As the contest seemed to focus on the business aspect more than the UX/UI aspect, we thought that it might be good to do a Lean UX canvas, as well as a competitor analysis. Some of the information in the canvas was refined and changed in the later process.

Competitor Analysis

We analysed the top 3 applications most used by respondents from our online survey, plus Google Maps.

IDEATE

Sitemap

First, I have to admit that I initially confused a sitemap with a user flow. What we made is actually a "sitemap", which is the structure of all the pages of the application.

Sitemap

Name & Tagline

It took us quite a long time to come up with a name and tagline. We brainstormed lots of possible phrases and words. I really wanted the name and tagline to start with the same letter, so that it would be easy to remember and sound cool, but we couldn't make that work.

Name and tagline brainstorming

Finally, we voted, and here's the name and tagline of our app: VACCAY — Flee and free from crowd

Logo & Visual Direction

Here's my favourite part! We searched for reference visual directions and agreed that the UI would be a clean interface with many beautiful photographs. We chose blue as the main colour, since it is the colour of the sea, one of Thailand's popular natural landmarks, and orange as the secondary colour.
https://bootcamp.uxdesign.cc/case-study-enjoy-your-vaccaytion-without-crowd-fd6aec5a546b
['Nawamon Chanprapun']
2021-09-03 05:45:28.055000+00:00
['Ux Case Study', 'Thailand', 'UX Design', 'UI', 'UX']
THE PARABLE OF RAGIONIER VITI: PLAY AND AUTHORITY
https://medium.com/breakfast-with-muesli/la-parabola-del-ragionier-viti-gioco-e-autorit%C3%A0-6442daf809d9
['We Are Müesli']
2020-12-16 08:01:20.455000+00:00
['Game Design', 'Giochi']
What Purpose and Benefits Does an Efficient Employee Database Software Cater To?
Employee data management is a crucial task within any organization, and one the HRM team must handle well. Maintaining employee data in files and folders, or even in a database, traditionally involves heavy manual intervention and manual record-keeping. Every update and modification depends on a designated person entering the details; a single missed entry, or one recorded with an error, can lead to serious information loss for the organization. In organizations with a large number of employees, managing data manually also becomes a burden for HRM members, who need to collect scattered data from various sources. Managing payroll, maintaining attendance records, tracking leave balances and more all become headaches for management. An efficient, modern employee database management software can solve all these issues. Being managed online is its outstanding feature: it enables easy access and support for all employee data management and record-keeping while reducing errors. What were the major fallbacks of traditional employee data management that efficient employee database software can easily address? The traditional system of managing employee data was entirely manual; HRM members needed to record even a single modification by hand. Manual entries sometimes lead to missed data, erroneous data or data redundancy. Records were also kept almost entirely on offline systems, which creates a dependency on that one system for data access. Switch to employee database software, however, and you gain many benefits that streamline your organization's workflows with quick data responses. Below we discuss some benefits of employee management software in an organization. Easy access to all employee data: all employee data is at your fingertips; with just a click, any employee's past and current records can be accessed easily and efficiently. Easy management of individual employee profiles: individual profiles support all sorts of employee data with ease, so past records, performance criteria, attendance records, payroll, incentives, leaves and leave-balance management become handy and efficient. Smart search for quick data extraction: employee database software speeds up retrieval of employee data through smart, automated search, so HRM members can easily find an employee and access their data. Easy bulk data imports: if you need to import data from other software, an employee database system handles it easily and accelerates the process. Easy data management and updating support: data management and modification are facilitated by the advanced, modern features of employee database software; just search for the record and hit update to easily modify the employee's information.
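To picture what "single-click" search and updating look like in practice, here is a generic, hypothetical sketch using Python and pandas. It is illustrative only, not tied to any particular employee database product, and all names, columns and file names are made up:

```python
import pandas as pd

# A tiny in-memory employee table; a real system would use a proper database.
employees = pd.DataFrame([
    {"id": 1, "name": "Asha Verma", "department": "HR", "leave_balance": 12},
    {"id": 2, "name": "Ravi Singh", "department": "Sales", "leave_balance": 8},
])

# "Smart search": partial, case-insensitive match on the name column.
match = employees[employees["name"].str.contains("ravi", case=False)]
print(match)

# One-step update: deduct a leave day for every matched employee.
employees.loc[match.index, "leave_balance"] -= 1

# Bulk import: append records from another sheet (hypothetical file name).
# new_hires = pd.read_excel("new_hires.xlsx")
# employees = pd.concat([employees, new_hires], ignore_index=True)
```

A real deployment would sit on a proper database with access controls, but this search-then-update flow is exactly what such software automates.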
https://medium.com/@barrownzerp/what-purpose-and-benefits-does-an-efficient-employee-database-software-cater-to-5f9aa2d8ccd9
['Barrownz Erp']
2019-11-15 10:02:33.728000+00:00
['Erp Software', 'Startup']
A Collection of Christmas and New Year Greetings for Those You Love
https://medium.com/g%C3%B6%C3%B6p-kampus/kumpulan-ucapan-peringatan-natal-dan-tahun-baru-untuk-mereka-yang-kamu-sayang-8fec96ebef00
['Gööp Kampus']
2020-12-24 08:10:07.732000+00:00
['Tahun Baru', 'Indonesia', 'Natal', 'Christmas', 'Ucapan']
How 2020 started vs. how it’s going
This piece was originally published in the December 18, 2020 edition of CAP Action's daily newsletter, the Progress Report. Subscribe to the Progress Report here. Photo by Tai's Captures on Unsplash "I'll be fierce for all of us, our planet, and all of our protected land." — Rep. Deb Haaland (D-NM), our next Secretary of the Interior and the first Indigenous person to ever be nominated for a cabinet role It's been…quite a year spent grappling with the worst pandemic in a century and the worst economic downturn since the Great Depression. Let's recap: IN THE NEWS High-ranking government officials and members of Congress across the political spectrum are starting to get vaccinated as doses become available. Speaker Pelosi received her vaccine in the Capitol earlier today, and Mike Pence streamed his vaccination live from the White House this morning. In the words of President-elect Biden, who is also set to get vaccinated in the coming days, "beating COVID-19 will take everyone doing their part." There's no consensus on a pandemic relief bill as of this afternoon, and Congressional leaders say they expect members will need to work through the weekend to reach a deal. Some members and senators are standing strong on sufficient stimulus checks (or, more accurately, "survival checks"), while Senator Ron Johnson (R-WI) has decided this is a good time for him to come out against any direct payments and feign concern over the deficit. Outside of the metaphorical Hill bubble, the limited potential of a $600 compromise stimulus check has inspired a number of memes from those of us who understand what it's like to have bills to pay in 2020. Another vaccine is on the way. The FDA gave a preliminary green light to the Moderna-NIH vaccine last night ahead of its official authorization, which is expected to be finalized today. If and when the vaccine is officially approved under the emergency use authorization, millions of doses could be shipped across the U.S. as soon as this weekend. Joe Biden will nominate Congresswoman Deb Haaland (D-NM) to be Secretary of the Interior. Haaland, who made history as one of the first Indigenous women elected to Congress in 2018, will do so again if confirmed as the first Indigenous Cabinet secretary in U.S. history. Her announcement was hailed by climate organizers, Indigenous activists, and a number of elected officials across the political spectrum. IN CASE YOU MISSED IT Multiple incarcerated people who are set to be executed before Trump leaves office have tested positive for COVID-19.
Corey Johnson and Dustin Higgs, who are both Black, are scheduled to be killed early next month. It's a horrific convergence of 2020's many cruel verticals: Trump and Barr's push to kill 10+ people on their way out of power; the fact that the coronavirus has run rampant through jails and prisons, killing thousands and spreading uninhibited through the facilities and surrounding communities; and the disproportionate impact of the pandemic — and of the justice system — on Black people in America. Efforts to sabotage the Postal Service earlier this year seem to have had an unintended consequence: Holiday shipping delays. Thanks to the diligent dismantling and underfunding of the agency by Trump-appointed Postmaster General Louis DeJoy, Americans are experiencing unprecedented delays in shipping times as USPS workers across the country are stretched thin amid a surge in online shopping this holiday season. WHAT WE'RE READING
https://medium.com/@capaction/how-2020-started-vs-how-its-going-69a96dd008cf
['Cap Action']
2020-12-18 20:30:53.906000+00:00
['Trump', '2020', 'Congress', 'Politics']
Trump denies police violence is a systemic issue
Photo by Charles Deluvio on Unsplash This week, a round table discussion event was held in Kenosha, Wisconsin. It was, well, awkward to say the least. President Donald Trump interjected when two Black pastors were asked whether police violence is a systemic issue. The pastors were James Ward and Sharon Ward, the pastors for the mother of last week's victim, Jacob Blake. Blake was shot seven times in the back as he was walking away calmly from a Kenosha police officer. Protests in Kenosha have since erupted as footage of the incident went viral. And this is where it gets cringeworthy. The pastors were asked whether, like other community leaders, they thought that police violence was a systemic issue when suddenly they were interrupted by President Trump. "I don't believe that. I think the police do an incredible job and I think you do have some bad apples." And he didn't make things any better with his follow-up comment, adding that, "You do have the other situation, too, where they're under tremendous pressure and they don't handle it well. They call it choking and it happens." He also suggested that his interactions with police were a basis for police violence not being an issue at all. "No, but I don't believe that at all," he said. "I've met so many police. I have the endorsement of, like, so many, maybe everybody." So why the interruption? Media reports suggest it isn't clear whether the President understood if the question was directed towards him or towards the pastors trying to heal the Blake family. Earlier in the conference, the pastors told Trump they wanted to work towards unity and policing reform. The pastors tried to keep a positive narrative. "We believe that we can help to listen with empathy and compassion to the real pain that hurts Black Americans, but we want to be of service to you and to our nation to do whatever we can to bring true healing, true peace and to really seek God's very best in our nation," James Ward told Trump after offering a prayer. When Trump was actually asked whether he thinks systemic racism is a concern and that not all protests are violent, he kept the same narrative, heavily implying that Black folks are scary murderers. Trump responded, "Well, you know you just keep getting back to the opposite subject. We should talk about the kind of violence we've seen in Portland and here and other places." The final blow was truly devastating. He added,
https://medium.com/the-juicer/trump-denies-police-violence-is-a-systemic-issue-c446c8318390
['The Juice Factor']
2020-09-03 09:49:20.619000+00:00
['Racism', 'Trump Administration', 'Trump', 'Police Brutality', 'Police']
Open Office Hours — the 1st Edition Story
Open Office Hours — the 1st Edition Story Open Office Hours, which took place last week, brought together Australia's most active technology investors with a diverse array of startups in a collective effort to close the funding gap for early-stage, and in particular, under-represented founders. To say the response from the Australian startup community has exceeded our expectations would be a massive understatement. When we first started planning the event, our initial hope was that 20 investors and 20 founders would connect and have a great experience that would open up pathways for broader funding for startups. We were way off. The Australian community of founders and investors responded with such incredible passion and numbers that we expanded the program. We ended up with 40 investors (from 30 VCs and angel groups) and 197 ambitious founder applicants, of whom 75 were selected. What better evidence of the need for such an initiative than this overwhelming response from both investors and founders? Thanks to this collaborative effort, we created 150 connections across 80 hours of mentoring sessions — what our Inclusion Advisory Board member, Shahirah Gardner, likes to call: "The Giant Warm Intro". 74% of founders who applied for Open Office Hours identified as under-represented in one or more ways, including: 🙋🏾‍♂️ 37% migrant founders 🙋🏼‍♀️ 27% women founders 🙋🏽‍♀️ 21% non-anglo founders 🙋🏼‍♂️ 3% LGBTQI+ founders 🙋🏻‍♀️ 2% regional founders 🙋🏽‍♂️ 1% Aboriginal or Torres Strait Islander founders Founders who made it through had the opportunity to meet remotely with two investors where they could ask for advice on anything, test their pitch or find out about the fundraising process in a supportive, collaborative environment. Founders who applied but didn't make it through to this edition of Open Office Hours due to limited capacity will also have a chance to meet investors in the new year. Outcomes Following each mentoring session, investors recorded their advice and feedback to help founders on their startup journey. Investors told us that they could see themselves investing in more than half (53%) of the founders and their startups either now or in the future 💸 Given that the main objective of Open Office Hours is to create more connections and funding opportunities for under-represented founders, we think this is an incredible initial result, but recognise that this is just the beginning. Following Open Office Hours, Cara Waters spoke to Paul Naphtali from Rampersand about the diversity funding gap in Australia and the work needed to be done by the entire ecosystem to make a change. You can read the full article here.
https://medium.com/rampersand/open-office-hours-the-1st-edition-story-ca38e4ac9864
['Taryn Pieterse']
2020-12-18 03:29:47.845000+00:00
['Startupaus', 'Diversity In Tech', 'Venture Capital']
No matter who wins the election, America loses
Wealth inequality has gone too far. People are pissed off with the current state of this country, and for good reason. The pandemic revealed just how out of control inequality has become within this country. Like a runaway train, it just keeps on going, seemingly with no end in sight. For all of the talk of keeping America united, nothing could be further from the truth. Immediately after the pandemic started, the shock to the American economy became unfathomable. The American economy was defeated by an enemy that it was grossly unprepared for. As a result, millions of people across the country are struggling to make ends meet and are in constant fear for their financial future. Unless you're in the upper echelon of a big-name stock market company; then you're doing just great. Ever since March, the economy has been in the midst of a K-shaped recovery. When the pandemic first broke out, Mr. Trump introduced the CARES Act as a rapid response to the crashing economy. This financial stimulus gave the markets exactly what they needed to persevere; however, its uneven distribution led to even greater problems. While many American households were given increased unemployment benefits and a one-time $1,200 check, it paled in comparison to the millions that big-name companies received. This resulted in one of the greatest expansions of wealth inequality yet. It's mind-boggling how so few people control so much of the country's money. At this point, you may be wondering: what the hell is a K-shaped recovery? Basically, if you created a line chart, it would go something like this: First, the economic markets spiral downwards, represented by a line that simply goes down. Then, once the economic recovery begins, that line splits into two different lines: one that trends upwards, and another that gradually trends downwards, thus forming the aforementioned K-shape. The line that goes upwards represents those with the wealth and resources to outlast a global pandemic and recession. This means those with access to cash, money-making assets, jobs that allow WFH (work from home), and other traits that enable wealth building. The line that leads downwards is the vast majority of those without those resources: people who lack wealth, a steady job, WFH access, and who face a host of other financially constraining factors. Unfortunately, this line encapsulates much of the middle and lower classes. Photo via CFM Advocates. Photo via JP Morgan. Long story short: the rich have become significantly wealthier while average, middle-class workers have become increasingly broke. Wealth inequality has become so blatantly obvious that it can no longer be hidden. The wealthy elite of this country have created a system where only they can win. Regardless of who wins the election, it's highly unlikely this will change. In fact, it will most likely get worse. If President Trump wins, you can expect more of the same, as he will be free to enact whatever policies he wants without a care in the world. No more worrying about re-election, since he'd be on his second term. Since Donald Trump is a natural capitalist, you can expect wealth inequality to become dramatically worse. Meanwhile, if Biden wins, he intends to redistribute wealth by raising taxes on the rich and the companies that enable them.
While this could become a boon for the middle class and America’s infrastructure, his opposition will no doubt use this as a way to blame him for America’s increasing budgetary issues and spin Trump’s supporters against him even further for the next election.
https://medium.com/an-injustice/no-matter-who-wins-the-election-america-will-lose-76b74140f2
['Dayon Cotton']
2020-11-04 06:21:52.851000+00:00
['Politics', 'Society', 'Equality', 'Elections', 'Election 2020']
Be in the Know About Living a Life in Vancouver
Getting lost at some point in your life in Vancouver may just happen without you knowing, be it with your career, with your business, with your relationships, or even with yourself. So before that ever happens, you gotta meet the Georgia Straight. Trusted by over 800K unique readers and visitors per week across print and web, and an integral part of the active urban West Coast lifestyle, the Georgia Straight has set an unprecedented record for credibility, keeping people up to date with its weekly publications on the latest news, lifestyle, and entertainment in Vancouver for 50 years and counting. The Georgia Straight is your go-to community for Vancouver's most comprehensive listings of entertainment, activities, and special events. They can also help with your marketing needs if you want to let the people of Vancouver, and the world, know that you or your business has something great to make life in Vancouver the best! Regular weekly coverage of the Georgia Straight also includes news, tech, arts, music, fashion, travel, health, cannabis, and food. The Georgia Straight also produces a series of entertaining and informative reader polls, which are included in their issues; Best of Vancouver, which features educating and inspiring real-life experiences of people living in Vancouver; and the Golden Plates Awards, featuring the best of the best culinary discoveries to satisfy your gustatory cravings. The Georgia Straight is not just a publication that updates you on what's happening in Vancouver as you sleep; it can also be a personal confidante you can rely on when you are looking for a career job in Vancouver, finding a new home to settle in, getting to know Vancouver's economy for your business plans or the latest research in finance, or when you need guidance in your daily life in Vancouver. It excites you with contests where you can show off what you've got, and, touchingly, you've got someone to whom you can vent anything that weighs hard on your chest through their confessions page. The Georgia Straight is a sure way to advertise your promotions for your business or startup if you are an entrepreneur in Vancouver. You can contact the Georgia Straight using the information below: Contact the Straight Vancouver Free Press Publishing Corp. 1635 West Broadway Vancouver B.C. V6J 1W9 Phone 604–730–7000 Fax 604–730–7010 Distribution 604–730–7087 Subscriptions 604–730–7000 [email protected]
https://medium.com/@canadaunleashed/be-in-the-know-about-living-a-life-in-vancouver-894a53f9dc22
['Canada Unleashed']
2020-12-17 15:46:19.168000+00:00
['Life', 'Vancouver', 'Best Of', 'Publication']
Connecting People with Nature
In this extract from Grassroots Stewardship, author F Stuart Chapin, III explores the benefits of nature on people, and how to connect the two. Wangari Maathai, the daughter of a Kenyan sharecropper, collected water for her mother from springs protected by the roots of trees. Her grandmother taught her that the fig tree near her home was sacred and should not be disturbed. Although it was unusual for girls to attend school, her brother suggested it, her parents agreed, and she graduated near the top of her class. She was the first woman in eastern and central Africa to receive a PhD. In the 1970s, when she was in her mid-30s, she became aware of Kenya's ecological decline and its link to rural poverty. Watersheds were drying up as native forests were cleared for farms or plantations of fast-growing, water-demanding exotic trees. "I was hearing many rural women complain that they didn't have firewood, that they didn't have enough water. So why not plant trees, I asked the women." She thought that planting native trees, especially fruit trees, might replenish soils, provide fuel wood, and improve nutrition in rural communities. Maathai began by raising tree seedlings in tin cans in her backyard and engaging rural women in planting them. Government foresters initially resisted the idea because they didn't believe uneducated rural women could plant and tend trees. Little by little, the idea took root and blossomed into the Green Belt Movement. Maathai showed women how they could use their existing knowledge to gather seeds from the forest, plant them, and tend the seedlings. Eventually, with help from Kenyan and United Nations groups for the support of women, she was able to pay women 10 cents for each tree that survived and grew. This gave the women a sense of independence and empowerment. The Green Belt Movement spread across East Africa, and the 51 million trees planted by these women have significantly reduced land degradation in the region. When Maathai accepted the Nobel Peace Prize in 2004, she said that the purpose of her program was to help people "make the connections between their own personal actions and the problems they witness in their environment and society. . . . With this knowledge, they wake up — like looking in a new mirror — and can move beyond fear or inertia to action." Clearly, even people with minimal material wealth and security can work together to do amazing things that simultaneously improve their lives and the environment. As Wangari Maathai showed, neither ecosystem health nor human well-being is sustainable by itself because each depends on the other. People shape nature through exploitation, impacts, and stewardship; and nature shapes people through its benefits to society (ecosystem services). Analogous to the social and commercial services provided by government and business, ecosystem services include the following: • Provisioning services: the products that are directly harvested from ecosystems (such as food, fiber, and water) • Regulating services: the capacity of ecosystems to buffer disturbance and shape interactions among ecosystems (regulating the climate, cleaning our drinking water, reducing disease risk, and dampening storm waves and flooding) • Cultural services: nonmaterial benefits (cultural identity, spiritual connections, aesthetics, recreation, and ecotourism opportunities) Many of these services are essential for short-term human survival, and they all enhance people's quality of life.
As people seek to meet their needs and desires, they do things that affect ecosystems. These actions range from exploitation and impacts, such as pollution, which reduce the capacity of ecosystems to provide services to society, to stewardship, which sustains or enhances these capacities. If people degrade nature to meet their short-term desires, ecosystem services decline. This erodes society's capacity to meet its needs. On the other hand, if human well-being is severely compromised, people have no choice but to meet their immediate needs by taking whatever they can get from nature. In either case, both nature and people suffer. How do the positive and negative effects of people on ecosystems balance out at the global scale? In 2005, 1,300 ecologists from around the world completed a 5-year global assessment of the impacts of ecosystem changes on human well-being. The resulting Millennium Ecosystem Assessment (MEA) concluded that two-thirds of Earth's ecosystem services were being degraded or used unsustainably in the global aggregate. One-fifth of ecosystem services had not been systematically altered. Only three (13%) actively managed services (crops, livestock, and aquaculture) were increasing. A 2019 update of this synthesis shows that 75% of ecosystem services are now being used unsustainably. These statistics clearly show that the services provided by most ecosystems are declining in their capacity to meet human needs. Most of the declines in ecosystem services result from human impacts on ecosystems. Pollinators that are essential for fruit production are declining because of insecticides and pollution. Human introductions of invasive species have reduced the capacity of many grazing lands to support cattle. Drainage of wetlands for agriculture and development has reduced their capacity to remove pollutants from agricultural runoff and to buffer the impacts of heavy rains on downstream flooding. By identifying the major causes of change in each ecosystem service, the MEA provides clear guidance about ways to foster their recovery — simply reduce the pressures that cause their declines. This approach is generally a safe starting point for reducing environmental degradation because it addresses the root causes of problems. The necessary steps are generally well known by resource and restoration managers — reducing rates of forest and wetland loss and restoring or replacing those forests and wetlands that were previously eliminated. Reducing the declines in ecosystem services is therefore not conceptually complex. The difficulties arise largely from trade-offs — when gains in some ecosystem services, such as agricultural production, come at the expense of losses of other services, such as climate regulation by forests.
https://medium.com/science-uncovered/connecting-people-with-nature-d96f79ed10f4
['Oxford Academic']
2020-07-29 11:01:01.448000+00:00
['Oxford University Press', 'Ecology', 'Nature', 'Grassroots Stewardship', 'Biology']
Automated Birthday Wisher using Python
Hello guys, how are you all doing? Ashutosh here again with another Python mini project. Are you bored of sending birthday wishes to your friends (well, some people are), do you forget to send wishes to your friends, or do you want to wish them at 12 AM but always fall asleep? Why not automate this simple task using our friend, Python! So, let's get started… The first thing you have to do is install pandas on your system using the pip install pandas command. We shall use the datetime module and the SMTP library here to send the mail. Also create an Excel sheet containing Name, Email, Contact, Birthday and Year columns. Let me show you my code: Wisher program The first thing we do is import five libraries: pandas, datetime, smtplib, requests and win10toast. Then we put in our Gmail credentials in order to log in. We define a sendEmail() function which starts a Gmail session, sends the email and quits the session. For the SMS part, we must have an account on www.fast2sms.com, from which we get an API key. This API key is used to send SMS to mobile numbers through your fast2sms account. We have a sendsms() function which verifies the API key and sends the SMS. In the main function, we read the Excel sheet and match today's date against each birthday. If there is a match, we call the sendEmail() and sendsms() functions, and we also record the current year in the Excel sheet so the same person isn't wished twice. We have also used ToastNotifier from the win10toast library to show desktop notifications once the e-mail and SMS have been sent successfully. To automate the task, we use Task Scheduler in Windows. I have mentioned all the steps to automate the task in my Github repository: I will also soon be publishing it on GeeksforGeeks. If you liked my work, please support it by giving claps on Medium or starring my Github repository. Thanks in advance 🤗 If you face any problem or want to rectify anything in the above program, feel free to contact me: Ashutosh Krishna ([email protected])
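Since the code screenshot doesn't survive here, below is a minimal, hedged sketch of the flow described above. It assumes an Excel file named birthdays.xlsx with the columns listed earlier; the credentials are placeholders, and the fast2sms request format is an assumption to be checked against their docs, not the author's exact code:

```python
import smtplib
import ssl
from datetime import datetime

import pandas as pd
import requests
from win10toast import ToastNotifier

GMAIL_USER = "[email protected]"          # placeholder credentials
GMAIL_PASSWORD = "your-app-password"
FAST2SMS_API_KEY = "your-fast2sms-api-key"   # from your fast2sms account


def send_email(to_address, subject, body):
    """Start a Gmail SMTP session, send the mail, and quit the session."""
    context = ssl.create_default_context()
    with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as session:
        session.login(GMAIL_USER, GMAIL_PASSWORD)
        session.sendmail(GMAIL_USER, to_address, f"Subject: {subject}\n\n{body}")


def send_sms(number, body):
    """Send an SMS via fast2sms (endpoint and params are an assumption)."""
    response = requests.post(
        "https://www.fast2sms.com/dev/bulk",
        data={"message": body, "numbers": number, "route": "p"},
        headers={"authorization": FAST2SMS_API_KEY},
    )
    response.raise_for_status()


def main():
    df = pd.read_excel("birthdays.xlsx")  # columns: Name, Email, Contact, Birthday, Year
    today = datetime.now().strftime("%d-%m")
    year_now = str(datetime.now().year)
    toaster = ToastNotifier()

    for index, row in df.iterrows():
        birthday = pd.to_datetime(row["Birthday"]).strftime("%d-%m")
        # Wish only if the birthday is today and we haven't wished this year yet.
        if birthday == today and year_now not in str(row["Year"]):
            text = f"Happy birthday, {row['Name']}! Have a wonderful year ahead."
            send_email(row["Email"], "Happy Birthday!", text)
            send_sms(str(row["Contact"]), text)
            toaster.show_toast("Birthday Wisher", f"Wished {row['Name']}")
            # Record the year so a second run today doesn't wish them again.
            df.loc[index, "Year"] = f"{row['Year']} {year_now}".strip()

    df.to_excel("birthdays.xlsx", index=False)


if __name__ == "__main__":
    main()
```

Scheduling this script to run daily with Windows Task Scheduler, as described above, completes the automation.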
https://medium.com/@ashutoshkrris/automated-birthday-wisher-using-python-b39e09f8c2c4
['Ashutosh Krishna']
2020-08-21 12:50:37.373000+00:00
['Sms', 'Birthday', 'Wishes', 'Python Automation', 'Email']
Atlantis Touchless Tea Coffee Vending Machine in 2 Lane
Atlantis is amongst India's leading manufacturers of water dispensers and hot beverage dispensing and vending machines: https://www.atlantisplus.com/
https://medium.com/@shopatlantisin/atlantis-touchless-tea-coffee-vending-machine-in-2-lane-98d595898ede
[]
2020-12-22 06:38:00.300000+00:00
['Instant Coffee Machine', 'Vending Machines', 'Tea', 'Coffee', 'Tea Coffee Vending']
Declarative programming to the rescue
Since the moment the first machine-language program saw the world, thousands of programming languages have been invented with the aim of simplifying programming and bringing machine language closer to human language. And now we are closer to that idea than ever. With each newly invented compiler we have come a step closer, abstracting further and further away from machine language. Programming paradigms Looking at the development of programming languages, we can divide it into certain stages. Each stage significantly improved both the syntax and the capabilities of the languages. Such stages are called generations. Each generation of programming languages aims to provide a higher level of abstraction over computer internals, making the languages more user-friendly, powerful, and versatile. Today we can identify five main generations that form two distinct groups: 1. Low-level programming languages: 1.1 First generation (1GL) - machine code; 1.2 Second generation (2GL) - assembler. 2. High-level programming languages: 2.1 Third generation (3GL) - machine-independent languages; 2.2 Fourth generation (4GL) - domain-specific languages; 2.3 Fifth generation (5GL) - logical programming. Functional programming languages, as well as object-oriented and procedural languages, are the main representatives of the third generation. Within this single generation they form two independent groups: imperative and declarative. And the second one, declarative, lies at the root of the next generations: domain-specific languages and logical programming. In imperative programming, the code consists of successive calls to certain blocks (operations), somewhat reminiscent of the earlier level of assembler languages. Declarative programming operates mainly with syntactic constructs that form code quite similar to a set of sentences in human language. That is, in imperative programming the programmer specifies the concrete operations the computer must perform, while in declarative programming they describe the logic of the algorithm without resorting to constructs such as if/else, for and while. Usually a declaratively written program reads like a sentence in ordinary, natural language; after reading it, you can quickly understand the program's workflow. Let's dive into an example of calculating the average of a set of numbers, written in a C-like language (C#):

1. Procedural

public static int AverageImperative(int[] arr)
{
    var total = 0;
    for (var i = 0; i < arr.Length; i++)
    {
        total += arr[i];
    }
    return total / arr.Length;
}

2. Declarative

public static int AverageDeclarative(int[] numbers)
{
    return numbers.Sum() / numbers.Length;
}

Declarative programming There is a significant difference between the approaches to writing code in these two examples. While in the first case you need at least a basic understanding of C-like languages and of constructs like for, the second can be understood even by non-technical folks! It reads just like a regular sentence: return the sum of the numbers divided by the numbers' length. Using the power of C# extension methods we could even write something like this:

public static int Average(int[] numbers)
{
    return numbers.Sum().DividedBy(numbers.Length);
}

And when we start to think in this direction, we arrive at the functional way of programming things. For example, we could express the same statement as this set of functions using F#.
let average numbers = sumOf numbers |> dividedBy numbers.Length

And in my opinion, this looks just impressive! Even if the language we use does not offer high-level constructs or libraries for declarative programming (like LINQ in C#), using the functional (declarative) approach we can hide all the imperative stuff inside 'helper' functions and then use them inside our business logic. This simple change in your codebase can completely change the way your code looks and feels. Such a little change can help non-technical folks (business analysts, product managers) read your code; you could even add some code blocks to your documentation and be sure they will be understandable for every team member. It could be the first step towards DDD, which could be the topic of another story… SQL and beyond The most interesting thing is that this idea of structuring code in a declarative way is not new at all! Even good old SQL was invented with this idea in mind. I think this is the main feature that has allowed the language to stay afloat for almost 35 years since the first standardized version, SQL-86! Structured Query Language, or SQL ("sequel"), belongs to the fourth generation of programming languages: domain-specific languages. Using SQL, we do not think about how our query will be executed; we only describe what we want to get in the end. Let's look at this simple query:

SELECT Name FROM Users WHERE Name LIKE '%Vlad%';

Writing such requests to our DB, we do not think about how these instructions are executed under the hood. If we had to write such queries in an imperative C-like language, we would get the following:

resultTable = new Table();
for (i = 0; i < rows.count; i++)
{
    if (rows[i]["Name"].Contains("Vlad"))
    {
        resultTable.Add(rows[i]["Name"]);
    }
}
return resultTable;

So, a reasonable question arises: 'Why not use SQL all the time?' The problem with perhaps all the languages of the fourth generation is that they have very limited applicability, because we cannot write SELECT-like statements for every situation… At the top of these generations sits the fifth: logical languages like Prolog, but for now they share the same problems as the domain-specific languages and are used mainly in artificial intelligence. Conclusion Using a declarative/functional approach can make your domain logic more readable and understandable for every team member, even if you use a completely procedural or OOP language, simply by composing the imperative parts into well-named functions! (For a tiny illustration in another mainstream language, see the Python sketch below.)
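As a hedged aside (this example is ours, not the original author's, whose snippets are in C#, F# and SQL), the same composition trick in Python:

```python
def sum_of(numbers):
    """The imperative detail lives inside a well-named helper."""
    total = 0
    for n in numbers:
        total += n
    return total


def divided_by(numerator, denominator):
    return numerator / denominator


def average(numbers):
    # The business logic now reads like a sentence:
    # the sum of the numbers, divided by how many there are.
    return divided_by(sum_of(numbers), len(numbers))


print(average([2, 4, 6]))  # 4.0
```

The imperative loop still exists, but it is quarantined inside a helper whose name carries the meaning, which is exactly the point of the conclusion above.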
https://medium.com/@vrybnikov/declarative-programming-for-the-rescue-f62524fb2e64
['Vladyslav Rybnikov']
2020-12-21 20:08:22.499000+00:00
['Fsharp', 'C Sharp Language', 'Declarative Programming', 'Programming Languages', 'Programming']
Q&A: How Pew Research Center surveyed nearly 30,000 people in India
(Pew Research Center illustration) A major new Pew Research Center study examines religious beliefs and practices in India. In order to represent the views of Indians from a wide range of backgrounds, the Center conducted the largest single-country survey it has ever fielded outside of the United States. The study took more than three years to complete. It included extensive background research, consultations with academic advisers, preliminary qualitative work and the development of a comprehensive questionnaire and sampling strategy. The project culminated with face-to-face interviews with 29,999 Indian adults conducted from November 2019 to March 2020. In this Q&A, research methodologist Martha McRoy explains how the research team conducted this project and discusses some of the challenges they faced along the way. You can also watch the below video explainer of the process (and for a version that’s been translated into Hindi, see here). Why was the sample size for this survey nearly 10 times larger than the sample for the Center’s other recent surveys of India? The Center’s most recent surveys of India had much smaller sample sizes than this project. Our spring 2019 Global Attitudes Survey relied on interviews with 2,476 Indians, while our 2019–2020 International Science Survey included 3,175 respondents. A key difference is that these studies “only” required nationally representative samples, meaning that our data needed to represent the general population of the country. What sets this study apart — and why we needed the larger sample size — was our aim of examining differences between geographic regions in India, as well as differences within and across major religious groups regarding their beliefs, practices and sociopolitical views. We didn’t just want to know what Indians thought, but also what members of each religious group in India thought. Our earlier surveys of India were large enough to examine demographic differences in public attitudes, such as by gender and age. But the larger sample size in the new study allowed us to look at these demographic breaks within many of the religious groups and regions of the country, too. The substantial sample also allowed for comparisons across the six major religious groups in India — Hindus, Muslims, Sikhs, Christians, Buddhists and Jains — even though some make up less than 1% of the nation’s population. What else did you need to consider to study religious groups and geographic regions in India? Not only did we need to increase our sample sizes, but we also needed to target where we would conduct interviews in order to have large enough samples from the six largest religions and the six geographical regions of the country: North, Central, East, West, South and the Northeast. In some instances, it was easy to know where to go. For example, while Sikhs make up about 2% of the Indian population overall, we aimed for Sikhs to make up roughly 3% of our sample so we could do an in-depth analysis of the religion. Sikhism is the majority religion in the state of Punjab, so we were able to conduct more interviews with Sikhs by increasing the sample allocated to Punjab. Similarly, the Northeast region accounts for only 3% of the total population of India, but to make inferences about the residents of that region, we needed to allocate 5% of our study sample there. As with Sikhs in Punjab, we could easily select more sampling points in the Northeast region. 
Gathering representative views of some of the other major religious groups in India involved a more complex approach, as these groups are not always clustered in a specific area and make up very small percentages of the country’s total population. For example, we aimed for approximately 2% of our total sample to be Buddhists — even though they only make up 0.7% of the total population — and another 2% of our sample to be Jains, who account for only 0.4% of the total population. We also wanted to keep a probability-based design, which means there is a known and calculatable chance of every adult in the country being selected for our study. This also allows us to test for statistical differences. To keep a probability-based design, we did not recruit people to the survey based on their religious affiliation (quota sampling) or based on whether another survey participant recommended them (snowball sampling). These approaches could have biased our results since respondents would have been hand-selected and would not necessarily be representative of the entire country or all members of a particular religious group. Instead, we used a probability-based methodology called “composite measure of size” that increased our chances of selecting respondents from Muslim, Buddhist, Christian, Sikh or Jain backgrounds as compared to the natural distribution we’d expect from the general population. To better explain this composite measure of size, let’s pretend we had two villages of 100 people each, and we were going to select one of them. Typically, each village would have an equal chance of selection: 50%. However, we know that the first village has 10 people of a minority population and the second has 20 people of this same group. Using this information, we modify the probabilities of selecting each village to represent the proportions of these minority populations instead. The first village’s chance is now 10 out of 30, or 33%, while the second village’s chance is 20 out of 30, or 67%. Now imagine adjusting these figures for six different religious groups instead of just one across an entire country! As you’d expect, this methodology for sampling respondents of different attributes at higher rates than the actual distribution of the population means our sample no longer looks like the general population of the country. We correct for that through a process known as weighting, where we first take into account the probabilities of each respondent being included in our survey; then account for those unwilling to participate; and then adjust these figures to the known data of the 2011 Indian census (the most recent one). This entire process ensures that we have enough respondents from each group we wish to study while also making sure the sample represents all of India. How did you sample respondents for this study? We took many steps before selecting individual respondents to take part in the survey. First, we sampled districts from most states and union territories across India. (See below for the exceptions.) Within these districts, we then sampled subdistricts. Districts and subdistricts were chosen probabilistically — meaning all of them had a chance of being selected — but we gave certain areas higher chances of selection if they had more hard-to-reach religious groups based on data from the 2011 Indian census. This allowed us to identify areas more likely to be home to religious minority groups. 
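To make the two-village arithmetic above concrete, here is a minimal sketch in Python. It is an illustration of the idea only, not the Center's actual sampling or weighting code:

```python
# Two villages of 100 people each; the composite measure of size selects
# in proportion to the minority counts (10 and 20) rather than 50/50.
villages = {
    "village_1": {"total": 100, "minority": 10},
    "village_2": {"total": 100, "minority": 20},
}

total_minority = sum(v["minority"] for v in villages.values())

for name, v in villages.items():
    p_select = v["minority"] / total_minority   # selection probability
    base_weight = 1 / p_select                  # inverse-probability weight
    print(f"{name}: P(select) = {p_select:.0%}, base weight = {base_weight:.1f}")

# village_1: P(select) = 33%, base weight = 3.0
# village_2: P(select) = 67%, base weight = 1.5
```

The weighting step described above then adjusts such inverse-probability base weights for nonresponse and calibrates them to the 2011 census figures, scaling the oversampled groups back to their true population shares.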
Finally, within subdistricts, we sampled villages and census enumeration blocks (CEBs) with a probability of selection based on their total populations. All this sampling was done from our offices in the U.S., but the next steps required highly trained field teams that were on the ground in India. Field teams traveled to the roughly 3,000 selected villages and CEBs across 26 states and three union territories in India to conduct interviews. Once at their assigned village or block, the team used a systematic method known in survey parlance as a “random walk” to select the houses to contact. Although it is called a random walk, the procedure is actually quite detailed. The process had the teams start at a particular location within the village or block, such as at a school or the village leader’s residence. Then they walked along the road and approached every nth house or dwelling, according to a predetermined skip interval. This interval was based on the total number of houses in the village or block, as provided by the village leader. This process resulted in the teams covering the entire village or block, while randomly selecting 12 households where they would attempt an interview. Then, with cooperation from a household member, the interviewer created a list of all the adults living in the household, a process known as enumeration. Finally, one eligible household member was randomly selected to complete the survey. (If you want to learn more about the study’s sample design, all details are listed in the report’s appendix.) How did you prepare for data collection? To implement data collection projects of this magnitude, we typically work with other firms to help draw the samples and manage the fieldwork. For this study, we worked with RTI International, an international research organization with ample experience in India. RTI also enlisted the support of two local firms with the interviewing capacity necessary to carry out a project of this scope. Because of the complexities in this particular study, we did a test of the questionnaire and all procedures to be implemented in the study — commonly known as a pilot test — months before we planned to conduct the main fieldwork. Our pilot test consisted of 1,948 interviews in four states and six languages, which helped test all survey protocols, interviewer training processes and field logistics. The pilot test also ensured that all the parties involved — from the interviewers in India to the researchers in Washington, D.C. — were working in lockstep and ready for any challenges that might arise. After the successful pilot test, we needed to transition from a test of about 2,000 interviews to 30,000 interviews across all of India. This leap was not possible without standardized protocols and well-trained staff on the ground. Accordingly, fieldwork for the main study began with a staggered rollout of trainings across India to prepare nearly 200 teams of interviewers and supervisors. These intensive five-day trainings included classroom learning, role play exercises and at least one survey “practice day” in a nearby neighborhood so interviewers could acclimate themselves to conducting the survey under real-world conditions. Once fully trained, these teams, which consisted of one supervisor and two interviewers, began visiting assigned areas to conduct interviews. Researchers from Pew Research Center visited India throughout the project, including during the pilot project and the main field training. 
They were able to monitor the work, make any necessary adjustments immediately and learn firsthand about the challenges field staff faced. What were some of the challenges you faced and how did you overcome them? We always face challenges when conducting research. One way this study differed from our past surveys in India was that it was about sensitive topics such as religious identity and attitudes toward other religious groups. To ensure we were conducting the research ethically, and in keeping with local research norms, we sought and received approval from an institutional research review board (IRB) within India. To alleviate participants’ concerns about being approached for an interview, field supervisors first gained cooperation from local leaders or village heads before their interviewers began fielding questions in the area. Local leaders could then reassure community members that the team was conducting legitimate research and could be trusted. The interviewers also provided written and verbal details to the respondents on consenting to the survey, including their ability to refuse any question asked and to end the interview at any time, as their participation was completely voluntary. Safety concerns in some regions of the country, particularly in the Kashmir Valley, were another challenge during the fieldwork period. Prior to our fieldwork, the Indian government implemented lockdowns and curfews, as well as communications blackouts in the region. To keep our field teams safe, interviews that were supposed to be conducted in Kashmir Valley were reallocated to other Muslim-majority areas, including locations in Jammu, Haryana and West Bengal. Further, while the survey was being conducted, demonstrations broke out in several regions against the country’s new citizenship law that would expedite citizenship for followers of certain religions, excluding, most notably, Muslims. Our fieldwork in West Bengal and Delhi was delayed because of associated security concerns and local curfews. Another unexpected challenge we faced in all of the Center’s 2020 face-to-face research was the COVID-19 pandemic. Out of safety concerns for our interviewing teams and survey respondents, we stopped fieldwork just shy of our overall project goal, leaving 1.3% of the sampled villages and blocks unsurveyed. While this may seem like a very small amount, it meant we never conducted fieldwork in Sikkim and Manipur states, and therefore our national results excluded these areas. Even with these handful of challenges in our study, we were able to complete an in-depth representative survey of the nation, its regions and its six major religious groups. Clark Letterman is a senior survey manager focusing on global attitudes research at Pew Research Center. Martha McRoy is a research methodologist focusing on international survey research methods at Pew Research Center.
https://medium.com/pew-research-center-decoded/q-a-how-pew-research-center-surveyed-nearly-30-000-people-in-india-7c778f6d650e
['Clark Letterman']
2021-07-01 18:00:16.202000+00:00
['Religion', 'Surveys', 'India', 'Hinduism', 'Islam']
Why your QA team should be just as friendly with UX as they are with developers
Gareth Thomas | Senior QA at Purplebricks It's no longer the nineties. Your QA team and developers are (hopefully) not embroiled in a war of attrition against each other and are instead peacefully working together towards a common goal of producing a great, working product. The "shift left" attitude towards testing and quality assurance that is en vogue has radically changed the day-to-day work life of QA teams. It has led engineers to move from simply raising and closing bugs to testing APIs without a front end, becoming experts in Selenium, Cypress and Cucumber, reviewing code and, in some extreme cases, becoming nearly indistinguishable from their former nemeses. But at most organisations, this left-shift has been constrained purely to the "technical" field. There are definitely positives to this, as many businesses, including Purplebricks, have already discovered. However, it fails to utilise the "soft" skills that any good QA engineer should have: a hands-on understanding of the product, a sense of where errors commonly crop up, and an empathetic view of users. Design and QA working together. In our mobile app team, we've tried to add that skillset to the design process throughout the product lifecycle, something that's forced us not only to shift left, but also to shift right. The shift left Just like a "technical" left shift, this is all about getting involved earlier in the creative process. Whether it's a formal meeting or an ad-hoc Slack call, and whether you call it a "test kickoff" or "three amigos", the initiation of any design work should start with your QA engineers getting together with your UX and UI designers and copywriters and making sure everybody is on the same page. The design team should use this opportunity to clarify any queries they have around the functionality and acceptance criteria, and your QA team should take the chance to impart their experience of the product: Have your design team considered any edge cases? Are they aware of any platform-specific issues? What has frustrated the QA engineer while using a previous iteration of the product? What that experience entails will depend on your team structure, your product and the maturity of your engineers, but I'm sure you get the point. Your QA team should be product experts, as there will almost certainly be intricacies that they are aware of that your design team may not be. The Design Box After your team has been set on the right path and your design team has produced a "finished" product, it's time for a design box. This is a close cousin of the more commonly known dev box, where QA engineers and developers test together on local code, meaning defects can be found earlier and fixed sooner. A design box very much runs on the same principle. Using a prototype or even screenshots, your designers and QAs will run through the acceptance criteria or test cases you've defined and ensure they're met by the design before developers start using them to code. The Manual Part While there's very much an emphasis on reducing manual test effort, both at Purplebricks and across the digital world, there is still value in it when focused on the right things. Hopefully, by the time your QAs get their hands on something to test on the intended medium, the majority of defects, technical and creative, have been found and ironed out, and the character of testing can now change tack. Testing of our mobile app is naturally UI heavy, so we've always taken a "UX first" stance, but a statement that vague can lead to confusion if not defined.
Some questions a QA can tackle that add real value during manual testing from a UX perspective are: Is it accessible? Are there any colours that don't contrast enough? (A check like this can even be scripted; see the sketch at the end of this piece.) Are the issues specific to a certain device? Are there any flows that confused you, an expert user? Are there any repetitive tasks that quickly become frustrating, or are slow to execute, especially when considered as part of the whole product? How does the design hold up across different devices, screen sizes, and environments of use? Depending on the context and your own quality gates, it's not necessary to tackle all of these issues prior to release, but they can inform the next iteration of your product. The Shift Right While we may be aiming to shift our testing left, a good QA will still take some ownership of quality after the product's release, and tasks like collating user feedback, looking for troublesome platforms and interpreting analytics and usage statistics are very closely linked to the sort of work a design team will perform in a discovery phase. Not only will this production insight help to increase a QA engineer's empathy with the end user, but it's something that's very useful to feed back into your team's next kickoff. Additionally, depending on just how in-depth your analytics are, understanding exactly how your users interact with your product can help your QA team refine how they test, increasing the value of that manual test phase even further. How can we forge this friendship? This will very much depend on your organisation, your ways of working, your product, and your team structure, but here are some things that work for us. Introduce your QA team to your design team! It might seem obvious, but with many firms still seeing design as a "beginning" and test as an "end", there might not be much natural crossover between these two worlds. Give your QA team some time to learn the basics of accessibility. Create a shared resource that they can refer to. Encourage manual testing on a range of devices. Whether these are physical or virtual, it's one step closer to testing the same way your users interact with your product. Let your QA team see the analytics. Often considered exclusive to the realm of data analysts or developers, the democratisation of this invaluable information is essential for a cross-functional team. Encourage your QA team to understand Jakob Nielsen's 10 Usability Heuristics for User Interface Design. Sure, your design team are intimately aware of them, but they can be eye-opening for a QA engineer. While they may be considered worlds apart, quality assurance and design are much more closely related than you'd think. In classic terms, quality assurance is "being sure what we've built is right". The more we shift left (and right), it moves ever closer to design's "being sure we're building the right thing" in the first place.
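A small, hedged illustration of how the colour-contrast question in the checklist above could be scripted (the colour values below are hypothetical examples; the formula is the standard WCAG 2.1 relative-luminance calculation):

```python
def relative_luminance(hex_colour):
    """Relative luminance of an sRGB colour, per WCAG 2.1."""
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearise(channel):
        # Undo sRGB gamma encoding before weighting the channels.
        return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)


def contrast_ratio(foreground, background):
    """Contrast ratio between two colours; WCAG AA requires 4.5:1 for body text."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


ratio = contrast_ratio("#767676", "#ffffff")
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} (WCAG AA, normal text)")
```

Running checks like this across a design's palette turns a subjective "does this contrast enough?" into a repeatable quality gate.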
https://medium.com/purplebricks-digital/why-your-qa-team-should-be-just-as-friendly-with-ux-as-they-are-with-developers-3a3c6670182a
['Gareth Thomas']
2020-11-16 16:28:03.556000+00:00
['User Experience', 'Agile Methodology', 'Design', 'Testing', 'Quality Assurance']
Future Tech Tuesdays: HMIs and Touchless Tech
A Human-Machine Interface (HMI) is a user interface or dashboard that connects a person to a machine, system, or device. How often do you interact with technology in your life? If you answered ‘every day’ then you should take a step back and think about how you’d like to interact with it and how you’d like to see technology evolve in the next year. How about the next five years? Ten? For me, technology isn’t evolving quickly enough. Sounds crazy, I know, but I envision a world where technology like touchless tech and kitchens with built-in holographic interfaces — to name a few — are the norm. I’m not sure how to get from point A to point B, but I definitely would like to try to pave the way… or at least start asking the right questions. A Touchless User Interface describes an electronic device which can be controlled by gesture or sound and does not require physical contact in order to be operated. Back in 2016, Google I/O released information on Soli. (Click here to watch a 4-minute video on how Soli works.) Taken directly from Soli’s project website, “The Soli project uses radar to enable new types of intuitive interactions.” To put it in layman’s terms, it’s a small chip that allows a user to interact with a product without touching it. Now this might sound like something straight out of a sci-fi movie, but I’m extremely excited about the potential possibilities moving forward with touchless interfaces. Google is expected to release the Pixel 4 with a Soli radar chip built into the top bar of the phone. It will enable face unlock, similar to iPhone’s Face ID, but will take it a step further by allowing the user to skip songs, snooze alarms and silence phone calls with only a gesture. Google applied for a waiver to allow the Soli sensors to operate at higher power levels than currently allowed. The Federal Communications Commission (FCC) granted this waiver in January of 2019. But it goes beyond cell phones and skipping songs. The Human-Machine Interface (sometimes called the Man-Machine Interface, or MMI) is becoming increasingly popular across every technical genre and truly has a place in everything we interact with as humans. In fact, you might even have it in your kitchen or your car, and even Alexa falls into the touchless device category (but voice-controlled devices are, for today, another topic altogether.) Without going too in-depth, here are a few more fields where touchless technology is improving lives today. Vehicles With the passing of Hands-Free Laws all over the United States, we’re most likely going to see an increase of Human-Machine Interfaces in vehicles. Volkswagen first debuted a gesture-controlled concept in 2015 and BMW released a touchless dashboard concept in 2016 before implementing it into their more recent models. In these two examples, this technology makes it possible to control displays and functions using hand movements without having to touch anything. For example, a swipe gesture toward the windshield would cause the sunroof to close, while the same movement in the opposite direction opens it. Medicine Microsoft released a white paper on Touchless Multifactor Authentication for the medical field stating that “…touchless MFA solutions can save time and increase convenience…while thwarting common types of breaches, including cybercrime hacking…” It even showed a drastic improvement in hygiene, since clinicians didn’t have to break the sterile field during surgery and were able to access previously inaccessible IT resources using touchless technology such as scans.
Video Games Touchless technology is a great way to interact with video games. Things like the Xbox Kinect may not even seem like anything special anymore, but implementing ultrasound technology to control video games is a huge step forward in Human-Machine Interface systems. The release of the movie “Ready Player One” in 2018 displayed what video game development could easily look like not too far down the road. Immersive, interactive video games have also shown major physical, social, and cognitive benefits for wheelchair users and those with limited motor skills. What’s next? While I can’t predict the future, I would like to see more interactive and immersive things popping up soon. Whether it’s holographic watches, desks with built-in sensors for gesture-controlling web pages and navigating computers, or something truly out of Minority Report, I think that setting distinct goals for shaping how we view and interact with technology is something that companies really need to consider and revolutionize. And who knows, maybe you’re the next revolutionary genius.
https://medium.com/@apugia/future-tech-tuesdays-hmis-and-touchless-tech-2a20d48a1a79
['Alexandria Pugia']
2019-09-10 14:01:01.171000+00:00
['Tech', 'Future', 'Medicine', 'Cars', 'Revolution']
Can LinkedIn get you a job?
A big YES!! LinkedIn is like a gold mine for those searching for job opportunities. According to the Jobvite survey, social media is used by 92% of professionals in their work today, and LinkedIn is the favourite place of 87% of recruiters for searching for candidates. It provides an overview of the candidate’s experience, education and other activities. How to search for jobs on LinkedIn? There are many ways to search for jobs on LinkedIn. Personally, I have got the best results from the methods below — Use Hashtags (#) You can search with relevant hashtags in the search bar. Apply the “content” filter, and you will get all posts with your hashtag. Example — If you want to search for an Android Developer role, you can use #AndroidDeveloper, #recruitment, #job, #developer, etc. in your search. This will surface different posts and the people who have written them. You can apply, and if required, have a conversation with the recruiter as well. Connect with recruiters You have got to connect with recruiters/hiring managers of your desired company. You can ask them if there is any vacancy/job opening they can consider you for. This is a great way to learn about both the company and its job openings. Job ID and Referral You can visit the company page to find job openings, and if you find one, you can ask for a referral on LinkedIn. Remember, whenever you ask for a referral, include the job ID for which you need the referral. Connect with college alumni Your college alumni can help you in a lot of ways. If they are in a good company, you can ask for a referral or for other connections who can help you get one. Your 1st-degree network is always going to work in one way or the other, so keep networking! How to optimize your LinkedIn profile? You can hunt for a job on LinkedIn only if you have an appealing and expressive profile. Here are a few tips to optimize the profile that will definitely help you get more job opportunities. Profile picture The profile picture is one of the most important things on LinkedIn. It increases your chances of profile views by 11X. The picture needs to be professional. Gresham says LinkedIn favours complete profiles in searches much more than incomplete profiles. Be active and make quality connections You need to be active on LinkedIn. Surf the feed at least once a day to know what is going on. Also, quality connections are very important; for that, you have to get to know people and then send them a connection request. Experiences and achievements It’s important to add your previous experiences and whatever achievements you have had. This gives recruiters an insight into your work and increases your chances of being selected. Resume If you are applying to any job, always edit your resume according to the job description. This is the biggest mistake that people make. Recruiters spend 10–20 seconds on each resume. If you can highlight yourself within that time through your resume, only then will you be selected! Conclusion LinkedIn can be used by job seekers to get a job, recruiters to find employees and businesses to generate leads. It’s a booming social network with 700 million users. Recruiters, CEOs, hiring managers and company pages are all present on the same platform. If used properly, it can help anyone get their desired results!
https://medium.com/@sandhyagpt45/can-linkedin-get-you-a-job-9ca931166440
['Sandhya Gupta']
2020-12-08 06:14:41.463000+00:00
['Networking', 'Recruiting', 'Jobs', 'Placement', 'LinkedIn']
Facebook Inverse Cooking Algorithm
Facebook Inverse Cooking Algorithm Predicting a full recipe from an image better than humans Figure 1 Predicted ingredients after running the Inverse Cooking algorithm on a meal of sushi [3] This recipe generation algorithm was developed by Facebook AI Research, and it is able to predict ingredients, cooking instructions and a title for a recipe directly from an image (Figure 2) [1]. Figure 2 Example of a generated recipe by the Inverse Cooking Algorithm [1] In the past, algorithms used simple recipe retrieval systems based on image similarity in an embedding space. This approach is highly dependent on the quality of the learned embedding, dataset size and variability. Therefore, these approaches fail when there is no match between the input image and the static dataset [1]. Instead of retrieving a recipe directly from an image, the inverse cooking algorithm proposes a pipeline with an intermediate step where the set of ingredients is first obtained. This allows the instructions to be generated taking into account not only the image but also the ingredients (Figure 3) [1]. Figure 3 Inverse Cooking recipe generation model with the multiple encoders and decoders, generating the cooking instructions [1] One of the major achievements of this method was to present higher accuracy than a baseline recipe retrieval system [2] and the average human [1] when predicting the ingredients from an image. Figure 4 Left: IoU and F1 scores for ingredients obtained with the retrieval approach [2], Facebook’s method (Ours) and humans. Right: Recipe success rate according to human judgment [1] The Inverse Cooking algorithm was included in a food recommendation system app developed and published here [3]. Based on the predicted ingredients, the web application provides the user with several suggestions, such as different ingredient combinations (Figure 1). References [1] A. Salvador, M. Drozdzal, X. Giro-i-Nieto and A. Romero, “Inverse Cooking: Recipe Generation from Food Images,” Computer Vision and Pattern Recognition, 2018. [2] A. Salvador, N. Hynes, Y. Aytar, J. Marin, F. Ofli, I. Weber and A. Torralba, “Learning cross-modal embeddings for cooking recipes and food images,” Computer Vision and Pattern Recognition, 2017. [3] Towards Data Science, “Building a Food Recommendation System,” 2020. [Online]. Available: https://towardsdatascience.com/building-a-food-recommendation-system-90788f78691a. [Accessed 18 May 2020].
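To make the two-stage pipeline described above concrete, here is a minimal PyTorch-style sketch of the idea: pooled image features feed a multi-label ingredient predictor, and an instruction decoder is conditioned on both the image and the predicted ingredient set. All dimensions and vocabulary sizes are illustrative, and the GRU decoder is a deliberate simplification of the transformer decoders used in the paper [1].

import torch
import torch.nn as nn

class InverseCookingSketch(nn.Module):
    def __init__(self, img_dim=512, n_ingredients=1000, vocab_size=5000, hid=256):
        super().__init__()
        # Stage 1: multi-label ingredient prediction from pooled image features
        self.ingredient_head = nn.Linear(img_dim, n_ingredients)
        # Stage 2: instruction decoder conditioned on image + ingredients
        self.ingredient_emb = nn.Linear(n_ingredients, hid)
        self.img_proj = nn.Linear(img_dim, hid)
        self.token_emb = nn.Embedding(vocab_size, hid)
        self.decoder = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, img_feats, tokens):
        # img_feats: (B, img_dim) pooled CNN features; tokens: (B, T) word ids
        ing_logits = self.ingredient_head(img_feats)
        ing_probs = torch.sigmoid(ing_logits)  # soft ingredient set
        cond = self.img_proj(img_feats) + self.ingredient_emb(ing_probs)
        h0 = cond.unsqueeze(0)  # initial hidden state of the decoder
        dec_out, _ = self.decoder(self.token_emb(tokens), h0)
        return ing_logits, self.out(dec_out)  # ingredient and token logits

model = InverseCookingSketch()
ing_logits, tok_logits = model(torch.randn(2, 512), torch.randint(0, 5000, (2, 10)))
print(ing_logits.shape, tok_logits.shape)  # (2, 1000) and (2, 10, 5000)

The real model additionally samples a discrete ingredient set and attends over spatial image features; the sketch only captures the image → ingredients → instructions factorisation that distinguishes this approach from plain retrieval.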
https://towardsdatascience.com/facebook-inverse-cooking-algorithm-88aa631e69c7
['Luís Rita']
2020-05-20 15:45:00.665000+00:00
['Health', 'Inverse Cooking Algorithm', 'Deep Learning', 'Food Recommendation', 'Recursive Neural Networks']
Alchemy of the Face
I recently finished Gabriel García Márquez’s landmark novel 100 Years of Solitude, an aptly named book for the current year, and one that contains multiple generations of sons and fathers sitting alone in the same workshop, practicing the magical art of alchemy. These men transmute lead into gold and decipher ancient hieroglyphs that foretell their family's entire lineage over the course of a century. The images of boiling flasks and magic tools of another world struck me the hardest in the mornings when I would sift through my own collection of potions and elixirs in the form of various skincare products I’ve amassed over the years. These products have transformed my face from a wasteland of acne scars into a smooth and fruitful oasis free of blemishes. This is my practice of alchemy. My personal bout of magic to start each day. Today I practice skincare, or as I call it, an alchemy of the face. While my morning and nightly routine of washing myself has become habitual over the years, I feel compelled to experiment and discover new products with a childlike sense of glee due to a continuing subscription I have to GQ’s Best Stuff box for MEN. As someone who does not identify as a man, it can feel somewhat off-kilter to receive a box promising an improved “men’s experience” for “shaving before that big meeting” or “taking the Jeep for a cruise on rugged terrain,” which for some reason requires a soft-knit throw blanket free of charge. This past month I even received a sophisticated watch buried among oak-scented deodorant and protein powder, which I quickly regifted to my father while playing the role of “The Good Son.” Unlike acts like growing out facial hair, receiving elite men’s grooming and lifestyle products feels more akin to the young mage sneaking out of a temple with their master's scrolls in the dead of night. An exciting breach of a world that holds itself above all others: The Boys Club. Well, I was in The Boys Club, and they could use charcoal deodorant sticks with added magnesium for sensitivity. Those vouchers for three months of Headspace will not be lost on me, but I hope the same can be said for all of the men who were swayed by the prospect that they could be more in tune with their emotions and mental well-being. While I have escaped the ranks of masculinity from which I was designated from a young age, I have snuck back in to reap the physical benefits, only to slip on a dress after applying some sea-mist aftershave to my newly smooth face. This is all to say that I’m not trying to come off as a cynic toward GQ or its Best Stuff box, only that the segregation of skincare into gendered camps is silly to me. Saying that a product is meant for a “masculine” audience makes about as much sense as claiming that zaatar is a “feminine” flavor when we all contain mouths that need to be fed and assholes that need to be washed. By dipping into both camps of skincare in order to find the best of the best products that work for my ever-changing body, I have become an alchemist’s creation of my own design. That bar of oat and honey soap wipes away the grime of one day and smooths my skin for the next, while that minimalistic tube of moisturizer provides a layer of gruff protection from the elements when I go for a night on the town (or at least, for my dreams of going out, in another time). Perhaps the real magic of the GQ box was a new foundation for the relationship between my father and me, two AMABs from two different eras, each completely foreign to the other.
Yet we’ve both inhabited the same workshop as a means of escape from the expectations of our daily lives, just like the characters in 100 Years of Solitude. While he built it for shining shoes as a means of relieving stress from his work and home lives, I repurposed it after a decade of inactivity into an art studio where I could express myself in an environment free of judgment. Like Aureliano in the novel, I am left with a handful of golden fishes, but mine are the renewed skin cells on my face and the renewed fibers of a relationship that past generations would have foretold as doomed.
https://medium.com/@horsegirls/alchemy-of-the-face-7aefe5fb6111
['P Henry']
2020-12-14 19:22:27.964000+00:00
['Health', 'Skincare', 'Masculinity', 'Gender Identity']
netDocShare — empowering virtual team conversations
netDocShare is an incredibly robust solution that integrates the best of NetDocuments and Microsoft Teams and allows its users to easily access relevant NetDocuments content within a Teams conversation thread. It gives users the power to collaborate with real-time sharing of NetDocuments within conversations. netDocShare offers its clients a secure infrastructure for document sharing with internal as well as external users. The launch of the netDocShare Teams app has enhanced the power of instant collaboration and communication by combining the flexibility of Microsoft Teams with the advanced document management capabilities of NetDocuments. During the COVID-19 pandemic, netDocShare is empowering organizations to boost their workplace productivity with intuitive conversational interfaces. The netDocShare Teams app offers remote workers conversational AI capabilities (a chat feature) to easily reference NetDocuments content within Microsoft Teams conversations. netDocShare Teams app empowering users to perform key functions netDocShare Teams app — delivering an enhanced digital experience to remote users Within Teams, users can easily add the netDocShare Teams app as a new channel tab or simply pin it as a personal app. The netDocShare personal app, pinned to the Teams navigation bar, offers users chat functionality to ask any queries related to documents stored in the NetDocuments DMS. A search function is one of the most used features in a DMS, and employees spend much of their time looking for specific documents. The netDocShare Teams app allows users to search for relevant documents and easily access the information they are seeking. With the netDocShare Teams app, remote workers can easily interact with and access documents during Microsoft Teams conversations. The netDocShare Teams app empowers virtual team conversations, enabling general counsels and law firms to maximize knowledge access. This enables organizations to drive the productivity of legal teams and make smarter, more efficient business decisions.
https://medium.com/@netdocshare/netdocshare-empowering-virtual-team-conversations-94d797a397a9
[]
2021-06-17 14:13:03.505000+00:00
['Legaltech', 'Tech', 'App', 'Legal', 'Microsoft Teams']
Basic Algorithms — Finding the Closest Pair
Basic Algorithms — Finding the Closest Pair Determining the closest pair of two points on a two-dimensional plane with a split-conquer algorithm Writing cost-efficient algorithms is one of the keys to succeeding as a data scientist, and in the previous article we used the split-conquer method in counting inversions in an array, which is far less costly than the brute-force method. This time, we will see how another split-conquer algorithm finds the closest pair of points from a set of points on a two-dimensional plane. Photo by Rémy Penet on Unsplash Looking up at the sky at Times Square or downtown Tokyo, we would find the closest pair of stars quite easily, because only a few stars are visible there. If you are in the middle of nowhere, the infinite number of stars in the dark night sky would make it impossible to determine the pair. It is no different for computers; when they determine the closest pair of points on a plane, the more points the dataset has, the longer it takes for an algorithm to find the pair with the least distance. The cost increase is more than linear in the number of points, and we try to write an algorithm which keeps the cost growth as low as possible. The split-conquer method works well in this challenge, in a similar way to the algorithm that counts inversions. Brute-Force Method — Finding the Closest Pair The brute-force way is, like the one that counts inversions in an array, to calculate the distance of every pair of points in the universe. For n points, we would need to measure n(n-1)/2 distances, so the cost is quadratic in n, or Θ(n²). With two loops, the code for this algorithm can be written as follows.

import numpy as np

def find_closest_brute_force(array):
    # Start with the first two points as the provisional closest pair
    result = {}
    result["p1"] = array[0]
    result["p2"] = array[1]
    result["distance"] = np.sqrt((array[0][0] - array[1][0])**2
                                 + (array[0][1] - array[1][1])**2)
    for i in range(len(array) - 1):
        for j in range(i + 1, len(array)):
            distance = np.sqrt((array[i][0] - array[j][0])**2
                               + (array[i][1] - array[j][1])**2)
            if distance < result["distance"]:
                result["p1"] = array[i]
                result["p2"] = array[j]
                result["distance"] = distance
    return result

Now we think of a better way whose cost would be O(nlgn). It is possible with presorting and the split-conquer method. To presort the array on one of the coordinates, we also use the split-conquer method, in the form of the merge-sort algorithm. Merge Sort We sort the arrays with an algorithm called merge-sort, which is faster than brute-force sorting algorithms. The merge-sort algorithm splits the array, sorts the subarrays (as a recursive step), compares the smallest remaining numbers in the two subarrays and picks the smaller, and repeats this until both subarrays are exhausted. Each of the recursive steps costs just Θ(n), so the total cost of the algorithm stays at Θ(nlgn).
def merge_sort(array, coordinate=0):
    length = len(array)
    if length <= 1:
        return array
    if length == 2:
        if array[0][coordinate] > array[1][coordinate]:
            return np.array([array[1], array[0]])
        else:
            return array
    else:
        array_l = array[:length // 2]
        array_r = array[length // 2:]
        array_l_sorted = merge_sort(array_l, coordinate)
        array_r_sorted = merge_sort(array_r, coordinate)
        l_length = len(array_l)
        r_length = len(array_r)
        l = 0
        r = 0
        sorted_list = []
        for i in range(length):
            if r == r_length:
                sorted_list.append(array_l_sorted[l])
                l += 1
            elif l == l_length:
                sorted_list.append(array_r_sorted[r])
                r += 1
            elif array_l_sorted[l][coordinate] > array_r_sorted[r][coordinate]:
                sorted_list.append(array_r_sorted[r])
                r += 1
            else:  # ties go to the left subarray, keeping the sort stable
                sorted_list.append(array_l_sorted[l])
                l += 1
        return np.array(sorted_list)

Split-Conquer Method — Finding the Closest Pair As stated above, we aim to write an algorithm which finds the closest pair of points at a cost of O(nlgn). A split-conquer algorithm whose recursive steps each cost O(n) would suffice. The algorithm divides the array into subarrays, and the key is to check whether the closest pair lies across the two subarrays. The split-conquer algorithm sorts the array by X coordinate, divides the sorted array into two, applies the algorithm recursively to the subarrays, and checks whether or not there exists a pair with a shorter distance than those found within the subarrays.

def find_closest_nest(array):
    X = merge_sort(array, 0)  # presort by X coordinate
    length = len(X)
    if length < 4:
        return find_closest_brute_force(array)
    else:
        array_l = X[:length // 2]
        array_r = X[length // 2:]
        dict_l = find_closest_nest(array_l)
        dict_r = find_closest_nest(array_r)
        if dict_l["distance"] > dict_r["distance"]:
            dict_both = dict_r
        else:
            dict_both = dict_l
        # Collect the points lying within distance d of the dividing line
        mid_x = X[length // 2 - 1][0]
        d = dict_both["distance"]
        Y_list = []
        for i in range(length):
            if mid_x - d < array[i][0] < mid_x + d:
                Y_list.append(array[i])
        Y = merge_sort(np.array(Y_list), 1)  # sort the strip by Y coordinate
        if len(Y) == 1:
            dict_final = dict_both
        elif len(Y) < 8:
            dict_y = find_closest_brute_force(Y)
            if dict_both["distance"] > dict_y["distance"]:
                dict_final = dict_y
            else:
                dict_final = dict_both
        else:
            # Slide a window of eight consecutive points along the strip,
            # keeping the best pair found so far
            dict_final = dict_both
            for i in range(len(Y) - 7):
                dict_y = find_closest_brute_force(Y[i:i + 8])
                if dict_final["distance"] > dict_y["distance"]:
                    dict_final = dict_y
        return dict_final

The last step, looking at the pairs across subarrays, needs some tricks to keep the cost at a linear level (i.e. O(n)). First, we will make a subset of the input, which consists of points within distance d of the midpoint in terms of the X coordinate; d is the shortest distance found within the subarrays. If the closest pair lies across the subarrays, its two points must be within distance d of the line dividing the array into subarrays. In the example shown on the right-hand side, the closest pair within subarrays is determined in the right subarray (note that the point on the dashed line belongs to the left subarray) and its distance is d. If the closest pair exists across the left and right subarrays, the points should be within the range of d from the dashed line dividing the array into the two subarrays. Therefore, we can look at the subset within the shaded range. Second, we sort the subset we obtained in the previous step by Y coordinate. We show that we have to look at sets of only eight consecutive points each in the sorted subset.
As shown in the figure, the maximum number of points that can exist in a 2d×d rectangle across the right and left subarrays is eight (points on the dashed line duplicate; two belong to the left subarray and another two are in the right). Third, we look at each set of eight consecutive points in the subset sorted on the Y coordinate. If we find a pair whose distance is less than d, it means the closest pair exists across the subarrays. This step costs O(n), and the total cost of this recursive algorithm stays at O(nlgn). Performance Check We reasoned that the split-conquer algorithm we developed performs faster than the brute-force one. Let’s compare the actual performance of the two algorithms. In the chart below, the cost (execution time) of the two algorithms is shown for different array sizes. The two lines clearly indicate that the split-conquer method has an advantage as the sample size increases. This result proves the importance of coding efficiently, as discussed in the previous article too.
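To reproduce a comparison like the one described above, a minimal timing sketch could look like the following; the array sizes are arbitrary, and absolute times will depend on your machine.

import time
import numpy as np

for n in [200, 500, 1000, 2000]:
    points = np.random.rand(n, 2)  # n random points on the unit square
    start = time.time()
    find_closest_brute_force(points)
    brute_time = time.time() - start
    start = time.time()
    find_closest_nest(points)
    split_time = time.time() - start
    print(f"n={n}: brute force {brute_time:.3f}s, split-conquer {split_time:.3f}s")

The quadratic growth of the brute-force timings becomes visible well before n reaches a few thousand points, while the split-conquer timings grow much more slowly.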
https://towardsdatascience.com/basic-algorithms-finding-the-closest-pair-5fbef41e9d55
['Keita Miyaki']
2020-02-10 14:07:00.553000+00:00
['Python', 'Divide And Conquer', 'Data Science', 'Algorithms']
Scaling Up Smart: 4 key tips on successfully using cloud-native technology to scale your infrastructure
In this post, I’d like to share some high-level takeaways for engineering managers and backend teams to help them successfully scale their operations while avoiding some of the most common pitfalls and short-sighted decisions. This article follows up on a first post published by Jordan Pittier, lead backend engineer at Streamroot, and a presentation of our journey at HighLoad Moscow this past November. Both shared our experience and the challenges we faced throughout the process of moving from a VM-based to a container-based architecture and migrating our infrastructure to Kubernetes running on Google Cloud. Introduction and Context Let me start by giving you a little bit of context about Streamroot and why we take the time to tune our Kubernetes Engine architecture not only to scale, but also to make our architecture more resilient. Streamroot is a technology provider that serves major content owners — media groups, television networks, and video platforms. Our peer-to-peer video delivery solution offers broadcasters improved quality and lower costs, and works in tandem with their existing CDN infrastructure. One of our (and our customers’) biggest challenges last year was scaling to the record-breaking audiences of the FIFA World Cup. The 2018 World Cup proved to be the largest live event of all time, with Akamai registering 22 Tbps at peak and more than doubling the previous traffic record set by the Super Bowl (1). Streamroot delivered the World Cup for TF1, the largest private broadcaster in France, as well as national television networks in South America. To be able to serve our customers at this scale, we needed to scale our own Kubernetes clusters and be able to scale faster. We needed to: Handle massive traffic, with hundreds of thousands of requests per minute to our backend Scale to huge spikes in minutes, at the beginning of each World Cup game Ensure a 100% fail-proof, entirely resilient, robust backend able to withstand any failure. As a sports lover, I know that it is completely unacceptable to have even two minutes of downtime during live sports… And last but not least, we had to do all of this with a startup-scale team of only a few backend engineers… If you are interested in our scaling journey over the past few months and you wish to dig more into the technical details, you can have a look at our talk at the HighLoad++ conference in Moscow, given by Jordan Pittier and Nikolay Rodionov, and our slides here. Takeaway # 1: New things are not always good things: Tread lightly with cloud-native technology. Kubernetes has seen exponential growth since joining the Cloud Native Computing Foundation (CNCF), and there is increasing interest in this complex solution, which is a combination of open-source cloud-native technologies. Last December, the CNCF’s KubeCon+CloudNativeCon gathered more than 8,000 attendees from all over the world in Seattle. Figure: CNCF Annual Report 2018 [2] Kubernetes is one of the Cloud Native technology components. Many other components also exist, some part of the CNCF foundation (https://landscape.cncf.io/) and some outside the foundation, such as Istio. Cloud native technology is still young, and there are various new components springing up in a different field every month: storage, security, service discovery, package management, etc. Our advice: use these new components with caution, and keep it simple (stupid). These technologies are new, sometimes still rough around the edges, and are evolving at an incredible pace.
There is no point in trying to use all the latest shiny technologies, especially in production, unless they are motivated by a real need. Even if you have a large team of talented engineers, you need to take into consideration the cost (in resources and time) to maintain, operate and debug these new technologies, which can sometimes lack stability. As a manager and CNCF ambassador, I recommend following the CNCF classification (https://www.cncf.io/projects/) to select native components with a sufficient maturity level. The CNCF-defined criteria include the rate of adoption, longevity and whether the open-source project can be relied upon to build production tools. Today Streamroot harnesses only 3 projects (Kubernetes, Prometheus and Envoy), which are at these maturity levels and have “graduated” according to the CNCF foundation. A large number of the components out there are still at the incubating or sandbox stages. You can still use these, but keep in mind that you will face some risks: stability, bugs, limited community, learning curve, etc. Most importantly, even if there is widespread confidence that the native projects in the incubating or sandbox stage will be able to fill in the blanks and mature for production, it is also a question of not multiplying the complexity of your architecture. Make sure to ask yourself the following before adding any new components from the CNCF or outside the CNCF: Do I really need this component? Am I solving a real problem in my infrastructure? Are my engineers going to be able to handle it now and in the long run? Figure: CNCF Project Maturity Levels [2] Takeaway # 2: Control your costs When starting a significant project like moving your service from a VM-based to a container-based architecture supported by Kubernetes, your primary focus is likely not cost but having a successful migration. While the cost of your backend may not be an immediate or medium-term concern, it is something to take into account from day one. I highly recommend that you track your Kubernetes Engine scaling costs as early as possible, for the following reasons: To have a clear vision of your resource usage and the efficiency of your software. A backend team’s primary concern is delivery, and it is often difficult from a management perspective to relay the importance of efficient software and resource usage. To discover room for improvement in your architecture. Triangulating the information from monitoring and cost progression helped us identify improvements in our architecture. In particular, we were able to reduce our costs by 22% by simply adapting our instances better to our usage, and to better understand how the resources are used and consumed. To take advantage of volume-based cost savings. Most cloud providers, including Google Cloud and Amazon AWS, offer interesting discounts for committed instances. Don’t hesitate to use infrastructure cost accounting (and reduction) to your benefit. Once you reach a certain spend, even a 10% cost reduction can add a few thousand, or even tens of thousands, of dollars back into your budget, which can be used to send your teams to conferences, or even hire a new resource to build your product faster! To illustrate my third point, GCP provides committed use discounts, which offer significant savings on long-term committed instances. For example, if you commit a resource for an entire year, you get a flat 30% discount (for once, it’s actually nice to see the bill at the end of the month!).
Those discounts can run up to 57% (!) for a 3-year commitment. Of course, I suggest waiting at least 6 months before committing anything, in order to identify the minimum CPU & RAM resources you are using on average. Don’t fear! You do not need to be an expert in corporate finance or billing to track your costs effectively. For example, you can enable cost alerting per project by default if you would like to track your monthly usage, and then use the CSV export to feed into your favorite spreadsheet tool. Alternatively, on GCP, you can enable the BigQuery billing export option for a daily export of all of the details of your resource consumption. Then, take a few minutes to build a simple dashboard with an SQL export or Excel (do not forget to ask your engineers to set the resource labels correctly in order to identify the different lines). Takeaway # 3: Isolate and keep your production safe Many blogs and articles recommend that you use only one K8s cluster but use different namespaces for your different environments (Dev, Staging & Production, for instance). Namespaces are a very powerful feature that can help you organize your Kubernetes resources and increase the velocity of your teams. But that kind of setup doesn’t come easily: you need to make sure to have a polished CI/CD environment in place, to avoid any interference between your staging & prod environments, as well as “stupid” mistakes like deploying the wrong component in the wrong namespace. When reading this, you might think: “sure, but we have a super smart team, so we’ll be able to handle it.” Stop right there: everyone makes stupid mistakes, and the more stupid the mistake is, the more chance it has to happen… So unless you want to spend your most stressful days extinguishing fires in production because you pushed a staging build there, you MUST spend a few weeks building a top-notch CI/CD workflow if you go with the namespaces option. On our side, we chose another option to keep our environments separated: we decided to create fully autonomous clusters for our staging and production environments. This eliminates all risks of human error and security failure propagation, as both clusters are completely isolated. The downside of this approach is that it increases your fixed costs: you need more machines to keep both clusters up and running. But the safety and peace of mind it brings is more than worth it for us. Moreover, you can mitigate the cost overhead by using ephemeral (preemptible) instances on GCP, which are up to 80% less expensive than normal instances. Of course this comes with a catch: those instances can get shut down at any time if Google Cloud needs them for another customer. But as we use them for our staging environment only, losing one machine doesn’t really impact us, and we even learned to use it to our advantage. It is for us the perfect test to see how our architecture reacts to a random failure of one of our components: a sort of perfectly unpredictable red team trying to destroy the system, given to you for free by Google Cloud. Takeaway # 4: Unify and automate your workflow from the start When you start a new project, the last thing you want to think about is how you will be sharing your code with other developers, or how you will be pushing your builds between production and staging when you need to do an emergency rollback. It’s normal and very wise: there’s no need to over-optimise before you have actually built anything that you can show to the world.
But on the other hand, it’s a common mistake to let these questions sit latent for eternity because you don’t have time and need to release the next feature that is going to make your product finally cross the chasm and magically bring millions of users. My advice on this would be to take the time to create a simple and efficient workflow as early as possible. First, as soon as you start collaborating with other people, you should take a step back to create a unified and easily transferable development environment. 10 years ago, that wasn’t an easy feat: you needed to configure special VMs on everyone’s computers, or hack together conversions between your Mac & Windows users. It was a real nightmare and caused a lot of unnecessary and undebuggable issues. Today, thanks to containerization tools like Docker, it can be done in less than a few days, so why not implement it from the start? This will greatly simplify the lives of all your developers and make the onboarding of new employees easy and straightforward. It’s a very small investment for all the weeks of debugging and setting up that you will save. Second, as soon as you have production traffic, you should think about creating a simple but efficient QA/CI/CD workflow. No need to over-engineer too early, but we are very lucky to live in the golden age of automation & CI tools that give you the possibility to implement automated, first-class CI & CD without trouble. The list of CI tools compatible with the Kubernetes API is long: for example, GitLab introduced Kubernetes integration in version 10.1, and there is also Jenkins X. Most of those companies offer low-cost plans for small-scale projects, and free plans for open-source projects, so you really don’t have any excuse not to use them! It’s not rocket science, and it will save you time, energy, and numerous headaches, and make your developers’ lives a lot easier! Summing it all up Kubernetes and Cloud Native offer great technologies that bring simplicity and support for building a scalable and resilient solution in the cloud. It won’t be long until we take Kubernetes for granted as a ubiquitous part of cloud technology, as we do technologies like Linux and TCP/IP today. Thanks to our successful migration to these services, we were able to durably scale our infrastructure to World Cup audiences and beyond. During the biggest sporting event in history, we delivered more than 1.2 Tbps of traffic with zero minutes of downtime — and all of this with a team of only two backend engineers. We are now able to handle video streams with millions of viewers, with tens of thousands of new requests arriving per second at peak. Thanks to the best practices that I have discussed in this article, we were not only able to achieve our short-term delivery goals but also the long-term scalability of our infrastructure from an architecture, cost and resource perspective. To sum up our key takeaways: Use Kubernetes and cloud native carefully and pragmatically, when the tool corresponds to a real need. Think about the future today, whether it comes to cost, separating environments or putting automated workflows in place. The more effectively these challenges are incorporated into your project from day one, the less time and fewer resources you will waste adjusting later on when these considerations become mission critical.
As a startup, we are always striving to keep on improving our tech and workflows, and after all the lessons learned during our scaling journey, we are looking forward to tackling our next challenge: building a multi-cloud architecture! Check back soon to learn more, and if you are interested in taking part in these exciting challenges, check out streamroot.io/careers. [1] Akamai measured a peak volume of over 22 Tbps. That’s 3 times the peak they saw in the 2014 edition. [2] CNCF Annual Report 2018: https://www.cncf.io/wp-content/uploads/2019/02/CNCF_Annual_Report_2018.pdf
https://medium.com/lumen-engineering-blog/scaling-up-smart-4-key-tips-on-successfully-using-cloud-native-technology-to-scale-your-e4b521003f94
['Reda Benzair']
2019-03-08 10:34:40.126000+00:00
['Cloud Native', 'Backend Development', 'Docker', 'Kubernetes', 'Cncf']
Ask A Silly Question
Almost famous cartoonist who laughs at her own jokes and hopes you will, too.
https://marcialiss17.medium.com/ask-a-silly-question-1b4316cd3876
[]
2020-01-09 11:02:25.900000+00:00
['Humor', 'Celebrity', 'Comics', 'Funny', 'Retirement']
Tyler Perry doesn’t want a seat at the table
I’m still looking for the person who unequivocally loves Madea. Either people can’t stand the character or think it’s been overdone to the point that they can’t watch any Tyler Perry movie that she’s a part of. As I say that, I bet the overwhelming majority of you know exactly who Madea is. You can picture her curly grey hair and hear her annoying, high-pitched voice. You’re imagining her curse someone out or hit someone with her purse. That alone makes Perry one of the greatest writers of our generation. He’s created a character that’s become popular enough to be instantly recognized by name alone without attachment to any one particular movie. But that hasn’t been enough for Perry. I don’t think it ever was. He famously self-financed his first play. That was after living out of his car not knowing if his dreams would ever come true. So when the announcement was made that he is the first African American to independently own his own studio, as amazing an accomplishment as an entertainer can imagine, it couldn’t have been too surprising. Tyler Perry has never wanted a seat at the table It’s a curious time to be black on this side of the world. Even if your eyes are closed, it’s impossible not to feel the way we’re celebrating each other. It’s also impossible to ignore our demands. That we be respected, that we be let into spaces that previously took us for granted despite our impact or presence. Read: Why do I have to write about being black? Counter that with what almost feels like a racial civil war. Our demands haven’t been met without resistance, and that resistance is creating deep fractures in our society. Some will argue those fractures were always there, and they have been. But it’s also true that social media has made this division more contentious, or at least more public. In the midst of this, Perry has quietly been constructing his empire away from the noise of Hollywood. He’s stayed in Atlanta despite his commercial success in the film industry. The newly built Tyler Perry Studios is on over 300 acres of former slave ground in Georgia, an irony that shouldn’t be overlooked. Image by Clarke Sanders Perry has unapologetically catered his art specifically to black culture. “My audience and the stories that I tell are African-American, stories specific to a certain audience, specific to a certain group of people that I know, that I grew up with, and we speak a language. Hollywood doesn’t necessarily speak the language,” he said. “A lot of critics don’t speak that language. So, to them, it’s like, ‘What is this?’ “ That choice has made Perry an “other.” Someone whose acknowledgement outside of black culture doesn’t equal his achievements or influence within it. But that has never seemed to bother Perry. He came into the game on his own dime, so he’s always understood where he would stand. He would have to do it on his own, and so he has. Tyler Perry Studios needs to work It won’t be enough for this to be some kind of symbolic gesture. In twenty years, we can’t look back and ask, “whatever happened to Tyler Perry’s studio?” We need to make this work. Great movies must be born there. Iconic TV series must live in one of the twelve sound stages named after black people who’ve inspired Perry to control his own destiny. This is not a test and shouldn’t be treated as such. It’s a waving flag of victory for a battle we still need to win. Tyler Perry has taken a leap. He’s built a table we can call our own. We creators should all feel welcome to pull up chairs and begin sharing our stories.
https://medium.com/cry-mag/tyler-perry-doesnt-want-a-seat-at-the-table-cc96d39c8e73
['Kern Carter']
2019-10-10 15:46:03.117000+00:00
['Writing', 'BlackLivesMatter', 'Pop Culture', 'Creativity', 'Culture']
130 Banks Worth $47 Trillion Pledge to Move Away From Fossil Fuels
Protest outside St Kilda and Balaclava Commonwealth Bank branches in Melbourne, Australia on Thursday May 11, 2017. The Commonwealth Bank has some $20,590 million invested in fossil fuel projects. The Bank has been an aggressive investor in fossil fuel projects, investing $3886 million in 2016 alone, a ratio of 4.6 to one of fossil fuel investments versus investment in renewable energy, out of line with its commitment to sustainable investment under the Paris Agreement 2C temperature targets. (Photo: Takver, Flickr) “When the financial system shifts its capital away from resource-hungry, brown investments to those that back nature as solution, everybody wins in the long-term.” On the eve of the United Nations climate summit in New York on Monday, a major segment of the global banking industry pledged to adopt U.N. “responsible banking” principles in an effort to combat climate change. The banks’ commitment to the U.N. principles suggests the banking industry will pivot its loan and investment portfolios away from fossil fuels and towards greener business ventures. Deutsche Bank, Citigroup and Barclays were among the 130 banks to sign on to the U.N. pledge. Together the banks hold over $47 trillion in assets, representing approximately one-third of the global banking industry. The banking principles were developed by the U.N. Environment Programme and a core group of 30 banks from around the world with the goal of aligning signatories with the U.N. Sustainable Development Goals and the Paris Agreement on Climate Change. Six principles make up the responsible banking pledge: We will align our business strategy to be consistent with and contribute to individuals’ needs and society’s goals, as expressed in the Sustainable Development Goals, the Paris Climate Agreement and relevant national and regional frameworks. We will continuously increase our positive impacts while reducing the negative impacts on, and managing the risks to, people and environment resulting from our activities, products, and services. To this end, we will set and publish targets where we can have the most significant impacts. We will work responsibly with our clients and our customers to encourage sustainable practices and enable economic activities that create shared prosperity for current and future generations. We will proactively and responsibly consult, engage and partner with relevant stakeholders to achieve society’s goals. We will implement our commitment to these Principles through effective governance and a culture of responsible banking. We will periodically review our individual and collective implementation of these Principles and be transparent about and accountable for our positive and negative impacts and our contribution to society’s goals. On Sunday, U.N. Secretary-General António Guterres said at the launch event, attended by the 130 Founding Signatories and over 45 of their CEOs, that “the UN Principles for Responsible Banking are a guide for the global banking industry to respond to, drive and benefit from a sustainable development economy. The Principles create the accountability that can realize responsibility, and the ambition that can drive action.” The commitment to environmentally responsible banking principles by 130 banks drastically increases the pressure on fossil fuel companies, in an industry already facing a rising tide of calls for financial institutions and organizations to divest.
In late July the European Investment Bank, one of the world’s most powerful financial institutions, announced that it would divest from all fossil fuel projects by the end of next year. As Peter Castagno previously reported for Citizen Truth, the European Investment Bank’s decision comes as the latest victory in a movement that claims to have secured more than $8 trillion in divestment commitments from over 1,000 philanthropies, pension funds, universities and other institutions. Additionally, this summer Chubb became the first major U.S. insurance company to pledge to stop insuring and investing in coal, announcing that Chubb “will not underwrite new risks for companies that generate more than 30% of revenues from thermal coal mining … [and] will phase out coverage of existing risks that exceed this threshold by 2022.” So while the public sector in the U.S. and elsewhere hems and haws over the reality and seriousness of climate change, it is the often-maligned finance industry that may come to the rescue of environmentalists, as Inger Andersen, Executive Director of UNEP, illustrated on Sunday. “A banking industry that plans for the risks associated with climate change and other environmental challenges can not only drive the transition to low-carbon and climate-resilient economies, it can benefit from it. When the financial system shifts its capital away from resource-hungry, brown investments to those that back nature as solution, everybody wins in the long-term.” Read the original article and sign up for our newsletter at https://citizentruth.org
https://medium.com/citizen-truth/130-banks-worth-47-trillion-pledge-to-move-away-from-fossil-fuels-fd4e6cef22f0
['Citizen Truth Staff']
2019-09-25 21:47:21.243000+00:00
['Fossil Fuels', 'Fossil Fuel Industry', 'Global Warming', 'Big Oil', 'Climate Change']
To Pimp A Butterfly: The Benchmark of Rap Music
To Pimp A Butterfly isn’t just sonically impressive. At the time of its release, it was a much-needed glimmer of hope amongst poor, African American households and communities across the USA. Following the death of Trayvon Martin in 2012, the Black Lives Matter movement was begun to combat the recent acts of police brutality and racism against members of the black community. Despite the start of this political movement, it didn’t seem to get the message across, as 2 years later we mourned the deaths of Eric Garner, Michael Brown and Tamir Rice. All were innocent, and their killings were thought to be because they were of African American descent. When all hope seemed to be lost, the voice of Kendrick Lamar provided closure and reassurance to the black community. With fans immediately recognising the stories of Kendrick’s battles with racism, the album became a driving force behind the BLM movement. It re-ignited the much-needed fire of the movement and acted as a platform to recruit more members to fight against the system. Two tracks in particular were used often within BLM protests. The 7th track, ‘Alright’, is Kendrick’s way of providing assurance to those who are troubled: a reminder that ‘we gon’ be alright!’ despite the harshness of their reality and the hardships they encounter. This lyric was used as a chant at BLM protests. Whilst ‘Alright’ stands with the black community, ‘The Blacker the Berry’ takes an angry jab at his country for the wrongdoings it has committed, which ultimately put him in the place he is in today. But he also takes time during the conclusion of the track to study the irony of black-on-black violence, a topic that Kendrick has plenty of experience with. The lyric ‘So why did I weep when Trayvon Martin was in the street when gang-bangin’ make me kill a n*gga blacker than me? Hypocrite!’ highlights the frustration and confusion Kendrick faces when exploring his feelings towards this certain topic. As a result of him writing this, it spiralled into mass debate online, and is used as an argument by those who criticise BLM.
https://medium.com/@owenbinns/to-pimp-a-butterfly-the-benchmark-of-rap-music-868b833fb837
['Owen Binns']
2020-12-15 11:03:35.643000+00:00
['Music', 'Culture', 'Kendrick Lamar', 'To Pimp A Butterfly', 'Hip Hop']
Trump has enablers on both sides of the aisle.
Trump has enablers on both sides of the aisle. Let me be blunt: not a single elected official did anything aside from using a few choice words to basically distance themselves from him. Nobody was willing to risk their career in order to defend the Constitution. Not one. Unfortunately, we now know how vulnerable we are as a nation. Once the Republicans figure out the general lessons learned, we can count on a replay.
https://medium.com/@throtol/trump-has-enablers-on-both-sides-of-aisle-f9a35567ed7d
['Marc Gordon']
2020-12-21 01:19:34.666000+00:00
['Trump Administration', '2020 Presidential Race', 'Trump', 'Politics']
Why Remote Learning Was a Big Hot Mess
This September, we started the 2020–21 school year fully remote, and it only took three weeks for our family to break down and send the kids back to school on a hybrid schedule. I served on the re-opening committees at school, made suggestions, asked questions, and offered to help — but here we are — worse off. Being an involved parent is an understatement in 2020. Aside from helping plan for the re-opening, from brainstorming social-emotional learning ideas to inclusion, keeping on top of the kids each day while working remotely, and explaining to our teenager how to log into Zoom securely, nothing could have prepared us for this year. And I’m left wondering — why are our kids held to impossible standards during a worldwide pandemic? It started on the first day of school when our son’s teacher posted a Zoom link to Google Classroom. He introduced himself while looking at the kids who attended in person, read them the book, wrote on the board while we watched, like peeping toms through a keyhole of sorts (Zoom). He muted his mic and then walked away and didn’t return. The next day, the Google Classroom remained unchanged, with no new links, no classwork, nothing. The same happened on Monday. Photo Credit: Canva.com In another class, his teacher hit the mute button while talking and continued to talk for over twenty-five minutes while a dozen students sat there staring patiently at the screen. That same teacher has given him zeros for not completing work with vague instructions while he has no access to the textbook online. And I’m not pointing the finger — I’m saying that we are all stuck in miscommunication limbo waiting for this cycle to end. But it isn’t going to end soon. Now that the floodgates of remote possibilities have opened, remote learning in some capacity is a permanent part of our lives. A few mismarked absences and zeros later, our son fell apart. He fell apart before me. He said he felt invisible after more e-mails about missed Zoom calls and classwork came. Eager to help him, I logged into his classes and saw for myself — vague assignment instructions and Zoom links buried under the announcements stream in unmanaged Google Classrooms. Teachers responded to his private comments with e-mails that went unread. It was like going on a scavenger hunt for information, and none of us wanted to play anymore. Remote-only learning is a big hot mess because it requires parents and caregivers to be more present than any of us know how to be, or have time to be. We’re also asking teachers to deliver instruction to separate populations of students with completely different needs at the same time. On top of everything, we are asking kids to act like adults. Our teenager has been silently struggling, not asking for help, upset about miscommunications with teachers he has never met in a school he has never actually stepped foot in, while accumulating absences in the empty shells of his virtual classrooms. And it is all his fault, allegedly. Why is it assumed that teenagers are mature enough to manage their own remote learning? Why are they left to fend for themselves in this sink-or-swim environment? This isn’t a job and they aren’t at work. But their parents probably are. Not every student has someone at home to help them during the day. What they need are clear instructions for everything from logging in to meetings to assignment prompts. They need broken-down grading rubrics and reminders. They need grace periods and brain breaks. They need compassion.
https://medium.com/age-of-awareness/why-remote-learning-was-a-big-hot-mess-12fd89f6b161
['Laura J. Murphy']
2020-10-27 00:44:32.223000+00:00
['Education', 'Remote Learning', 'Mental Health', 'Parenting', 'Education Reform']
PerlinX V.2 Launch & Partnership with UMA Protocol
PerlinX is the 1st implementation of synthetic asset minting based on UMA Protocol Earn DOUBLE rewards for minting PxUSD using PERL as collateral & staking in the new PERL<>PxUSD pool We’re honoured and excited to be an important part of UMA’s ecosystem and to progress Perlin’s path to bridging legacy financial systems and products to DeFi — democratising access to markets and the decentralised trading of real-world assets for all. “Perlin really dug deep into the UMA protocol to design synthetic assets that leverage the priceless contract design. Theirs is the first UI of this caliber, and I’m excited to see what products they can bring to market.” Hart Lambur — UMA Co-founder TLDR — What Does PerlinX V.2 Do? PERL is now a collateral asset on UMA Protocol, allowing users to mint and farm synthetic token assets of any kind (PxAssets). The 1st PxAsset you can mint is PxUSD, which is a ‘yield dollar’ pegged to USD and minted with PERL as collateral (a more detailed post on PxUSD will be published soon). DOUBLE REWARDS for minting PxUSD and staking in the new PERL<>PxUSD liquidity pool. Weekly rewards increased by 20% (from 1mil PERL/week to 1.2mil PERL/week). You can start minting PxUSD today using 100% PERLs and no other assets. TRIPLE REWARDS will soon be available as part of a joint liquidity program in PERL, BAL (now available) and UMA tokens (coming soon). You can now earn DOUBLE PERL rewards for minting PxUSD and staking in the PERL<>PxUSD liquidity pool What are PxAsset Tokens? Synthetic PxAsset tokens allow people to take a position on the price of any asset without holding the actual asset. In partnership with UMA Protocol, PerlinX V.2 allows users to create PxAssets that are securely collateralized without an on-chain price feed (i.e. “priceless” assets). These PxAssets are designed with mechanisms to incentivize token sponsors (those who create the synths) to properly collateralize their positions with PERL, and they blend features of prediction markets, futures markets, and collateralized loans. Our PxUSD is the 1st synth users can mint on PerlinX today. Why Create Synthetic PxAssets on PerlinX? A few key reasons you will want to create synthetic PxAssets to hold or trade: Earn DOUBLE incentive rewards in PERL when you mint PxUSD and stake it in the new PERL<>PxUSD pool. Mint any synthetic PxAsset you want to trade on the price of any real-world asset without needing to hold the underlying asset itself. Farm PERL, BAL (now available) and UMA (coming soon!) for minting synthetic PxAssets and providing liquidity on PerlinX. Take long or short positions on the underlying asset, depending on where you think the asset price is going. This allows you to participate in the unlimited potential for price movements and volatility. Short sell assets that don’t have an easily accessible derivatives market and assist in price discovery. If you believe the price of the asset will go down, you can short sell the asset (which creates more ‘temporary supply’) and buy back later. How Do I Create Synthetic PxAssets on PerlinX? PERL tokens are required as collateral to create synthetic tokens on PerlinX.
Below are a few simplified diagrams of how PxAssets are minted, maintained and settled on PerlinX. For more detailed instructions, see our Gitbook User Guide HERE.

Diagram captions: the PxAsset and PxUSD minting process; users need to maintain the required collateral amount to avoid being liquidated; based on UMA, PxAssets will expire and settle each month.

Other Synthetic PxAssets Coming Soon

PxUSD is just the first of many synthetic assets you will be able to mint using PerlinX. We are actively exploring other PxAssets to onboard soon. Please let us know what other PxAssets you want us to list (and why) in our Discord community discussion channel HERE.

USEFUL LINKS

PerlinX & UMA Synthetic PxAsset Minting
Part 1: Introducing PxUSD & what you can do with it
Part 2: All the Calculations & Values You Need to Know
PerlinX Synthetic Asset Guide: https://docs.perlinx.finance/perlin-community/perlinx-synthetic-asset-guide
Start Staking now: http://app.perlinx.finance
Community APY dashboard: bit.ly/PerlinX

Warning

This post is not investment advice. As with exposure to all assets, there are risks involved in trading synthetic tokens, which you need to assess for yourself before participating. Minting synthetic PxAssets using PERL means that you will be exposed to the price volatility of both PERL and the PxAsset (both upside and downside!).

Stay tuned for more updates and announcements on our channels: Twitter | Discord | Telegram Announcements | Telegram Discussion
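To make the collateral mechanics above concrete, here is a minimal, hypothetical sketch of checking a sponsor position's collateralization off-chain. The 130% threshold and the PERL price used are illustrative assumptions only, not actual PerlinX or UMA parameters:

# Hypothetical sketch: checking a synth position's collateralization.
# The threshold and prices are illustrative assumptions, not protocol values.

def collateral_ratio(perl_collateral, perl_price_usd, pxusd_minted):
    # value of locked PERL divided by value of minted PxUSD (pegged to $1)
    return (perl_collateral * perl_price_usd) / pxusd_minted

MIN_RATIO = 1.30  # assumed liquidation threshold, for illustration only

ratio = collateral_ratio(perl_collateral=50_000, perl_price_usd=0.04, pxusd_minted=1_000)
if ratio < MIN_RATIO:
    print(f"Ratio {ratio:.2f} below {MIN_RATIO}: add collateral or risk liquidation")
else:
    print(f"Ratio {ratio:.2f} is safely above the threshold")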
https://medium.com/perlin-network/perlinx-v-2-launch-partnership-with-uma-protocol-bc1b132900c7
['Darren Toh']
2020-10-17 15:54:47.827000+00:00
['Defi', 'Trade', 'Yield Farming', 'Derivatives', 'Blockchain']
3 Reasons Why ‘Play’ Is Not a Luxury
1. Play Strengthens Survival Skills

The first reason why we play is that it strengthens our ability to survive. Karl Groos was a leading pioneer in applying evolutionary theory to the study of play and contended that "higher beings", or mammals, needed to practice behaviors and exercise their bodies in order to become competent adults. For example, his theory could explain why Alaskan bear cubs that play more are also more likely to survive the winter, as was found in a study by Fegan and Fegan (2009). This research suggested that the cubs were able to develop the physical and emotional skills critical to survival. Groos's theory also explains why species that rely more on learning than instinct play more than instinct-dependent species. He also wrote that humans practice skills specific to our species; for example, many of our children's games exercise our mental and language capacities. Personally, as a kid, I remember constantly wanting to play a memory card game, which involved flipping cards laid out in a grid and remembering their matches faster than my competitor (usually my grandma). This was from the time I was five or six. In fact, many of my favorite "active" childhood games involved some sort of mental strategy (e.g., Capture the Flag, Hide and Go Seek, Marco Polo, etc.). Humans, according to Groos, use play to practice the skills we need later in life, and that includes having a robust mental capacity. After reading about Karl Groos's theory, I now wonder if reading fiction also fits within it. For both children and adults, reading fiction might be a kind of play that exercises empathy and emotional capacity, which are critical to thriving as humans. For example, a squirrel doesn't need to understand how another squirrel might react based on his complex emotional state, but humans sure do. As Groos said, we rely on learning and processing more than instinct, and that learning is done through our real as well as imaginative experiences.
https://medium.com/curious/3-reasons-why-play-is-not-a-luxury-f2400e95a53f
['Katie Martin']
2020-11-03 13:26:18.624000+00:00
['Mindfulness', 'Humanity', 'Innovation', 'Wellness', 'Excercise']
Gamification and the Dual Loop Framework in 2 Minutes
Let's begin with a quote by Jane McGonigal, the American designer and author: "Games give us unnecessary obstacles that we volunteer to tackle." We are inclined to do this, maybe because it is fun. But it makes us think. Despite being efficient and usable, we are creating very serious user interfaces. Maybe we have to dial it down a bit. Maybe. That is when we started looking into games, to absorb the fun elements. In other words, we started sneaking gameplay elements into our user interfaces to liven them up.

Gamification

Gamification is the insertion of gameplay elements in non-gaming settings to enhance user engagement with a product or service. It is a loop consisting of 4 elements: Goals, Rules, Feedback or Rewards, and Motivation. Motivation is the critical factor that drives user engagement. So, let's look at a simple loop. We have a goal, we get a reward for reaching it, we make an investment, and we go again; there is some motivation to take on the next task. In a game, the action might be to fight a monster; we get a reward for killing the beast, level up, invest in building a better character, and go again for the next fight. The investment is critical.

The Problem

The problem is that this simple loop can get tedious and lackluster in the long run, because as we move along, the tasks and goals become bigger and more challenging, we put more investment into them, and this can tire users a bit.

The Dual Loop Framework

This is where a dual loop framework comes in. It helps because we explode the simple loop into inner and outer loops: in the inner loop we have minor tasks or actions with small rewards, and in the outer loop we have significant actions with significant rewards. Either way, you are encouraged to stay on the front foot and do the task, so it never feels disappointing.

Google Maps

Now let's look at companies that have invested in dual-loop frameworks. Waze can be considered one; the other is Google Maps. Both are map-oriented products. For a better perspective, let's look into what Google Maps is doing. Here, the inner loop is a basic rating system, where you rate a service; the more significant tasks are writing a review of a service or taking a photograph. All these things reward you with points, and the accumulation of those points gives you more important things, like a freebie that Google cares to give away. Who knows. Beyond all this, Google's driving factor, and its critical element of motivation, is building a better Google Maps community.
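As a rough sketch of the structure just described (the action names and point values here are invented for illustration, not taken from any real product):

# Hypothetical sketch of a dual-loop reward model; the action names
# and point values are invented for illustration.
INNER_LOOP = {"rate_place": 1, "answer_question": 2}   # small, frequent actions
OUTER_LOOP = {"write_review": 10, "add_photo": 5}      # bigger, rarer actions

def award_points(actions):
    """Sum the rewards earned across both loops for a session's actions."""
    table = {**INNER_LOOP, **OUTER_LOOP}
    return sum(table.get(action, 0) for action in actions)

session = ["rate_place", "rate_place", "write_review"]
print(award_points(session))  # 12 -> accumulated points feed the outer-loop reward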
https://uxplanet.org/gamification-and-the-dual-loop-famework-in-2-minutes-6f1a4f855f16
[]
2020-12-23 21:47:55.462000+00:00
['Gamification', 'UX', 'User Research', 'User Experience Design', 'UX Design']
The Forty Horsemen of Turkish History: Scholars and Artists (Türk Tarihinin Kırk Atlısı: Bilginler ve Sanatçılar)
https://medium.com/yolcumisali/t%C3%BCrk-tarihinin-k%C4%B1rk-atl%C4%B1s%C4%B1-bi%CC%87lgi%CC%87nler-ve-sanat%C3%A7ilar-a6eaf4045cc7
['Ahmet Çadırcı']
2018-06-01 12:19:31.614000+00:00
['Science', 'Scholars and Artists', 'Ottoman History', 'Politics', 'History']
Why metadata matters
Why metadata matters
Addressing science's missing data problem by putting metadata on the blockchain

Photo by Tobias Fischer on Unsplash

When we think about the key features of blockchain — its decentralised, immutable, and universally accessible nature — it's easy to see why people are excited about the potential for improving science. Blockchain technology is evolving rapidly, unlocking new opportunities for science and scientists. At Frankl, we're excited about the possibilities for the not too distant future. But our immediate concern is to provide pragmatic solutions that can be implemented today. Our first step towards a blockchainified science involves metadata. By that we mean data about the data (when it was collected, what software was used, where it's stored and so on) but not the actual data itself. It sounds simple (and possibly quite boring). But putting metadata on the blockchain is ultimately a very powerful remedy to one of science's biggest and oldest problems.

Where has all the data gone?

When we look at problems affecting science today, one of the most pressing issues is disappearing data. Scientific data can go "absent without leave" for a number of different reasons:

Scientists don't archive their data properly and they lose track of it, can't make sense of it, or their hard-drive dies and they don't have a back-up. This happens surprisingly (and embarrassingly) often.
Scientists begin a study but abandon it before it is completed due to lack of funds, unpromising preliminary results, or other priorities. The data might be useful in combination with data from other studies, but it's not publishable on its own.
Scientists selectively publish data that supports a particular theory. Inconvenient data are quietly forgotten.
Scientists try to publish data but are unsuccessful because the results aren't considered interesting enough by the scientific journals. Knowing how difficult it will be to publish a null result, scientists prioritise writing up studies that gave them more publishable results.

The end result is what's become known as the "file drawer problem". The published scientific literature represents only a small and biased sample of the research that has actually been conducted. The rest is stuffed away at the back of the metaphorical filing cabinet. There's a lot of wasted effort here — data collected and then not used. But the bigger problem is the bias in what is published.

Publication bias in action

An example of this "publication bias" comes from my own research on autism. In 2011, my colleagues and I published a study in which we gave a group of university students a simple "visual search" task that involved locating small objects in a complex display. We found that students with high levels of autism-like personality traits performed better on the task than those with fewer such traits. It was an interesting finding because many studies have shown that people with an actual clinical diagnosis of autism also perform very well on such tasks. So our results were consistent with the idea that autism lies on a continuum that fades into the general population. Another group in Western Australia reported similar findings around the same time, so we felt confident that this was a real effect. But a few years later, I was asked to peer review a study by researchers at Cambridge University. They had followed our methods exactly but with a much bigger sample — and found absolutely no effect.
At this point, I started asking other researchers in the field if they had looked at the same question. It turned out that several labs had done so, but hadn't published their results because they didn't find an effect.

Redundant research

This is just one example. But publication bias is a major problem throughout science. If we want to know whether an effect is real or not — whether that effect involves correlates of autistic traits or the efficacy of the latest cancer drug — we need to see the data in its messy totality. We don't just want the data that tells a good story. Scientists waste a huge amount of time and money on research that is effectively redundant — because the research has already been done but was considered unpublishable, or because it's attempting to build on research that itself doesn't stand up to scrutiny. In a 2016 survey of over 1500 scientists, published in Nature, "Selective Reporting" was rated as the most important cause of irreproducible results. The first step is that we need to know about all the data. And this is where blockchain can play a really important role.

Metadata on the blockchain

At Frankl, we're building apps that facilitate open science by design. Data collected using Frankl apps will be archived on the fly (no more relying on dodgy hard-drives). Sharing data then becomes a simple exercise in changing the access privileges. Where and in what form that data is archived will depend on a number of factors including the size of the data files, its private or public nature, as well as any ethical and legal considerations such as GDPR compliance. This will naturally vary from project to project and will evolve with technology. But whatever happens to the actual data, the default setting will be for the metadata to be written to the blockchain. Whenever a Frankl app is used to collect data, a smart contract within the application will write a short message to the blockchain. This could include:

The time and date of data collection
The Frankl ID of the researcher
The identity and version number of the Frankl application used to collect the data
A pointer to the archived data (e.g., the URL of the repository)
The hash of the actual data

[The hash is a short character string derived from the data. You can't recover the data from the hash. But if you have the data, you can match it to the hash in the metadata and confirm that it hasn't been altered.]

Why metadata matters

Writing metadata to the blockchain is a simple and easy first step. It depends only on existing technology (there's no reliance on tech that is still in beta version). The metadata itself is small, so the cost (i.e., 'gas') involved in writing to the blockchain is also small. And because there's absolutely nothing that would allow anyone to identify the subjects of the research, there should be no ethical or privacy issues to contend with. Yet even this small step is incredibly powerful. Because the blockchain is public and immutable, there's no pretending that inconvenient data don't exist. If scientists exclude data from their reported analyses, they have to justify it. Public, immutable metadata helps prevent the actual data disappearing. It makes it easier to conduct meta-analyses that combine the results of different studies looking at the same effect (which can now include studies that haven't been published and perhaps never will). It also provides a mechanism by which scientists can demonstrate their contribution to a collaborative project.
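As a minimal sketch of the idea (not Frankl's actual implementation; the field names and file path are illustrative assumptions), this is how such a record might be assembled before being written to the chain:

# Minimal sketch (not Frankl's actual implementation): building a metadata
# record whose hash commits to a data file without revealing its contents.
import hashlib
import json
from datetime import datetime, timezone

def build_metadata(data_path, researcher_id, app_version, archive_url):
    with open(data_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "researcher_id": researcher_id,   # illustrative field names
        "app_version": app_version,
        "archive_url": archive_url,
        "data_hash": data_hash,           # anyone holding the data can verify it
    }

# hypothetical file and identifiers, for illustration only
record = build_metadata("trial_042.csv", "frankl:abc123",
                        "cognition-app/1.2.0", "https://example.org/repo/trial_042")
print(json.dumps(record, indent=2))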
The beauty of metadata is not just its simplicity but its generality. We can employ the same structure to the metadata regardless of how large the related data file is, how it’s organised, where it’s stored, or even what scientific field it pertains to. Metadata on the blockchain can be the thread that links all Frankl projects, the foundation on which all future developments build. It’s the first small but important step towards a blockchainified science — and away from science as normal.
https://towardsdatascience.com/why-metadata-matters-ab7253ea35c7
['Jon Brock']
2018-07-21 23:27:22.056000+00:00
['Open Science', 'Data Science', 'Science', 'Blockchain', 'Metadata']
Using python collecting real-world data by web scraping real estate website and doing data wrangling.
In any data science project one of the most asked questions is how to get the data and where the data is. I would say there is plenty of data around you, you just need to extract it. For example, on the internet there are millions of petabytes of data available and most of it is free. All you need to know is how to extract it and make it useful for your organisation. I would say any type of organisation can make use of the free data available on the internet for their business gains, and they can use web scraping to extract it. To demonstrate web scraping, in this article I will be scraping data from domain.com, a real estate website. I will be scraping the price, number of bedrooms, number of bathrooms, number of parking spots, address and location (latitude and longitude) of each house in Melbourne, Australia.

Before diving into Python programming you need to know some basics about HTML. All web pages are written in HTML (Hyper Text Markup Language).

HTML is the standard markup language for creating web pages
HTML describes the structure of a web page
HTML elements tell the browser how to display the content
HTML elements label pieces of content such as "this is a heading", "this is a paragraph", "this is a link", etc.

A simple HTML document looks like this:

<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a Heading</h1>
<p>This is a paragraph.</p>
</body>
</html>

Where,

The <!DOCTYPE html> declaration defines that this document is an HTML5 document
The <html> element is the root element of an HTML page
The <head> element contains meta information about the HTML page
The <title> element specifies a title for the HTML page (which is shown in the browser's title bar or in the page's tab)
The <body> element defines the document's body, and is a container for all the visible contents, such as headings, paragraphs, images, hyperlinks, tables, lists, etc.
The <h1> element defines a large heading
The <p> element defines a paragraph

You can get this HTML document of any website by right-clicking on a webpage and then selecting "View page source" (available in Microsoft Edge and Google Chrome). All the content on the webpage will be inside this HTML document in a well-structured format; all you need to do is extract the required data from it.

1. Data Collection

There are various libraries available in Python to get this HTML document and parse it into the required format.

# sample code to get an HTML document and parse it into the format you want
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("https://www.domain.com.au/sale/melbourne-region-vic/")
bsobj = BeautifulSoup(html, "lxml")

In the above code, urlopen extracts the HTML document of the given web page and BeautifulSoup parses it into lxml format. The lxml format is easy to understand; you can use another format if you want, such as json etc.
Search result for Melbourne houses on Domain.com (screenshot)

The above screenshot shows the result of the "https://www.domain.com.au/sale/melbourne-region-vic/" URL, listing all the properties available for sale in Melbourne. However, we need the webpage of each individual Melbourne property shown on this search result page. We can get them by extracting all the property URLs on the page and storing them in a list. One more thing to add: there are 50 pages of search results for Melbourne houses on Domain.com, and this is only the 1st page, so we need to go through all 50 pages and extract the URLs of every advertised house in Melbourne. We apply a loop of 50 iterations, one per page.

from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

# home url of domain.com australia
home_url = "https://www.domain.com.au"

# the search result has 50 pages, so we iterate over them
page_numbers = list(range(50))[1:50]

# list to store all the urls of properties
list_of_links = []

# loop over all 50 search (melbourne region) pages
for page in page_numbers:
    # extracting the html document of the search page
    html = urlopen(home_url + "/sale/melbourne-region-vic/?sort=price-desc&page=" + str(page))
    # parsing the html document to 'lxml' format
    bsobj = BeautifulSoup(html, "lxml")
    # finding all the links in the 'ul' tag whose 'data-testid' is 'results'
    all_links = bsobj.find("ul", {"data-testid": "results"}).findAll("a", href=re.compile("https://www.domain.com.au/*"))
    # inner loop to find links inside each property page, because a few
    # "properties" are projects that list more properties on their project page
    for link1 in all_links:
        # if it is a project, open its page and collect the nested listings
        if 'project' in link1.attrs['href']:
            inner1_html = urlopen(link1.attrs['href'])
            inner1_bsobj = BeautifulSoup(inner1_html, "lxml")
            for link2 in inner1_bsobj.find("div", {"name": "listing-details__other-listings"}).findAll("a", href=re.compile("https://www.domain.com.au/*")):
                if 'href' in link2.attrs:
                    list_of_links.append(link2.attrs['href'])
        else:
            list_of_links.append(link1.attrs['href'])

You can simply copy and paste the above code, make some modifications according to your needs, and try to run it. You can download my Jupyter notebook here (github).
This Jupyter notebook contains a full, detailed description with the complete Python code. Above I did a few different things:

I used the search page sorted by price. I did this so that it will be easier to impute the missing prices of houses; I will explain this further in the data wrangling part below.
The inner loop is used because a few properties are not just properties, they are projects, and each project has more property URL links inside its page.

Now we have all the URLs of the properties in Melbourne, Australia; each URL is unique to one property. Our next step is to go inside each URL and extract the price, number of bedrooms, number of bathrooms, number of parking spots, address and location (latitude and longitude).

# removing duplicate links while maintaining the order of urls
abc_links = []
for i in list_of_links:
    if i not in abc_links:
        abc_links.append(i)

# defining the regular expressions required for data extraction
pattern = re.compile(r'>(.+)<!.*>(.+?)</span>.*')
pattern1 = re.compile(r'>(.+)<.')
pattern2 = re.compile(r'destination=(.+)" rel=.')

basic_feature_list = []

# loop to iterate through each url
for link in abc_links:
    # opening the url
    html = urlopen(link)
    # converting the html document to 'lxml' format
    bsobj = BeautifulSoup(html, "lxml")
    # extracting the address/name of the property
    property_name = bsobj.find("h1", {"class": "css-164r41r"})
    # extracting baths, rooms, parking etc.
    all_basic_features = bsobj.find("div", {"class": "listing-details__listing-summary-features css-er59q5"}).findAll("span", {"data-testid": "property-features-text-container"})
    # extracting the property price
    property_price = bsobj.find("div", {"data-testid": "listing-details__summary-title"})
    # extracting latitude and longitude
    lat_long = bsobj.find("a", {"target": "_blank", 'rel': "noopener noreferer"})

    # dictionary to store temporary data
    basic_feature_dict = {}

    # a few properties do not contain all 4 features (rooms, baths, parking,
    # area), so we need to check how many features each one contains
    if len(all_basic_features) == 4:
        basic_feature_dict[pattern.findall(str(all_basic_features[0]))[0][1]] = pattern.findall(str(all_basic_features[0]))[0][0]
        basic_feature_dict[pattern.findall(str(all_basic_features[1]))[0][1]] = pattern.findall(str(all_basic_features[1]))[0][0]
        basic_feature_dict[pattern.findall(str(all_basic_features[2]))[0][1]] = pattern.findall(str(all_basic_features[2]))[0][0]
        basic_feature_dict[pattern.findall(str(all_basic_features[3]))[0][1]] = pattern.findall(str(all_basic_features[3]))[0][0]
    elif len(all_basic_features) == 3:
        basic_feature_dict[pattern.findall(str(all_basic_features[0]))[0][1]] = pattern.findall(str(all_basic_features[0]))[0][0]
        basic_feature_dict[pattern.findall(str(all_basic_features[1]))[0][1]] = pattern.findall(str(all_basic_features[1]))[0][0]
        basic_feature_dict[pattern.findall(str(all_basic_features[2]))[0][1]] = pattern.findall(str(all_basic_features[2]))[0][0]
    elif len(all_basic_features) == 2:
        basic_feature_dict[pattern.findall(str(all_basic_features[0]))[0][1]] = pattern.findall(str(all_basic_features[0]))[0][0]
        basic_feature_dict[pattern.findall(str(all_basic_features[1]))[0][1]] = pattern.findall(str(all_basic_features[1]))[0][0]
    elif len(all_basic_features) == 1:
        basic_feature_dict[pattern.findall(str(all_basic_features[0]))[0][1]] = pattern.findall(str(all_basic_features[0]))[0][0]

    # putting None if the price is missing
    if property_price is None:
        basic_feature_dict['price'] = None
    else:
        basic_feature_dict['price'] = pattern1.findall(str(property_price))[0]

    # putting None if the property name/address is missing
    if property_name is None:
        basic_feature_dict['name'] = None
    else:
        basic_feature_dict['name'] = pattern1.findall(str(property_name))[0]

    # putting None if latitude and longitude are missing
    if lat_long is None:
        basic_feature_dict['lat'] = None
        basic_feature_dict['long'] = None
    else:
        basic_feature_dict['lat'] = pattern2.findall(str(lat_long))[0].split(',')[0]
        basic_feature_dict['long'] = pattern2.findall(str(lat_long))[0].split(',')[1]

    # appending all the data to the list
    basic_feature_list.append(basic_feature_dict)

The output of the above code gives us a list of dictionaries with all the available extracted data. In the code below we convert it into individual lists, because we still have a little more cleaning and extraction to do on the extracted data and it is easier to do with lists.

# creating empty lists
beds_list = []
baths_list = []
parking_list = []
area_list = []
name_list = []
lat_list = []
long_list = []
price_list = []

# iterating through the list of extracted data created above
for row in basic_feature_list:
    # checking if the row contains 'Beds', 'Bed' or nothing
    if 'Beds' in row:
        beds_list.append(row['Beds'])
    elif 'Bed' in row:
        beds_list.append(row['Bed'])
    else:
        beds_list.append(None)

    # checking if the row contains 'Baths', 'Bath' or nothing
    if 'Baths' in row:
        baths_list.append(row['Baths'])
    elif 'Bath' in row:
        baths_list.append(row['Bath'])
    else:
        baths_list.append(None)

    # checking if the row contains 'Parking', '−' or nothing
    if 'Parking' in row and row['Parking'] != '−':
        parking_list.append(row['Parking'])
    else:
        parking_list.append(None)

    # checking if the row contains ' ' or nothing; an empty-space key represents area
    if ' ' in row:
        area_list.append(row[' '])
    else:
        area_list.append(None)

    # checking if the row contains 'name', i.e. the address of the property
    if 'name' in row:
        name_list.append(row['name'])
    else:
        name_list.append(None)

    # checking if the row contains 'price'
    if 'price' in row:
        price_list.append(row['price'])
    else:
        price_list.append(None)

    # checking if the row contains 'lat', i.e. the latitude of the property
    if 'lat' in row:
        lat_list.append(row['lat'])
    else:
        lat_list.append(None)

    # checking if the row contains 'long', i.e. the longitude of the property
    if 'long' in row:
        long_list.append(row['long'])
    else:
        long_list.append(None)

Now we have all the data in list format.

2. Data Wrangling

Some people do not want to show the price of their property, so they do not put a price in the advertisement. Sometimes the price column is left empty, and sometimes it contains something like 'contact dealer' or 'price after inspection'. Also, some people do not give the price directly: they give a range of prices, or the price with some extra text before or after it, or both. We need to handle all these situations, extract only the price, and put None where no price is given. The code below does exactly that.

import random

# creating a new empty price list
actual_price_list = []

# defining the regular expressions used to extract the price of properties
pattern1 = re.compile(r'\$\s?([0-9,\.]+).*\s?.+\s?\$\s?([0-9,\.]+)')
pattern2 = re.compile(r'\$([0-9,\.]+)')

# iterating through price_list
for i in range(len(price_list)):
    # check whether a single price or a range of prices is given
    if str(price_list[i]).count('$') == 1:
        b_num = pattern2.findall(str(price_list[i]))
        # if the string is longer than 5 characters the price is quoted in
        # full, otherwise it is quoted in millions and needs converting
        if len(b_num[0].replace(',', '')) > 5:
            actual_price_list.append(float(b_num[0].replace(',', '')))
        else:
            actual_price_list.append(float(b_num[0].replace(',', ''))*1000000)
    elif str(price_list[i]).count('$') == 2:
        a_num = pattern1.findall(str(price_list[i]))
        random_error = random.randint(0, 10000)
        # same length check as above to detect prices quoted in millions
        if len(a_num[0][0].replace(',', '')) > 5 and len(a_num[0][1].replace(',', '')) > 5:
            # take the average of the two prices in the given range
            avg_price = (float(a_num[0][0].replace(',', '')) + float(a_num[0][1].replace(',', '')))/2
        else:
            avg_price = ((float(a_num[0][0].replace(',', '')) + float(a_num[0][1].replace(',', '')))/2)*1000000
        # perturb the average price by the generated random number
        avg_price = avg_price + random_error
        actual_price_list.append(avg_price)
    else:
        actual_price_list.append('middle_price')

There are a lot of missing values in price, because many people do not want to show their house price on the website. Now we need to impute the missing prices, and I came up with a trick. The trick is that we sort the houses by their price; then all the houses, with and without a shown price, are in price order. The website sorts by the price the owners gave to the website, even though it is not shown to users. This is why we extracted the houses' data while the website results were sorted by price.
Let us understand it using an example. Suppose there are 10 houses and the prices of two of them are missing, but we can sort the houses according to their price. First we sort them; then we see that the prices of house 4 and house 5 are missing, so we take the mean of the prices of house 3 and house 6 and impute the missing prices with that mean value. The code below does a similar kind of thing.

# loop to impute missing values at the start of the list,
# where we cannot take a mean
for i in range(len(actual_price_list)):
    if actual_price_list[i] != 'middle_price':
        for a in range(i, -1, -1):
            actual_price_list[a] = actual_price_list[i]
        break

# here we take the mean of the surrounding known prices, add a random
# perturbation, and impute the missing entries with the result
for i in range(len(actual_price_list)):
    if actual_price_list[i] == 'middle_price':
        for j in range(i, len(actual_price_list)):
            if actual_price_list[j] != 'middle_price':
                mid = (actual_price_list[i-1] + actual_price_list[j])/2
                if actual_price_list[j] > 12000000:
                    for k in range(i, j):
                        random_error = random.randint(-1000000, 1000000)
                        mid = mid + random_error
                        actual_price_list[k] = mid
                    i = j
                    break
                elif actual_price_list[j] > 5000000:
                    for k in range(i, j):
                        random_error = random.randint(-100000, 100000)
                        mid = mid + random_error
                        actual_price_list[k] = mid
                    i = j
                    break
                else:
                    for k in range(i, j):
                        random_error = random.randint(-10000, 10000)
                        mid = mid + random_error
                        actual_price_list[k] = mid
                    i = j
                    break
            elif j == len(actual_price_list)-1:
                for n in range(i, len(actual_price_list)):
                    random_error = random.randint(-1000, 1000)
                    a_price = actual_price_list[i-1]
                    a_price = a_price + random_error
                    actual_price_list[n] = a_price
                break

Creating the DataFrame:

import pandas as pd

house_dict = {}
house_dict['Beds'] = beds_list
house_dict['Baths'] = baths_list
house_dict['Parking'] = parking_list
house_dict['Area'] = area_list
house_dict['Address'] = name_list
house_dict['Latitude'] = lat_list
house_dict['Longitude'] = long_list
house_dict['Price'] = actual_price_list

house_df = pd.DataFrame(house_dict)
house_df.info()

The 'Area' column has many null values which cannot be imputed, so we will delete the 'Area' column.

house_df.drop('Area', axis=1, inplace=True)

Also convert the beds, baths and parking columns from string type to numeric type.

house_df["Beds"] = pd.to_numeric(house_df["Beds"])
house_df["Baths"] = pd.to_numeric(house_df["Baths"])
house_df["Parking"] = pd.to_numeric(house_df["Parking"])

Now perform some exploratory data analysis to find problems in the data and then solve those issues. For example, use a scatter plot to check for outliers in the data, or use a histogram to see the distribution of the data.

# scatter plot
house_df.plot.scatter(x='Beds', y='Baths')

# histogram
house_df["Price"].plot.hist(bins=50)

Data cleansing is an iterative process. The first step of the cleansing process is data auditing. In this step, we identify the types of anomalies that reduce the data quality. Data auditing means programmatically checking the data against some pre-specified validation rules and then creating a report on the quality of the data and its problems. We often apply some statistical tests in this step to examine the data. Data anomalies can be classified at a high level into three categories:

Syntactic Anomalies: describe characteristics concerning the format and values used for the representation of the entities. Syntactic anomalies include lexical errors, domain format errors, syntactical errors and irregularities.
Semantic Anomalies: hinder the data collection from being a comprehensive and non-redundant representation of the mini-world. These types of anomalies include integrity constraint violations, contradictions, duplicates and invalid tuples.
Coverage Anomalies: decrease the number of entities and entity properties from the mini-world that are represented in the data collection. Coverage anomalies are categorized as missing values and missing tuples.

There are many ways to handle these anomalies. I will not go into detail on how to handle them because our extracted data does not have these anomalies. Data can also be transformed according to need. This problem of predicting house prices is a regression problem; if it is treated as a linear regression problem, we can perform some transformations to make the data satisfy the linear regression assumptions. We can also add or create new features using existing features in the data set to make the data more enriched. I am creating a new column which will contain the distance of each house from the city (Flinders Street Station, Melbourne).

Missing values

Sources of missing values:

Data Extraction: It is possible that there are problems with the extraction process. In such cases, we should double-check for correct data with the data guardians. Some hashing procedures can also be used to make sure that the data extraction is correct. Errors at the data extraction stage are typically easy to find and can be corrected easily as well.
Data Collection: These errors occur at the time of data collection and are harder to correct. They can be categorized into four types:

Missing completely at random: This is the case when the probability of a missing value is the same for all observations. For example, respondents in a data collection process decide to declare their earnings after tossing a fair coin: if heads comes up, the respondent declares his/her earnings, and vice versa. Here each observation has an equal chance of a missing value.
Missing at random: This is the case when a variable is missing at random and the missing ratio varies for different values/levels of other input variables. For example, we are collecting data on age, and females have more missing values compared to males.
Missing that depends on unobserved predictors: This is the case when the missing values are not random and are related to an unobserved input variable. For example, in a medical study, if a particular diagnostic causes discomfort, then there is a higher chance of dropout from the study. This missingness is not at random unless we have included "discomfort" as an input variable for all patients.
Missing that depends on the missing value itself: This is the case when the probability of a missing value is directly correlated with the missing value itself. For example, people with higher or lower incomes are likely to give a non-response about their earnings.

Based on this research and after performing EDA (exploratory data analysis), you can safely find out what type of missingness exists in your data. Our extracted data is missing completely at random, and the data set is huge, so I have deleted all the rows with any None value.
import math

cleaned_house_df = house_df.dropna(how='any')
cleaned_house_df.reset_index(drop=True, inplace=True)

# radius of the earth is 6378 km
r = 6378

dis_to_city = []
for i in range(len(cleaned_house_df)):
    # coordinates of Flinders Street Station, Melbourne
    lat1_n = math.radians(-37.818078)
    lon1_n = math.radians(144.96681)
    lat2 = math.radians(float(cleaned_house_df['Latitude'][i]))
    lon2 = math.radians(float(cleaned_house_df['Longitude'][i]))
    lon_diff_n = lon2 - lon1_n
    lat_diff_n = lat2 - lat1_n
    # haversine formula for the great-circle distance
    a_n = math.sin(lat_diff_n / 2)**2 + math.cos(lat1_n) * math.cos(lat2) * math.sin(lon_diff_n / 2)**2
    c_n = 2 * math.atan2(math.sqrt(a_n), math.sqrt(1 - a_n))
    dis_to_city.append(round(r*c_n, 4))

cleaned_house_df['distance_to_city'] = dis_to_city

The final step is to export the DataFrame to a tabular file format such as an Excel or CSV file.
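For example (the filenames here are just illustrative):

# export the cleaned data to CSV; the filename is illustrative
cleaned_house_df.to_csv("melbourne_houses.csv", index=False)

# or to Excel (requires an engine such as openpyxl to be installed)
cleaned_house_df.to_excel("melbourne_houses.xlsx", index=False)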
https://medium.com/rakesh-nain/using-python-collecting-real-world-data-by-web-scraping-real-estate-website-and-doing-data-2feb50a3b94f
['Rakesh Nain']
2020-12-27 21:24:33.084000+00:00
['Python', 'Web Scraping', 'Data Wrangling', 'Data Collection', 'Data Science']
6 Inspiring Tips for Taking the Leap into Software Development
6 Inspiring Tips for Taking the Leap into Software Development

We've just launched a new ebook featuring advice from passionate software developers of the Makers community

Here at Makers, we're excited about playing a role in creating a new generation of tech talent — one that has people from a wide variety of backgrounds. However, we know that the number of women in the tech sector has remained stagnant over the past 10 years. And we know that role models play a part. This is why we've taken a number of measures to help diversify the tech industry of tomorrow. For starters, we run an annual Women in Software Power List celebrating some of the top rising stars in the UK coding community. We also offer a women's discount to those applying to train at Makers, as well as having a new series of scholarship options available. In short, we want to make women more visible and prominent in tech. Our brand new ebook gathers some hard-won wisdom from six of our Makers alumni, in an easy and accessible format. It shares advice from six women, all of whom trained with Makers, and talks about the struggles they faced when they were starting out as junior developers and how they overcame them. This candid and engaging read is inspiring and relevant to anyone who's thinking of becoming a developer; those who have recently trained at Makers; or those who are simply interested in exploring a new career and are considering retraining into tech. Here's a sneak peek at the contents:

"If you want to change your life, you can." (Chiaki Mizuta)
"Find yourself a community + set achievable goals." (Kim Diep)
"Plan before you code!" (Kavita Kalaichelvan)
"You're always re-evaluating what you think you know, so dealing with that is important." (Ruth Earle)
"Be kind to yourself." (Eithel Anderson)
"Don't be afraid about putting yourself out there and to really aim for whatever company that interests you." (Funmilayo Adewodu)

To read more, download the full ebook here.
https://blog.makersacademy.com/6-inspiring-tips-for-taking-the-leap-into-software-development-f7306da6ba29
[]
2020-12-02 13:49:09.526000+00:00
['Career Change', 'Women In Business', 'Women In Tech', 'Junior Developer', 'Career Change Advice']
Amazon’s Fire TV Stick 4K, the ‘media streamer to beat,’ drops to $35
If you've been looking to upgrade your streaming setup but haven't wanted to splurge for more advanced capabilities, don't miss today's deal. Amazon is selling the Fire TV Stick 4K for $35. This is only the second time it's been this low since social distancing began, and matches the cheapest it's been this year.

[Want more great deals? Check out TechConnect, our home for the best tech deals, all hand-picked by the PCWorld, Macworld and TechHive editors.]

We reviewed the Fire TV Stick 4K in mid-2019, giving it 4.5 out of 5 stars while calling it the "media streamer to beat" and "an unbeatable value." The Fire TV Stick fits into an HDMI port on your TV, and the interface gives you access to a wide range of premium video services, from Amazon Prime Video to Netflix and Apple TV. It also comes with an Alexa-powered remote for voice control that can integrate with compatible TVs, soundbars, receivers and other equipment. On top of that, you can use the Fire TV Stick and Alexa to control smart home devices, including viewing live video feeds on your TV with compatible equipment. Of course, it also comes packing 4K video support, as the name suggests, and it supports a number of HDR standards including HDR10, HDR10+, and Dolby Vision.

[Today's deal: Amazon Fire TV Stick 4K for $35.]

Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details.
https://medium.com/@Ryan44192227/amazons-fire-tv-stick-4k-the-media-streamer-to-beat-drops-to-35-f4750127c9f2
[]
2020-08-27 06:17:41.988000+00:00
['Tvs', 'Surveillance', 'Chargers', 'Internet']
Hadoop & Spark for beginners
I have noticed that many people are not familiar with Spark tools, so I wanted to put together a few simple but important concepts about Spark below.

Firstly, some definitions (extracted from here):

Hadoop is a framework that facilitates software to store and process volumes of data in a distributed environment across a network of computers. Several parts make up the Hadoop framework, including:

Hadoop Distributed File System (HDFS): stores files in a Hadoop-centric format and places them across the Hadoop cluster where they provide high bandwidth.
MapReduce Programming Model: a Hadoop feature that facilitates a batch engine to process large-scale data in Hadoop clusters.
YARN (Yet Another Resource Negotiator): a platform for managing computing resources within the Hadoop clusters.

Spark is a framework that focuses on fast computation using in-memory processing. Spark includes the following components:

Spark Core: facilitates task dispatching, several input/output functionalities, and task scheduling with an application programming interface.
Spark SQL: supports DataFrames (a data abstraction model) that, in turn, provide support for both structured and unstructured data.
Spark Streaming: deploys Spark's fast scheduling power to facilitate streaming analytics.

Spark can run on EC2, on Hadoop YARN (i.e. Spark data processing jobs run on YARN), on Mesos, or on Kubernetes. Hadoop is usually bundled with several other components.

Now the concepts:

hadoop

The hadoop client allows you to interact with the file system, and it uses Kerberos to authenticate. Sample commands (hadoop fs can talk to different file systems, hdfs can only talk to HDFS):

To list the version: hadoop version (output: Hadoop 3.0.0-cdh6.2.1)
To see a root folder name: hadoop fs -ls -d
To list directories: hadoop fs -ls
To list files in a directory: hadoop fs -ls pattern
To copy a file from Linux to Hadoop: hadoop fs -put /path/in/linux /hdfs/path

kerberos

Hadoop is usually secured with Kerberos. Some sample commands:

To delegate privileges to another user: pbrun user
To verify tickets: klist
To list the obtained tickets: klist -a
To obtain and cache a Kerberos ticket: kinit -k -t /opt/Cloudera/keytabs/$(whoami).$(hostname -s).keytab $(whoami)/$(hostname -f)@domain

Yarn

YARN is the cluster management tool. Sample command: yarn application -list

Spark

You can run apps in Spark using the Spark client, self-contained applications, or scripts. Spark has modules to interact via SQL, DataFrames or streaming. Sample Spark client commands:

To check the version: spark-submit --version (e.g. version 2.4.0-cdh6.2.1)
To run spark-shell and load a class path: spark2-shell --jars path/to/file.jar
To run a program inside the shell: :load example.scala

hive

To connect to Hive using beeline: beeline -u "jdbc:hive2://server:10000/;principal=hive/server@domain;ssl=true"
Then: SHOW DATABASES; USE db; SHOW TABLES;

Thanks, Javier Caceres
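To tie these pieces together, here is a minimal PySpark sketch; the HDFS path and table name are illustrative and assume a working Spark installation:

# Minimal PySpark sketch; the HDFS path and table name are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("beginner-example").getOrCreate()

# read a CSV from HDFS into a DataFrame
df = spark.read.csv("hdfs:///user/me/houses.csv", header=True, inferSchema=True)

# query it with Spark SQL via a temporary view
df.createOrReplaceTempView("houses")
spark.sql("SELECT COUNT(*) AS n FROM houses").show()

spark.stop()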
https://medium.com/@jacace/hadoop-spark-for-beginners-75eb571fc183
['Javier Caceres']
2020-12-02 16:21:58.737000+00:00
['Yarn', 'Hadoop', 'Spark']
The Tale of Harry Elephante
A long, long time ago, there was an elephant named Harry Elephante. Harry was known far and wide for his famous singing ability. Crowds of elephants would gather to listen to Harry croon his majestic tones and hit notes once deemed impossible by elephant musical historians. For Harry Elephante, life was good. The years rolled by, and Harry traveled the globe, performing for adoring crowds of not just elephants, but hippos, rhinoceroses, giraffes, and other woodland creatures who could appreciate a good tune. Harry always wore a smile on stage, but offstage was a different story. Not depressed, not dejected, simply aware that something was missing from his life. He didn't know what this "something" was, and depending on his mood, he either pondered the thought or washed it away in a torrent flood of booze, loose elephants, and a copious drug habit. After one riveting performance in the Florida everglades, Harry sat backstage on his enormous, elephant-sized leather couch. The room was filled with an assortment of musicians, production folks, hangers-on, and groupies, but one pair of elephant eyes caught Harry's attention. He told one of his security pandas to bring over the particular elephant from across the room. Her name was Brandi, and two years later, they were wed — on one condition: Harry had to clean up his act. Drugs and loose elephants were no longer on the menu. She could live with the booze. For Harry Elephante, life was good. A couple of years later, Brandi squatted underneath her favorite banyan tree, trying to pee on a pregnancy reed. She stared at the reed for what seemed to be an eternity before the reed turned blue. She was pregnant! Harry was overjoyed and happily joined his wife, Brandi, watching every newborn elephant video and reading every newborn elephant book they could find. He joined all the newborn elephant parent groups online and participated in all the arguments over vaccinations, elephant rearing, daycare, and the best movie sequel of all time. The one thing Harry knew was he knew nothing. But no one truly knows anything, and that made his anxiety lessen a bit. Twenty-two long months passed until Brandi's trunk awoke Harry. It was time! They raced to the Animal Cracker Hospital and let nature take its course. Brandi pushed and pushed and pushed some more until Harry saw the top of his baby elephant's head coming out of Brandi's trunk. The sight of his daughter on the precipice of life overwhelmed Harry, and tears of joy poured down his elephant face. *Note: This is exactly how elephants give birth. Please don't ruin a good narrative flow by stopping here and googling, "How do elephants give birth?" and for God's sake, don't go down the elephant hole by watching elephant births on youtube. You'll never come back and finish the rest of this story* Anyway… The newborn elephant crawled its way out of Brandi's trunk and into the tiny hands of their doctor. Brandi's normal OB-GYN was an ostrich named Lucy, but she wasn't available that day. Instead, a mongoose named George safely delivered their baby. George announced they had a baby girl and placed her on Brandi's chest. Harry wept even more tears of joy and alternated between kissing Brandi and staring at his daughter, an elephant he had just met but loved more than anything else he had ever seen. And that's when Harry heard the commotion. Suddenly the doors flung open, and a rich asshole named Yates, along with his college lacrosse buddies Chip, Yancy, Fieldwith, Spencer, and Thurston, stormed into the delivery room.
Yates grabbed his assault rifle, dropped down into position, and fired, killing Harry instantly. His lacrosse bros cheered and took pictures of Yates standing next to his fresh kill. C'mon. Really? No — obviously, that didn't happen. Do you think I would write this lovely tale and then end it on such a horrible note? Of course not! I'm not a monster. I'm not an elephant either — just a man. A man who believes hunting elephants is wrong, and anyone who does is probably a rich asshole with a jerkoff first name. Anyway… Back to the story. Brandi cradled their newborn daughter while the mongoose doctor finished cleaning up Brandi. One of the nurses on staff, a raccoon with a gambling problem, congratulated them and asked for the baby's name. Harry and Brandi smiled and said at the same time, Chloe Shay. The raccoon nurse nodded and asked, "Shay? Like Shea Stadium?" Brandi shook her head no while Harry nodded his head yes. More time passed before Brandi looked up at her husband and asked if he wanted to hold his daughter. Harry once again nodded his head and held Chloe for the first time. He stared into her eyes, and that's when everything clicked. He had found it. The something that was missing. What he had been searching for all his life. He was finally at peace and the happiest he had ever been, until the next day when he was even happier. Harry, Brandi, and Chloe were a family, and they lived happily ever after. The End.
https://medium.com/the-haven/the-tale-of-harry-elephante-44beff5adfcb
['Tom Starita']
2020-12-24 00:10:09.489000+00:00
['Flash Fiction', 'Writing', 'Fiction', 'Baby', 'Short Story']
You Know What You Think
The Buddha

Before you think it. If you stop and think about it (!), you will realize that this is the case. Thoughts are words inside your head. Where do they come from? Do you know? The content has to come from somewhere, and you must know what it is before you formulate it in words. It is possible to think up a random series of words, but even then, you have to have some idea of what "random" means and how to arrange a set of words deliberately so that they are random, meaning not in any combination that would formulate a sentence or coherent statement in whatever language you think in. In order to break the rules, you have to know what they are. You can state the rules if you stop and think about them (!), but that means that the knowledge must reside somewhere in your consciousness (!!) before you think the thought in words.

The Buddha's awakening gave him knowledge that we are just now confirming with the most advanced scientific research we can muster. If the Buddha could awaken, if we can awaken, and the Buddha said we can, then the universe must operate according to laws, physical and moral. Those laws are knowable. The knowledge is already there, waiting for us to get our minds quiet enough to let it enter into our awareness. You get there by meditating. So keep up your consistent practice.

We have a book. Please help spread the word.
https://medium.com/@wbtphdjd/you-know-what-you-think-3d00dfb5acf0
['William B. Turner']
2020-12-21 02:43:06.448000+00:00
['Thoughts', 'Enlightenment', 'Buddha', 'Buddhism', 'Meditation']
Fighting Obesity
An excess amount of body fat is obesity. Excess weight from muscle, bone, fat and water in the body (as in bodybuilders and athletes) is overweight. Overweight persons are at an increased health risk compared to normal persons. They are more prone to chronic diseases like heart disease, type-2 diabetes, high blood pressure, stroke, and a few types of cancer.

Is fat necessary to our body? A certain amount of body fat performs the following functions: 1. Heat insulation. 2. Absorption of shock. 3. Storage of energy. Etc. This means that in normal conditions body fat keeps the body moisturized, enables sweating, gives energy to the body (by storing it) and nourishes the bones (by protecting them from shock).

Distribution of fat: Women have more body fat than men. In women the fat usually accumulates around the hips, giving them a pear shape. In men it accumulates around the belly, giving them an apple shape. Obesity-related problems start when fat accumulates around the waist. Fat gets deposited in and around the belly in all living beings; it is also present in bone. Hence when a person becomes obese his/her stomach bulges out, the hips, belly and breasts sag, and the sagging parts flap as the person moves. An obese person will not be active.

Causes of obesity: When a person consumes more calories than he burns, the excess calories get stored in the form of fat, causing obesity.

1. Genetic factors: Obesity tends to run in families. If the parents are fat then the offspring also show a tendency to accumulate fat. The diet and lifestyle habits practiced in a family also contribute to obesity.
2. Environment: A person's eating habits and level of physical activity also contribute to excess deposition of fat. When a person eats food containing more calories and has a sedentary job, the calories consumed exceed the calories burnt, and the excess is stored as fat.
3. Psychological disturbances: There is a tendency to overeat in response to negative emotions like boredom, sadness or anger, which leads to obesity.
4. Binge eating disorder.
5. Diseases and conditions like hypothyroidism, Cushing's syndrome, depression, and certain neurological problems can lead to overeating, which in turn leads to accumulation of fat.
6. Medicines such as steroids and some antidepressants may cause weight gain.

There is an easy way, especially for women, to fight obesity without working yourself hard every day. A new formula has been made for you to lose weight every single day without exercising your body hard. If you want to see how fast you can do it, please follow the link below: https://pastelink.net/2e1ep

This post contains affiliate links.
https://medium.com/@chesserol/fighting-obesity-687348bf6efb
['Alex Sebitama']
2020-12-17 08:03:59.164000+00:00
['Obesity', 'Body', 'Bodybuilding', 'Woman Health', 'Womanhood']
Public domain for climate
Public domain for climate

Frame from the #ZEROWASTECULTURE Animation, CC BY-SA 4.0.

During the summer of the 2020 pandemic, Centrum Cyfrowe (a Polish think tank working with digital culture) partnered with the Polish School Strike for Climate and the hip-hop collective Kompost to create three viral animations explaining the cause and potential effects of the drought in Poland. The animations appeared on the social media pages (Facebook, Instagram, Youtube) of the organization. In the videos, we made use of graphics available through Creative Commons licenses and the public domain.

#ZeroWasteCulture Animation, CC BY-SA 4.0.

Culture, especially pop culture, is based on, reproduces, and sublimates social fears. It's hardly surprising that people reach for topics about the end of the world and the coming climate crisis. One of the challenges the modern world poses to our culture is not how to talk about the crisis, but rather how to wade through a glut of discussions. The United Nations has developed a list of sustainable development goals that cultural institutions can use to guide their operations and work for the public good, and the Museums for Climate Manifesto likewise discusses these realities. The recommendations include taking control of the waste created by the organization, as well as larger points about ethical management. One example of recommended responsible actions (in the digital sense) for cultural institutions is to avoid overproducing content, and instead partner with other organizations, focusing on what your organization can bring to the table in terms of resources while letting your partner handle their end. These inspirations led to a collaboration between Centrum Cyfrowe, the School Strike for Climate, and the musical group Kompost as part of the #NoWorries campaign. Instead of creating our own content, which would only compete with information already available, we decided to take part in a different way, strengthening the message of a different organization and including a context that was important to us.

#ZeroWasteCulture Animation, CC BY-SA 4.0.

"Today our problem lies — it seems — in the fact that we do not yet have ready narratives not only for the future, but even for a concrete now, for the ultra-rapid transformations of today's world." – Olga Tokarczuk, Nobel Lecture

The public domain includes resources (available physically and on the Internet) that can be used without any limitations — such limitations usually stem from the creator's economic rights, which expire (in Poland) 70 years after the death of the creator. Along with works available through free licenses, the public domain is a common good, or commons, belonging to anyone who wishes to make use of it. We believe that entering a creative dialogue with the past and making use of its riches — especially artistically — can help us create new and important works that are a direct response to the issues we face today. Making use of open resources to discuss the climate crisis is one such example — it encourages people to recycle already existing resources, and to think creatively about how the past can be material for today's burning issues.

Promotional video for the campaign explaining what the Public Domain is (you can turn English subtitles on!), CC BY-SA 4.0.

The videos, whose keyframes were created by Ewelina Gąska Studio and then animated by Tomasz Kuczarczyk, have been viewed over 46,000 times.
The #NoWorries campaign, which we are now wrapping up, was aimed at promoting open resources and their ease of use, as well as explaining the ins and outs of copyright law to young creators. As part of our Internet campaign, we also created AR Instagram filters, which transport users into paintings from the National Museum in Warsaw and the National Gallery of Denmark. We sincerely hope that our works will continue to reach users, virally spreading our message about the public domain and the artistic possibilities it offers. The Polish-language version of the above text was originally published on the website dedicated to the #NoWorries campaign, and you can find it here.
https://medium.com/@alicja-peas/public-domain-for-climate-58396ff6b850
['Alicja Peszkowska']
2020-12-10 14:30:04.737000+00:00
['Climate Change', 'Digital Culture', 'Climate Action', 'Public Domain']
Design In Sweden
In this podcast episode we talk with Johan Berndtsson about design and business in Sweden. Johan invites listeners to submit suggestions for speakers for his next "From Business To Buttons Conference" and also invites people to come do design work in Sweden. Throughout the podcast we refer to videos, give web addresses and so on. Here is the list of links and recommendations from Johan: Check out Europe's greatest Business, Service, and UX-design conference, From Business to Buttons, at https://frombusinesstobuttons.com/, and the videos from past conferences at https://frombusinesstobuttons.com/archive. All of them are excellent, but be sure to watch: Jared Spool Mike Monteiro (both talks) Kim Goodwin (both talks) Eric Meyer Golden Krishna Patricia Moore, and of course Susan Weinschenk Also, if you want to learn more about inUse check us out at http://www.inuseexperience.com, and e-mail [email protected] if you have questions or if you're interested in joining. =) Further reading: The story behind the conference: http://www.inuseexperience.com/blog/story-behind-business-buttons/ Thoughts behind UX and Service Design moving out into the physical world: http://www.inuseexperience.com/blog/experiences-services-and-space/ A template for the Impact Map, our perhaps best tool to connect business goals to user behavior and design: http://www.inuseexperience.com/blog/template-impact-maps-here/ The history behind the Impact Map (http://www.inuseexperience.com/blog/evolution-impact-mapping/) and how it has evolved over the years (http://www.inuseexperience.com/blog/evolution-impact-mapping/). And… The invitation for designers to come to Sweden: http://www.inuseexperience.com/blog/dear-us-designers-welcome-sweden/ Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains. You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.
https://medium.com/theteamw/design-in-sweden-b7d29780e808
['The Team W']
2018-06-04 16:38:18.510000+00:00
['Design', 'In Use', 'Sweden']
Bar Chart vs Column Chart — What is the difference?
Both bar and column charts display discrete categorical data and answer the question of 'how many?' or 'how much?' in each category. The categories are usually qualitative data such as movie titles, types of houseplants or geographical locations. Bar and column charts are different from histograms, as they do not show continuous developments over an interval. Different types of data may suit either a bar chart or a column chart better. Bar Chart: Bar charts use horizontal bars to display data and are used to compare values across categories. The lengths of the bars are proportional to the values they represent. For a bar chart the Y axis typically displays a category, such as top-grossing movies of 2019 in the example below, whilst the X axis displays a discrete value. Column Chart: A simple column chart uses vertical bars to display data. Column charts are used to compare values across categories and can be used to show change over a period of time. In the case of showing change over a period of time, a column chart can also be displayed as a line chart. In a column chart the Y axis typically displays a discrete value whilst the X axis displays the category.
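To make the bar/column distinction concrete, here is a minimal sketch in Python with matplotlib. The original article built its charts with an interactive tool, so this code is illustrative rather than the author's; the movie titles and gross figures are hypothetical placeholders.

# Same categorical data rendered two ways: a column chart and a bar chart.
# Movie names and grosses below are made-up placeholders, not real 2019 figures.
import matplotlib.pyplot as plt

movies = ["Movie A", "Movie B", "Movie C", "Movie D"]
gross = [850, 790, 540, 520]  # hypothetical box-office gross, in millions

fig, (col_ax, bar_ax) = plt.subplots(1, 2, figsize=(10, 4))

# Column chart: categories along the X axis, values rise vertically.
col_ax.bar(movies, gross)
col_ax.set_title("Column chart")
col_ax.set_ylabel("Gross ($M)")

# Bar chart: categories along the Y axis, values extend horizontally --
# useful when long category labels (like movie titles) need room to breathe.
bar_ax.barh(movies, gross)
bar_ax.set_title("Bar chart")
bar_ax.set_xlabel("Gross ($M)")

fig.tight_layout()
plt.show()

Swapping bar() for barh() is the whole difference; the choice usually comes down to label length and how many categories you have.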
https://medium.com/@bigcrunch/bar-chart-vs-column-chart-what-is-the-difference-533a29a38a65
['The Big Crunch']
2019-08-19 03:51:34.128000+00:00
['Data Science', 'Charts', 'Data', 'Data Visualization', 'Charts And Graphs']
The Healthiest Addiction in Life: Travelling
Magnify your Existence "The World is a Book and those who do not travel read only a Page." — Saint Augustine Life is the most precious divine Gift, and the World is the most beautiful divine Creation, so isn't it cruel to stop the one who deserves to see and experience the Mystery of the World? Yes!! Guys, so just turn a corner and pull up your sleeves to run into the most invigorating, revitalizing, and stimulating venture of Life. Ready? So here we Go!!! The World is Waiting for You Travelling is not only about luxury or fun; it is an essential and compelling attribute of a peaceful and adventurous journey through life. Scientists have proven that Travelling is good for both mind and body, as it lowers the chances of Heart Disease by relieving Stress and Anxiety. Travelling comes with the advent of Luggage Accessories. Nowadays one can surf the Online Marketplace for the best Travelling packages and discover the most suitable package according to an individual's needs. Marketplaces like eBay do have personalized offers to lure their customers. Let's dive deeper into the Ocean of the most fascinating Benefits of Travelling: Achieve your Mental Health: The daily routine — navigating urban landscapes and crowded public transport, plus the hectic pace of the workplace — often accumulates stress, but travelling is the best antidote to fight back such mental harms. Travelling lets you live your life for your own sake. Get your Mental Health with Travelling Enhance your Creativity: Travelling introduces novelty to our brain and improves our cognition. On an expedition, we might face some challenges mixed in with the joyful moments. Being human, we make the most of what we have paid for, so we tend to adopt a problem-solving approach, which further helps us induce creativity and innovation in our thinking patterns. Inspiration to do something Broaden your Horizons: Being in a Technological Era, we spend more time on Social media accounts and surfing the internet. But travelling helps us break the limitations we are bound by and introduces us to a New World, just beyond our imaginations. It helps us to experience a Wider Perspective of the World. Exploration is a Must See the Real Deal: "To travel is to discover that everyone is wrong about other countries." — Aldous Huxley. Travelling helps us explore a reality beyond what we have read and studied in our textbooks. It provides you with the Opportunity to Deal with the Real, and that's what makes this journey more exhilarating. Even now you can book your place on online selling sites. To confront the Actuality Travel is the Cure: TRUST me, guys! Travelling is the cure for every existing problem in your enlightened Soul; you just need to unveil the benefits that are hidden in Tour and Travelling. It helps you strengthen your relations and networking, gifts you the best Life memories, helps you discover yourself, boosts your confidence — and there's much more waiting for you to discover. The Soul healing Travelling So, guys, pack your bags, book your tickets and just explore the hidden beauty of the World. You can explore here.
https://medium.com/@nidhisatija/the-healthiest-addiction-in-life-travelling-9a7571964ceb
['Nidhi Satija']
2020-12-19 08:49:18.686000+00:00
['Travel Writing', 'Passion', 'Discovery', 'World', 'Traveling']
Managing Ubuntu Snaps: the stuff no one tells you
The snapcraft.io site: where snap developers and users meet Canonical’s Snaps are definitely the real deal. The secure and portable Linux package management system is more than a geeky tool for showing off your tech creds. Just consider the growing list of companies that have already bought in and are providing their desktop software through snaps, including Blender, Slack, Spotify, Android Studio, and Microsoft’s (Microsoft!) Visual Studio Code. And don’t forget that the real growth of the snap system is in the world of IoT devices and servers rather than desktops. But as the popularity of snaps grows — some new Linux distros come with the snapd service installed by default — you might be forgiven for wondering how you’re supposed to make them work. Don’t get me wrong: there are all kinds of web-based guides for finding, installing, and removing snaps. And there are places developers can go for help building their applications as snaps. But right now I’m talking about configuring their behavior or troubleshooting when things go wrong. Just for the record, you search for new snaps to install using something like: $ snap find aws When you find a package you like, you install it using: $ snap install aws-cli Oh, and you delete ‘em with remove. $ snap remove aws-cli There. You can’t say I never taught you anything. But that’s not what this article is about. What we are going to talk about is real management stuff, like changing configurations or troubleshooting things that broke. Understanding the snap file system Well, how’s that going to be different from the way you’d normally do it on Linux? Configuration files are usually going to be in /etc/, processes will reveal their deepest secrets through systemctl, and logs will find their way to /var/log/. Not so fast there, pilgrim. That’s not always how things work in Snapland. You see, a snap is really nothing more than a single compressed file (named using the .snap extension) containing the entire file system needed for running a package. These files are never actually decompressed and “installed,” but are mounted dynamically at run time and exposed to the user as a virtual environment. This means that the resources used by a program might not actually exist on the host system. Thus, for example, the Nextcloud snap creates its own versions of Apache and MySQL for its backend. So if, say, you want to configure a new virtual host in /etc/apache2/sites-available/ or create a new MySQL user the traditional way, you’re out of luck. The advantages of this approach are significant: installation and setup will generally be much smoother and you’re far less likely to run into dependency issues and conflicts. But it also at least appears to mean that you get less access to the vital organs that power your software. So, then, where does everything snappy happen? Take a look through your host file system for yourself: you’ll probably find more snap directories than you can shake a stick at (should you be so inclined). Here are the directories the snap install process probably created: /snap/ /var/snap/ /var/lib/snapd/ /home/username/snap/ That many? What for? Let’s go through those one at a time. Feel free to poke around your own Linux machine to see all this for yourself. The actual .snap files are kept in the /var/lib/snapd/snaps/ directory. When running, those files will be mounted within the root directory /snap/. Looking over there — in the /snap/core/ subdirectory — you’ll see what looks like a regular Linux file system. 
It’s actually the virtual file system that’s being used by active snaps. $ ls /snap/core/current bin dev home lib64 meta opt root sbin srv tmp var boot etc lib media mnt proc run snap sys usr writable And here’s a subdirectory containing (read-only) configuration files used by the Nextcloud snap. That’ll only be there, of course, if you’ve installed Nextcloud (snap install nextcloud). $ ls /snap/nextcloud/current/conf/ httpd.conf mime.types ssl.conf Ok. Now what about /var/snap/? Very much like traditional inhabitants of /var/, the files within /var/snap/ contain various forms of user data and log files — the kind of data that’s generated and consumed by applications during operations. This example shows directories for data used by some desktop-related snaps, including the AWS CLI and the Slack team communication tool. (OK, technically speaking, the AWS CLI isn’t a desktop tool.) $ ls /var/snap aws-cli core18 gnome-system-monitor gnome-calculator brave gnome-3-26-1604 gnome-characters gtk-common-themes core gnome-3-28-1804 gnome-logs slack Dive deep into the subdirectories within /var/snap/on your machine and see what you can discover. That leaves just the ~/snap directory that exists in a user’s home directory on at least some Linux file systems. It’ll contain directories using some of the names you’ll see in /var/snap. What’s going on in there? $ ls ~/snap aws-cli brave gnome-calculator slack As far as I can tell, these directories are meant to store versioned data related to settings used by your user account. Snap administration tools So far I’ve shown you how to find various classes of data kept in configuration files (within /var/snap/), virtual file systems (/snap/), and collections of user settings (~/snap). I also showed you where not to look — /var/lib/snapd/ — which is where the .snap files themselves live; nothing to see here, move along now. Now what about actual administration? This is a bit more complicated. Some snaps — like Nextcloud — expose a fully-featured admin interface. I talk about that in my Administrating Nextcloud as a Snap article. But it seems that the simplicity of snaps sometimes means that there just isn’t much hands-on configuration that’s possible. However, that’s not always the case. But first, you’ll need to know about snap services. Some more complex applications require multi-layer software stacks. Nextcloud, for instance, creates and manages its own versions of Apache, MySQL, PHP, and Redis. Each one of those “layers” is, in snap terms, called a service. If any snaps installed on your machine have their own services, you’ll be able to list them along with their status using this snapd command: $ snap services Service Startup Current Notes nextcloud.apache enabled active - nextcloud.mdns-publisher enabled active - nextcloud.mysql enabled active - nextcloud.nextcloud-cron enabled active - nextcloud.nextcloud-fixer enabled inactive - nextcloud.php-fpm enabled active - nextcloud.redis-server enabled active - nextcloud.renew-certs enabled active - You can also control the run and startup status of a service. 
This example will stop Nextcloud’s Apache service and ensure that it doesn’t launch when the system reboots (although, just remember that this will disable Nextcloud — you probably don’t want to do that): $ snap stop --disable nextcloud.apache You can also use systemctl to manage snap service processes: $ systemctl status snap.nextcloud.apache If your snap includes at least one service, you can view its logs using snapd: $ snap logs nextcloud You can also specify a particular service: $ snap logs nextcloud.mysql For some snaps (like Nextcloud), snapd makes useful configurations available from the command line. You can display available settings using snap get: $ snap get nextcloud Key Value mode production nextcloud {...} php {...} ports {...} private {...} Drop down a level by adding the name of a specific setting. This example shows us that Nextcloud is currently listening on only ports 80 (HTTP) and 443 (HTTPS). $ snap get nextcloud ports Key Value ports.http 80 ports.https 443 You could change a setting using the set command. This one would tell Nextcloud to listen on port 8080 for insecure HTTP requests instead of 80. $ snap set nextcloud ports.http=8080 Snapd also offers some system-wide configuration settings that are described here, documentation of environment variables is maintained here, and information on keeping your snaps updated can be found here. All that’ll get you started when things need fixing. So get to it. Looking for more? You might enjoy my books and Pluralsight courses on Linux, AWS, and Docker-related topics.
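The commands above also invite light scripting. As a quick illustration — my own sketch, not something from the article — here's a small Python script that shells out to snap services and flags any service that's enabled at startup but not currently active (like nextcloud.nextcloud-fixer in the listing earlier). It assumes the "Service Startup Current Notes" column layout shown above.

#!/usr/bin/env python3
# Sketch: flag snap services that are enabled at boot but not currently active.
# Assumes the "Service Startup Current Notes" column layout shown above.
import subprocess

def inactive_snap_services():
    result = subprocess.run(
        ["snap", "services"],
        capture_output=True, text=True, check=True,
    )
    flagged = []
    for line in result.stdout.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3:
            name, startup, current = fields[0], fields[1], fields[2]
            if startup == "enabled" and current != "active":
                flagged.append(name)
    return flagged

if __name__ == "__main__":
    for service in inactive_snap_services():
        print(f"enabled but not active: {service}")

Run against the Nextcloud listing above, it would print only nextcloud.nextcloud-fixer.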
https://medium.com/hackernoon/managing-ubuntu-snaps-the-stuff-no-one-tells-you-625dfbe4b26c
['David Clinton']
2019-04-17 12:31:09.338000+00:00
['Administration', 'Sysadmin', 'Snap', 'Package Management', 'Linux']
The power of satire and Birds Aren’t Real
Psychologists have speculated on what makes a person susceptible to believing in conspiracy theories. What element of the human psyche pulls someone into believing something so outrageous that it defies all common sense? There are different conspiracies running around out there on the web. They range from harmless celebrity gossip columns (Avril Lavigne is dead, Tupac is alive, Matt Groening is a time traveller). To the ones about aliens (Area 51, UFOs, the Pyramids, Stonehenge and the Easter Island heads). The ones about monsters (Sasquatch, Loch Ness, Ogopogo). The ones about sports (the NBA is rigged, Michael Jordan was poisoned by a Utah-based, mafioso-run pizza joint, the All Blacks were poisoned by the South African government, Tom Brady didn't have those balls deflated). To the scientific (we're in a simulation like the Matrix, there are infinite universes). To the more sinister (the Illuminati, Bush did 9/11, etc.). Whether you believe any of these or not, you've definitely heard about them. Some of them, you may even believe yourself, or are at least sceptical of. And that is the point. Whatever evidence there may or may not be for any of the theories listed above is irrelevant. What's important is the belief. Belief is irrefutable. There isn't any evidence you can provide that will convince somebody Bigfoot isn't real if they choose to ignore it. It's why our legal system and science are based on observation. It isn't about what you know, it's about what you can prove. It's tough to prove anything to a judge, jury and legal team composed of one kook. And that's where Birds Aren't Real comes in. This thinly veiled marketing scheme, started by a 20-something college student, has garnered noticeable traction. Sure, you can think of it as just a ploy to sell merchandise, but don't hate on it. There's some real potential here. This real-life, political, Monty Python-esque piece of satiric comedy has all the makings of an intellectual Trojan horse. Our man on the inside of this asinine world of conspiracy. Our chance to bring these suckers down. Part of the reason Birds Aren't Real is able to do this is because it looks and operates like a legitimate conspiracy. There's a full website devoted to the "facts" about the movement's origin. Those birds that crapped on you while you drank patio beers with your buddies? That's the government deploying its tracking compound. The meat you see on your plate during Turkey Dinner? Synthetic material. All that juicy goodness you see on the side of the highway? Clearly, you know nothing about the advancements in modern robotics. Deny, deny, deny. Die with the lie. That's how any good argument gets dismantled. Throw shade on the evidence until everybody gets lost in the dark. You begin explaining that birds are real, and end failing to prove that the government can't create synthetic bird meat and cover up a mass genocide. How'd you get here? "Never argue with stupid people, they will drag you down to their level and then beat you with experience" — Mark Twain Conspiracy theories have long been the intellectual refuge of the shit disturber. Fed up with the facts, they resort to the lowest form of critical thinking. Casting doubt. Some people just want to watch the world burn. They flip us the bird and expect us to argue proof of its existence while they stick it up our butts. And they act all high and mighty about it. Well, I'm fed up with taking the high road.
I can’t convince my roomate that Avril Lavigne hasn’t been replaced by a dopleganger, anymore than you can convince Elon Musk we’re not in a simulation. Evidently, we have no way of proving them wrong. Unless… Birds Aren’t Real. It’s time to wake up people. Get with the program. If you can’t beat em, join em. Enlist the force of a consipiracy so far fetched it’ll make Y2K nuts and moon landing sceptics say, “Come on guys, seriously?” If anybody can smear the truth with government issued tracking compounds, then I’m free to argue, on their behalf, that JFK was assassinated because he didn’t O.K. the genocide of the bird species when government officials were mad that they were crapping all over their cars. Some of you may be thinking that I’m going to far. I’m in too deep over my head. That, in my heroic quest to find the narrative necessary to defeat these assholes, I’ve become the villain. That I’ve become one of them. Fine. Maybe I am preaching from this mountain top I’ve made out of an internet molehill. Maybe I’m not. If you don’t believe me, then prove it. Prove birds are real.
https://medium.com/@lmccollombin/the-power-of-satire-and-birds-arent-real-dca2b17d8bee
['Small W S']
2020-10-16 00:14:20.602000+00:00
['Satire', 'Conspiracy Theories', 'Thoughts', 'Humor', 'Conversations']
Getting into data visualization — where should I start?
No coding First, if you haven’t pushed Excel’s boundaries, it’s worth doing. Seriously. Learn pivot tables at least. It may sound lame, but Excel can do a lot more than people expect. It can even make pretty charts if you try hard enough. If you have some data already and just want a good tool to explore it visually or to export more compelling charts, Tableau is incredibly popular and powerful. There is a free public version and a very expensive paid version which you can get for free as a student. It can publish to the web, or to static graphics to include in research papers, post to Instagram or print out as giant wall-sized charts. The Tableau Public website has a lot of quality examples posted for you to get inspiration from. Sadly, the next “No coding” tool I like to recommend, Infoactive, is shutting down…but on the bright side it is because they were acquired by Tableau. This hopefully means good things for Tableau Public in the future. I will plug a free book spearheaded by the Infoactive team that is useful background on data visualization design using any of the tools I cover here: Some coding If I were picking one single programming language to use solely with data I would pick R. It’s free, supported by tons of ongoing development adding useful packages on top of the base language, and there are great free resources to learn it. First among those resources — I cannot recommend these Coursera classes highly enough: Taking all of them might be overkill for a true beginner, but the track of classes walks a nice line from the introduction of key data science terms and ideas, through exploratory data analysis (which covers useful packages for R like ggplot, a very popular visualization tool) all the way to adding interactivity, publishing to the web via Shiny and storytelling with data. R is what I use most frequently for small, quick analyses and ad hoc visualization — if you’ve got a dataset that Excel is struggling with (too big, not flexible enough, poor visualizations), R is perfect for exploring quickly. This is also the time for a quick “yes, you should probably learn some SQL.” SQL is very targeted in scope compared to R (really, it’s far from an apples to apples comparison)—but if there are databases that you need to dive into to gather data for use with any of these other tools or languages, there is a good chance you’ll want to know SQL, and it will pay dividends in the long run. I ♥ code More often than not, the question of “where should I start?” comes in response to a fantastic interactive visualization presented on the web. I’m a huge fan of all the recent innovation in this area (see my in-depth survey of innovative work here). Unfortunately, if you really love this piece: …it can be disheartening to find out how much you have to learn to be able to build your own. It’s worth reiterating up front that “being as good as the New York Times” is a tough goal. A worthy one, but tough. Fortunately, there are many great resources to help. The library behind the interactive piece above, and many of the data visualizations running in the browser today is D3.js, created by Mike Bostock. If you want to publish online or make interactives, D3.js is a great tool to learn. This does mean you’ll need to learn some Javascript in general and then D3.js specifically. Bostock’s website is a gold mine of examples and tutorials (you can’t beat learning from the creator of the library…). 
I’d also recommend Interactive Data Visualization for the Web by Scott Murray, which you can either buy from O’Reilly or work through for free online: The online version is excellent — you actually write code snippets within the book itself, run them, and compare your output to interactive examples that run within the book itself too. Murray also does a nice job of targeting the book at beginners, walking you through the basics of how web browsers work, HTML/CSS and Javascript, before diving headlong into the details of D3. One area to call out as a particular strength of D3 is geospatial visualizations. D3 is great at creating maps of many flavors, and there are nice dedicated tutorials available if that’s your area of focus: D3 can be difficult to use directly, but there are many tools you can use on top of it to make your life a little easier. I’d recommend learning at least the basics of D3 rather than only using a more abstract plotting library, but if that proves intractable, a tool like Plot.ly can help make things feel more approachable. Finally, if you really want to learn a do-it-all programming language that just happens to be great at data visualization, go with Python. Python is the most general purpose and powerful tool of anything I’ve listed, and it’s quite popular in the data science community. I find Python very approachable as a multi-purpose programming language, but in truth it is probably overkill if all you want to do is explore and visualize data. Youtube is built with Python, for example…1 million lines of it. If you do go the Python route, the Code Academy course is a short (10–20 hours) and fun introduction to the language. Finally, much like D3.js for Javascript or ggplot for R, there are many Python libraries dedicated to data visualization. Seaborn (which builds on an older popular library, matplotlib) and Bokeh are probably the best-in-class right now, but this is a quickly evolving and improving landscape. Both the Seaborn and Bokeh websites include galleries showing off the kinds of visualizations you can create with those tools.
https://medium.com/datavisualization/where-should-i-start-c53acdf04a1c
['Nick Brown']
2017-06-11 18:05:29.564000+00:00
['Data Visualization', 'Design', 'Data Science', 'Data']
For Those Who Ruminate
Part 1 of a series on Rumination Rumination might not be a word we hear every single day, but there's a good chance most of us have a general idea of what it means. I never gave it much thought, but when I was recently reading about rumination, I swiftly realized that it's a topic that I can greatly connect to. Many experts have spent the last several years researching rumination, and they are beginning to conclude that the act of ruminating is a human behaviour that links to mental health conditions like Major Depressive Disorder. The term "ruminating" is a bit broad, and it has come to represent many different branches on the mental health tree of behaviours. It covers a large spectrum of levels too — some minor, while others have the chance to be debilitating. Ruminating comes in many forms. I won't say that it's always all bad. Some people do believe certain types of ruminating are necessary in their lives. It's not always black and white. Just look at the ways it manifests. You'll see some of the things it includes carry some complexity with them. Rumination can sometimes carry an obsessive nature with it. Because of that, we see many references to obsessive-compulsive disorder, and the theory that ruminating can have a strong hold on those suffering from OCD. It also encompasses similar methods, and it shows itself in forms like over-worrying, especially when the worrying is about something small or exaggerated. We worry, we think about the same things for long periods of time, we "stew" about things, rack our brains, overanalyze, and of course, we can become obsessive over repetitive thoughts. Timing can be terrible. What doesn't help is the fact that a majority of this happens right as we're lying in bed, all is quiet, and we are attempting to start our night's sleep. So right there, we can have anything from occasional sleep issues to full-blown insomnia. Just another piece on the rumination list. So, as mentioned at the beginning, I came to a realization that I connect to rumination a lot. I think I knew that subconsciously for decades, but after all that time, I'm only discovering it now. It was my new venture of learning about Mindfulness that seemed to guide me right to this topic. I can relate to the ruminating (at maximum peak) right in that first hour of the night when we're trying to go to sleep. I've suffered those brutal effects many times. I've always done it, and it's had many peaks and valleys throughout my life. It connected to my mental health issues and past addiction issues. The addiction itself was a way of self-medicating or emotion-dodging that I used just to shut the damn ruminating off in my head. Was I ever really shutting it off? Or was it more like simply muting it? I also suffered greatly in some past relationships. With all my erratic behaviour came rumination out of guilt. When I was losing all kinds of people and trust, I would spend days stuck in my own head, trying to work out who allegedly hated me, who was about to discover something bad I did, who was soon going to call me and berate me. Or worse, when the police would be knocking at my door. And for what reason — as there were usually multiple things they could choose from. There is nothing worse than the ruminating that develops as a result of doing things immoral, dishonest, illegal, or just plain ethically disgusting.
It brings a strong paranoia that is so difficult to shake off. Financial issues were a big part of this for me as well. Living a life of addiction, or just being erratic in general, created financial problems. Times were sure tough when I was ruminating on how to get the rent paid when it was seven days late and the bank account balance was $0. Sometimes it was those kinds of situations where rumination would lead to even more erratic decisions. That was the thing with ruminating. For all the thinking the process of ruminating seems to involve, it's actually very illogical thinking. Endless, constant, illogical thinking, taking the form of rumination. Not to mention that ruminating thoughts usually never took the form of true problem-solving, with a start, a plan, and a solution. Because the problem with rumination is that it's thinking and repeating the same thing over, over and over again. We seem to have this one subject in our minds, spinning fast — but in a tornado motion. So it's fast, churning, but stuck at the same time. This is why it is connected to mental health issues. Especially since ruminating on already existing obsessive thoughts is known to strengthen or prolong mental health issues. It's something that seems quite familiar to me. Ruminating is something I've put myself through at almost every period of my life. When I look back at its presence as it connects to my mental health, I know it certainly prolonged my issues at times. It also most certainly made them worse at times. I feel strongly that it takes the form of fuel behind my habits of procrastination and being unproductive — especially when it came to projects that got barely started yet never completed. That can be another unwanted trait of all of this. Plenty of ruminating, but not a lot of getting things done. Can we completely rid ourselves of rumination? The human factor makes me say probably not. Control has to be gained. That is the goal to face and address. Next time, we take another look at rumination and the techniques we can learn to control it.
https://medium.com/illumination/for-those-who-ruminate-9ecf2eb5e375
['Michael Patanella']
2020-12-09 01:47:35.894000+00:00
['Self Improvement', 'Life Lessons', 'Self', 'Mental Health', 'Life']
Checking the Stormwater Backflow Challenge
Controlling stormwater is a growing challenge for utilities and municipal authorities as they look to mitigate the economic and health impacts of excess water. Choosing the right check valve is a key part of the stormwater solution. For water utilities and municipal authorities, stormwater management is a growing challenge. Changing weather patterns as a result of climate change are prompting more intense events that increase the volume of snow and rainwater entering water management systems. At the same time, increasing urbanisation is reducing the capacity of the environment to absorb surface runoff. With impervious surfaces such as pavements and roofs preventing precipitation from naturally soaking into the ground, water is increasingly directed into storm drains, sewer systems and drainage ditches. However, aging drainage infrastructure can often be overwhelmed, and as a result poorly managed stormwater can produce flooding events with backflow into residential areas, office parks and commercial areas. Inevitably, such events result in significant problems. For example, flooding can cause erosion, turbidity due to suspended solids, property damage, and traffic disruption. https://www.procoproducts.com/checking-the-stormwater-backflow-challenge/
https://medium.com/@procoproductsinc11/checking-the-stormwater-backflow-challenge-c4c19dbc03fc
['Proco Products']
2021-06-17 06:26:37.963000+00:00
['Valves Manufacturers', 'Industrial Pipe Fittings', 'Expansion Joints', 'Valves For Mining', 'Pipeline']
Are You Using Outdated Assessment Techniques? 4 Reasons to Shake Things Up
Assessments are an integral component for all RTOs — they're how we track that students are making progress and keeping up with their courses. Traditional approaches like pen-and-paper tests or face-to-face evaluation are currently the go-to choices for most assessors, but as more and more powerful technology emerges, online assessments are proving to be a practical alternative. Granted, moving from a paper-based environment to a computer-based one can be a tad expensive — depending on the existing infrastructure — but the advantages considerably outweigh the cost. Having an online assessment system isn't just about giving your students tests to take on a computer — it's about getting the entire assessment workflow online: competency framework mapping, test authoring, evaluation, and results analysis. Less Marking One of the biggest advantages of e-testing is automated evaluation. If questions are formatted in a way that can be evaluated objectively, computers can quickly take over the tedious scoring process with a next-to-zero error rate, allowing trainers to devote their full attention to assessing the subjective answers. Digital results are stored in the cloud, saving physical space (goodbye, paper!) and eliminating the potential for misplaced files. Helpful Analytics Most importantly, advanced analytical tools can be applied to the test results to monitor students' progress closely, conducting horizontal comparison within the class or vertical analysis of individual development. These analytical insights can also help trainers offer deep and timely feedback. Security Often the biggest roadblock to the popularisation of online assessments is the fear of cheating, and you may be wondering whether there are substantial measures in place to prevent online assessment fraud. The answer is a resounding yes. Cameras and facial recognition tools are reliable ways to verify test-takers' identities, while a cheaper alternative is proctoring. Screen recording is effective against any cheating activity on computers, and keystroke recognition devices can be used to detect abnormal typing, which is often a sign of cheating offline. Cover Your Ass…essments Last but not least, moving test authoring into the digital space will let your RTO take advantage of easy competency framework mapping — which is especially helpful in the compliance-heavy environment in which RTOs operate. Making sure that tests are designed per the competency framework enables both trainers and CEOs to double-check the relevancy of their courses. As the enabling technology matures, the practicality and necessity of online assessment will become harder and harder to ignore. RTOs should make the shift to online assessment as soon as possible to start delivering exceptional testing services to their students.
https://medium.com/vetexpress/are-you-using-outdated-assessment-techniques-4-reasons-to-shake-things-up-458b6a600257
[]
2018-05-14 01:02:11.712000+00:00
['Edtech', 'Teaching', 'Classroom', 'Education', 'Assessment']
10 Holiday Marketing Tips from Larry Kim, Neil Patel and More
It’s time to amp up and adjust our marketing strategies for the holidays! If you want to get ahead of the marketing game and stand out from the crowd, check out these incredible unicorn tips from the top social media marketing experts. We’ve got insights from Mari Smith, Neil Patel, Virginia Nussey, Dennis Yu, Lilach Bullock, Lisa Dougherty, Marsha Collier, Sujan Patel and Kristel Cuenta-Cortez. Among the tips? Leveraging live videos, launching Facebook Messenger chatbots, running social media ads and more — all with the aim of increasing brand visibility, ramping up your holiday sales and boosting ROI. So let’s jump right in — and I’ll start with my own №1 holiday marketing tip! 1. Run Facebook Messenger Ads | Larry Kim, CEO of MobileMonkey Ad prices get crazy competitive around the holidays! Since most of your sales are going to come from customers with pre-existing brand affinity, focus the majority of your social ads budget using remarketing as the targeting option rather than trying out new, unproven audiences at this critical time. People’s inboxes will be full of offers, so try reaching your audience using new higher-engagement marketing channels like Facebook Messenger ads in Facebook and Instagram to ensure your targeted audience actually sees your important marketing messages 2. Go Live on Facebook | Mari Smith, Facebook Marketing Expert Use holiday-themed Facebook Live videos to really engage with your audience this holiday season. Facebook continues to favor content that generates meaningful social interaction, specifically conversations between people within the comments on Page posts. Live video typically leads to discussion among viewers on Facebook, which helps bump up the algorithms and you should see even more reach on your posts. In fact, Facebook states that live videos on average get six times as many interactions as regular videos. Strive to stand out in the news feed and create “thumb-stopping” live video content that draws your audience in. What if you did a whole “bah humbug” Facebook Live centered around how crazy it is that stores seem to start pushing the Holidays earlier and earlier every year? Use the broadcast as a fun way to get your audience talking to you — and with one another — about their preferences around the Holidays. You can then retarget your video viewers with different content driving to your website, offers, etc. Or, perhaps someone in your office would be willing to dress up as Santa Claus and do a whole series of Facebook Live videos where you do prize drawings and giveaways! Or, mobilize some team members to come on live video as “Santa’s elves” and show behind-the-scenes of how your products are created, or your service is developed. Think outside the box and get creative to put a smile on the faces of your prospects and customers and have your business/brand be top of feed and top of mind! 3. Collaborate with Influencers and Create Gift Suggestions | Lilach Bullock, Content Marketing and Social Media Specialist It’s difficult to stand out during the holiday season when everybody is sharing special offers and discounts. But one way to stand out and generate better results during this period, is to collaborate with a relevant social influencer as they can help you reach a wider audience. However, you need to start working on this campaign way ahead of time: from finding the ideal influencers to work with to planning the actual content, it’s a big project but one that can yield amazing results. 
Another tip I have to mention is to create remarketing campaigns on social media and target all of those people who viewed your products but didn't buy. Everyone is looking for gifts during this time period, so chances are they're checking out a lot of ideas and products — remind them of your products at the right time and it can have an amazing effect on your sales. 4. Give Your Social Media Channels a Holiday Makeover | Virginia Nussey, Marketing Director at MobileMonkey Holiday fever is not just for ecommerce. B2B should get hyped for the holidays, too. Holidays are an occasion for a company to show its appreciation for customers and staff, along with its culture and brand. And doing so can have a positive marketing impact through visibility and brand affinity during the cheery time of year. Give your Facebook chatbot and social media avatars a holiday makeover — and that will mean something different for every brand. Just because B2B marketers don't have a Black Holiday sale to promote for the holidays (although you certainly could!), doesn't mean you shouldn't have some holiday fun. Your customers (and future customers) may fall a little more in love with you when you take the opportunity to get in the spirit! 5. Curate Sentimental User-Generated Content | Dennis Yu, CEO of BlitzMetrics My №1 tip for the holidays … ask customers and employees what they're grateful for, collecting the pictures and videos. Then after getting their permission, you now have a massive library of UGC (user-generated content) that you can mix and match to drive sales without having to rely as much on sales and discounts. And now you've solved your content issue, too. 6. Run Remarketing Ads | Neil Patel, Founder of Neil Patel Digital During the holiday season, expect your ad costs to increase. Consider pushing out more educational content and sharing it on your social profiles. You can even spend a bit of ad money to promote these educational pieces. From there, remarket to all of those users and pitch them your product/service through remarketing ads. It's one of the cheapest ways to acquire customers from the social web at an affordable rate. 7. Show the Human Side of Your Business | Sujan Patel, Co-founder of Mailshake Something I've seen that customers and followers of our brand engage with around the holidays is learning more about the team behind the scenes. We are fully remote, and have employees working literally around the world. We'll work with our employees to share interesting stories about them with our audience to give people the human side of our business. People are in "family" mode, not "business" mode around the holidays. Sharing our company family with them pulls on that thread a bit. 8. Start Early | Marsha Collier, Social Media Author It's a two-pronged approach. Start by reconnecting with your existing customers very early on without a hard sell. Let them know you're there to help make their holidays easier. Then during the season, your ads should always go for the hard close — make your offer ads irresistible. 9. Create Holiday-Themed Content | Kristel Cuenta-Cortez, Social Media Strategist There's so much truth in the statement "If you fail to plan, you plan to fail," especially when crafting a social media campaign for your brand. One best practice successful brands use to ramp up their campaigns is to put together a holiday-themed content schedule based on their goals.
For example, if your goal is to solicit customer reviews and collect user-generated content that you can utilize in the future, you can run a simple photo contest where you ask your customers to submit their entries with a branded hashtag. Pick a relevant prize and decide on the theme, and find the best time to launch it! Monitor your results and adjust your strategy as you go along! This not only provides social proof, but it also saves valuable time and effort, since user-generated content is generally free. 10. Leverage Influencers | Lisa Dougherty, Community Manager at Content Marketing Institute My number one social media marketing tip for B2C marketers is to work with top influencers in your niche. People like to scroll through their newsfeeds looking for gift-giving ideas. I know I do. And, they tend to trust brand recommendations from individuals (even if they don't know them). But, before you get started, make sure you've set a clear goal that aligns with your business objectives. Once you've determined your goals, you'll need to find the right influencers in your industry to work with. Once you do, put those influencers to work as your brand's little elves creating customized content for your social media channels to help increase visibility and trustworthiness, and generate ROI for your brand. Be a Unicorn in a Sea of Donkeys Get my very best Unicorn marketing & entrepreneurship growth hacks: Sign up for occasional Facebook Messenger Marketing news & tips via Facebook Messenger. About the Author Larry Kim is the CEO of MobileMonkey — provider of the World's Best Facebook Messenger Marketing Platform. He's also the founder of WordStream. You can connect with him on Facebook Messenger, Twitter, LinkedIn, Instagram. Do you want a Free Facebook Chatbot builder for your Facebook page? Check out MobileMonkey! Originally posted on Inc.com
https://medium.com/marketing-and-entrepreneurship/10-holiday-marketing-tips-from-larry-kim-neil-patel-and-more-ac0731e1e7a7
['Larry Kim']
2020-10-20 08:08:14.523000+00:00
['Marketing', 'Entrepreneurship', 'Business', 'Social Media', 'Marketing Tips']
In Theory, we all Need sex
All humans are sexual creatures. If we weren’t, our species would have gone extinct long ago. And yet, many of us remain reluctant to accept sex as part of our shared humanness, a key component of our relationships and interactions. Some of us have been conditioned to view sex as dirty and reprehensible, something we should endeavor not to want, think about, or even discuss. As a result, we either go without, have bad sex, or do not trust ourselves to state our needs, preferences, or fantasies. We all have them but we pretend we don’t so as not to appear wanton or lewd, so as not to jeopardize the carefully curated exterior many of us present to the world. This unwillingness to embrace our sexual selves creates myriad issues that can destroy lives. Gay folks end up trapped in heterosexual relationships; trans folks end up trapped in a body that doesn’t fit; hetero folks end up trapped in lies. With courage and communication, almost any situation is reversible. But not everyone will find it in themselves to accept their sexuality in all its complexity. Further, not everyone has the good fortune of living in an open-minded and supportive society. Or of having a partner with whom we’ve created a safe space conducive to the joint exploration of sexuality. Although sex is never a solo pursuit, how many of us go it alone regardless of partnership status? How many of us hoard our desires for lack of a willing interlocutor? How many of us outsource our fantasies to strangers via specialized websites or by hiring a sex worker? How many of us have resigned ourselves to a sexless existence in which the only relief we get comes in the form of erotica or porn, when we’re able to react to it at all?
https://asingularstory.medium.com/in-theory-we-all-need-sex-2f533095e51f
['A Singular Story']
2020-09-09 19:12:41.340000+00:00
['Sexuality', 'Mental Health', 'Self', 'Relationships', 'Culture']
The Now — Understand the Gift of the Present Moment
Eckhart Tolle, a well-known mindfulness teacher and the author of 'The Power of Now', who's brought a lot of perspective to this subject, says: In today's rush we all think too much, seek too much, want too much and forget about the joy of just Being. Nothing has happened in the past; it happened in the Now. Nothing will ever happen in the future; it will happen in the Now. All you really need to do is accept this moment fully. You are then at ease in the here and now and at ease with yourself. Not knowing how to get to that state and just be there sadly brings out a multitude of unwanted consequences. Unease, anxiety, tension, stress, worry — all forms of fear — are caused by too much future and not enough presence. Guilt, regret, resentment, grievances, sadness, bitterness and all forms of non-forgiveness are caused by too much past, and not enough presence. — Eckhart Tolle Why do we struggle to be present? Well, to give you a simple answer first, it's because we are human. All non-human species — such as birds, flies and squirrels — find it natural to be in the present moment all the time. That's what powers their senses, their behaviour, and their actions. They do it seamlessly, instinctively, intuitively — without even being aware of doing it. But it is precisely awareness, which distinguishes human beings from other species, that makes it so hard for us to live in the present. — Eyal Winter Ph.D. Eyal Winter, a professor of economics at the Hebrew University, says that we as humans — compared to animals — are evolutionarily hard-wired to live in the past and the future, thanks to our cognitive functions. Our minds deny the existence of the present moment because, for them, time always passes; it's linear, and so, according to that view, there's no present moment — we only have the past and the future. Any millisecond before the present moment is already past and any millisecond later is already a future. — Eyal Winter Ph.D. He further explains this phenomenon. Other species have instincts and reflexes to help them with their survival, but human survival relies very much on learning and planning. You can't learn without taking from the past, and you can't plan without living in the future. Regret, for example, which makes many of us miserable by reflecting on the past, is an indispensable mental mechanism for learning from one's own mistakes to avoid repeating them. Fears about the future are likewise essential to motivate us to do something that is somewhat unpleasant today but has an enormous benefit for our well-being in the future. Without this fear we would not acquire education or invest in our future; we wouldn't be able to take responsibility for our health; we wouldn't even store food. We would simply eat as much as we feel like and dispose of the rest. — Eyal Winter Ph.D. And although that's all true and explains well why we struggle to acknowledge and live in the present moment, it doesn't mean that the present is non-existent. Its non-existence is only a concept of our minds, not of our whole being — not of our body and our senses, for instance. It is there all the time, in fact. But if we only use our minds to guide us in life, we miss out on it completely.
People live as if the present moment were an obstacle that they need to overcome in order to get to some better point which never arrives so that is a mad way to live and it makes living hard, it makes living into an effort. Time isn't precious at all, because it is an illusion. What you perceive as precious is not time but the one point that is out of time: the Now. That is precious. — Eckhart Tolle See, I've been there as well. For the vast majority of my life so far, this is how I lived, what I thought and how I acted. I was rejecting the existence of the present, forgetting about it and disregarding it. I was so busy being constantly in my mind that there was no space left for anything else — such as to pause and ask whether I was missing out on anything. The moment of now, perhaps. Or to even acknowledge that it existed. This year taught me a lesson — not only does the present moment exist, it's more real than anything else. Better than anything else. And it's not that impossible to be in it. How to balance our past, present and future It's been scientifically proven that too much overthinking (thinking about the past) as well as planning (thinking about the future) is not good for our health, mental health and overall wellbeing. It brings out anxiety, fear, frustration, stress, apathy, passiveness, sadness, resentment and many more emotional states that later turn into depression and chronic unhappiness. On the other hand, people who are able to be in the present more — focused on what they're experiencing 'now' — are much happier than the rest. We can just look at Buddhist monks as evidence. And trust me, there are easier ways of getting present than meditating. The more you are focused on time — past and future — the more you miss the Now, the most precious thing there is. — Eckhart Tolle Do you know how sometimes we are in the present, but aren't really there — present? Let's say you're in the room and on your laptop, but a part of you — your mind — is not there. It's wandering — in the past or future, or somewhere unrelated to the moment and the task you're attempting in that present time. That's when and how we lose our present moment. When we're there physically, but not fully — mentally and emotionally. Those two keep drifting around in times long passed or times yet to happen. Or we're there physically and mentally, but not emotionally — say, while doing tasks that feel robotic, mundane, repetitive, such as shopping, washing, cleaning, tidying up, admin work. We don't want to be there, that's the thing, so we try to rush the moment, whatever we are doing in it, just so it can pass quickly, because we can't wait to be somewhere else doing something else. That's how we deprive ourselves of our presence. We end up devaluing the present moment itself if we don't pay all activities an equal amount of respect, regardless of whether we enjoy one task more or less than another. When a task is there, it's there for us in that moment. The moment belongs to it. And so it deserves our full attention. They all deserve our full attention and respect as part of our present, regardless of the level of comfort they give us or their enjoyability score.
https://medium.com/journey-to-self/to-be-or-not-to-be-present-4db6a3b3d8c5
['Lucy Milanova']
2020-11-10 15:19:31.277000+00:00
['Inspiration', 'Mindfulness', 'Mental Health', 'Self Improvement', 'Education']
Sins Of The Past Part 7
My name is Sunny Alexander-Johnson. And I'm Henry James, and we're writers for Dark Sides of the Truth magazine. Part 1, Part 2, Part 3, Part 4, Part 5, Part 6 The idea to find Dr. Hickom's home and poke around was a good one but had its particular drawback. It's not like we were willing to ask the police department where the man lived, and we doubted Judy was going to give us the doctor's address simply because we asked for it. However, we had a couple of tools at our disposal who could give us what we needed. A quick call to the offices of Robert Johnson and Manny Hermanos and a brief conversation with the cyber twins Donnie Martin and Becca Wu, and we got what we needed. Ten minutes later, we cruised past the deceased doctor's house, spying the tell-tale strips of yellow and black tape blocking entry through the front door. We parked several houses down, then slowly walked along the sidewalk, scanning the surrounding neighborhood for signs of life. The house, built on a corner lot, offered a view of the street on one side, and a neighboring house on the other. After satisfying ourselves that our stroll was unobserved, we veered off the sidewalk and walked along the side of the house. And that's when we reached our first challenge. From the back edge of the house to the property line sat a wooden fence. In the middle of the expanse of the fence was a gate. Hanging from the gate's handle was a lock. A lock that could only be opened by moving four numbered wheels to the correct combination. "Ah, shit. It had to be one of these." "Can't you just pick it?" "No, Henry, I can't just pick it. See these numbers on these wheels and the four hash marks?" "Of course, I see them." "The only way to unlock it is to line up the correct number sequence." "Shit. We could be here all day. And there's no way I'm going to try and skinny over this damn fence. Not in my condition." "Think, Henry. The Fixer probably knew we were going to check out the doctor's house. Hell, for all we know, the Fixer already knew the doctor was dead. I think we already have the code. We just didn't know it at the time." "19,6,496,4,6 — Don't just look for the gold, look into it." "Exactly." "Okay, princess. Try the first four digits." "Nope." "9,6,4,9, maybe?" "Again, no. Hang on, Henry, let me think." "Hurry up. We're sticking out like a sore thumb here." "Just stifle it for a minute, old man." Fifteen seconds later, and after a hard yank on the base of the lock, it sprang open, and we quickly moved into the back yard of the doctor's house. "What was it?" "6,4,6,4." "Obvious." "To me, maybe. You? Not so much." "Bite me, Johnson." As we skirted the edge of a pool and headed toward the back door, we saw a dog house beneath a large shade tree in the back. Aside from the dog house, and food and water bowls sitting on the wood deck beside the back door, we didn't see any evidence of a dog hanging about. We'd never even thought about the doctor having an animal in the back yard. We guessed someone had seen to it that the doctor's animal had been taken from the premises so it could be cared for properly. "We need to thank our lucky stars we don't have to deal with a dog. It could have been a Doberman or something. I think we both need to buy some lottery tickets after this. Did you bring your lock pick shit?" "Got it right here, James. You know I never leave home without it." "What? You practicing for an American Express commercial or something?
Just pick the damned lock, and let’s get inside.” “Damn, James, get a grip, will you?” We heard a subtle click as the door lock moved back and then let ourselves in. Fortunately for us, the electricity company hadn’t yet been informed of the doctor’s demise. Stepping into a large kitchen area, we noticed a light still on beneath the vent hood over the stove. We grabbed a hand towel from the stove handle, then searched for another hand towel and, after finding it, began a systematic search through the house. “What are we looking for, Henry?” “We’ll know when it finds us.” When we found the doctor’s office, we realized that’s where he’d been when he’d been killed. If the room, seemingly reduced to shambles, didn’t tell us this office was ground zero, the bloodstains on the floor and the splatters of blood on the wall behind the desk did. “You think they found what they were looking for?” “Whatever it was? No clue, princess. Let’s take a look.” Books had been pulled from a set of bookshelves and tossed haphazardly on the floor. Almost every drawer in the doctor’s desk had been yanked out, the contents dumped on the floor, and the drawers tossed aside. There was an empty spot on the desk where we guessed a computer had once sat, the monitor attached to it dragged across the desk when whoever grabbed the computer ripped it away. While one of us inspected the remaining books in the bookshelf, the other began checking the sheaves of papers scattered all over the floor. “Henry?” “Yeah.” “What exactly did the Fixer say about gold?” “Don’t just look for the gold. Look into it?” “Look at this.” “Gold’s Storage? This looks like an invoice for some storage facility here in Lebanon. Look into the gold? As in find the doctor’s storage shed and look inside?” “This is freaking creepy, Henry. If this is what the Fixer meant, then he knew we’d come to the doctor’s house. He was counting on us finding this and tying what he said to this.” “Which means again, whoever this Fixer is, they know more than they’re willing to let on.” “And using us as pawns in some kind of sick game.” “Sounds like the Fixer we know and love, don’t it, princess? Let’s go see what kind of gold we can find.” Read On — Sins Of The Past Part 8 Let’s keep in touch: P.G. & Sharon Barnett ([email protected]) © P.G. Barnett, 2020. All Rights Reserved.
https://medium.com/illumination/sins-of-the-past-part-7-50ab41a70f3e
['P.G. Barnett']
2020-08-21 18:32:41.183000+00:00
['Short Story', 'Fiction', 'Henry And Sunny', 'Fiction Series', 'Short Fiction']
Personal Diets
Personal Diets Why are we turning our diets into personality traits? If you regularly scroll through social media (or even Medium), you’ve probably noticed some of the people you follow eat a “carnivore”, “keto”, “plant-based” or “generic vegan” diet. Maybe you are one of those people. Maybe you don’t fit in with any of the “popular” cliques, so to feel included you identify as a vegetarian or pescetarian in your social media bio. One time, I saw someone on Twitter identify as a “pesca-vegan” or “pescetarian with a little chicken.” In the past, being evangelical about diets appeared to only really be a thing on television programs, in magazines, or with the friend who is selling diet supplements as their side hustle. But fast-forward to today’s age of social media, and you seem to find a diet around every corner. While the diet trends of today may appear new and revolutionary, many of the popular “diet camps” aren’t so new after all. During the Stone Age, people hunted animals since they were able to create the tools and weapons suitable for killing and processing meat. They also consumed seasonal fruits, vegetables, and seeds. This way of eating most likely inspired the creators of the “Atkins diet”. Today, many people follow an Atkins variation called the “Paleo Diet” or the “Ketogenic Diet”. Both diets are typically higher in protein and fats, and have fewer carbohydrates than usual. During the global rise of agriculture and farming (primarily in regions more suitable for farming), people started to consume more grains and vegetables like barley, rye, and corn and may have relied on these types of foods more than meat. Some people today would say that this is similar to the “Starch Solution Diet,” since it is high in carbohydrates — except nowadays a high-carb diet like this is often categorized as a type of plant-based vegan diet. When meeting someone for the first time, one usually tries to gauge what type of person they are. The more time you spend with the person and the more you learn about them, the more you notice certain qualities and characteristics, known as personality traits. A personality trait is a pattern of one’s thoughts, feelings and behaviours. According to psychology experts and mainstream personality tests, one’s character is based on what is known as the “Big Five.” This includes the following parameters: openness, conscientiousness, extraversion, agreeableness, and emotional stability. Each person ranks either high, low or moderate, which can predict or determine things like one’s ideal communication and learning style, how they address complex situations, their approach and role in friendships and relationships, ideal careers and more. But now, people don’t stop at wondering about someone’s openness or agreeableness. They start worrying about the person’s diet too. People are becoming evangelical about their way of eating as if their diets were a religion, and it’s not just food-related bloggers who’re now repping their diet in their bio. Online support groups bring together people who eat the same way, allowing them to share recipes and success stories for motivation. Social media pages post memes throwing comical jabs at those who don’t eat the same way, trying to convince them that their way of eating is better. These claims are sometimes in serious need of a citation — but what’s nastier is social media pages making outrageous claims about people who don’t eat like them. 
It seems like one’s diet has become a clique or trait, to the point where some people can’t find a way to properly cooperate with those who eat differently. It has gotten to a point where people need to ask for advice on how to deal with someone who doesn’t eat like them. When browsing through the menus of different countries, one will notice that the type of food one eats varies between different regions of the world. There are many factors that shape how the ideal diet is formed across the world. We live in a world of different climates and geographic features. Have you ever wondered why you don’t have açaí berries in the national cuisines of countries outside of Central and South America? It’s because they only thrive in tropical climates with swamps and floodplains. Food culture and what one eats is largely shaped by geography first. We can see this when looking at groups of people who haven’t completely bought into the idea of modernization and eating imported or factory-made foods. In the Highlands of Papua New Guinea, one can find indigenous tribes that still maintain a standard of living similar to what may have been practised centuries ago — like the Huli people. Due to the tropical climate, there is access to many plant-based staples such as taro, sweet potatoes, corn and bananas. People of the Huli tribe do hunt pigs and possums; however, these foods are typically consumed on special occasions. The Maasai people are a semi-nomadic group that live in the arid regions of Kenya and Tanzania. Unlike the Huli and their contemporaries, the Maasai live on land that severely lacks vegetation. As a result, they rely on a diet high in meat, dairy, and blood for proper nourishment. Plant-based foods are rarely consumed as they are limited in their home region. Both groups eat diets that fall on complete opposite ends of the spectrum, yet both of these groups are healthy, fit and can kick ass…probably yours too. These variations of eating have existed for thousands of years. The changes in diet weren’t because one was necessarily better than the other, but because people needed to respond to the not-always-so-lovely Mother Nature in order to survive. Our ancestors were simply playing an everlasting game of “survival of the fittest” every time nature threw a hard punch at them. The only things that have really changed diets are the opportunistic tools of today’s technology age. And, the attitudes of some of the followers who make a lot of noise on social media. I’ve heard people say they won’t associate with someone who eats a diet they don’t agree with. We’ve reached new levels of discrimination in this so-called advanced, inclusive and modern world. It’s crazy that this is a thing because how one eats is one of the least interesting things about a person. While it’s true that a diet full of junk food may reflect the lack of self-care one has or something psychological like an eating disorder, I have never met a person and thought “Hmm, I wonder if they eat carbs.” Why do people make such alliances? Perhaps it’s what happens to societies surrounded by an overabundance of food. After all, there are pockets in the world where people are starving and couldn’t care less about meat or plant-based diets. Diet isn’t always about geography. Values and beliefs largely shape a food culture. Religious beliefs often come with a text explaining all of the things one should do to live a good life — which is mostly behavioural suggestions on the “appropriate way” to do certain rituals. 
And one thing many texts do talk about is diet. In religious texts like the Bible and the Quran, one can find guidelines on which foods are considered clean or unclean for consumption. Nations that have a culture largely influenced by a particular religion may have minimal or zero supply of a certain type of food because the demand is low. For example, pork may be hard to find in Muslim-majority nations. Beef may be hard to find in Hindu-majority India. Life-changing events in history can alter a nation’s food culture too. This is especially true for many dark points in world history like slavery or famine. In times of struggle, people are forced to become more creative when preparing food in order to be full and energized, because they don’t know where the next meal will come from. The phrase “comfort food” often refers to foods that are calorifically dense and can be heavy on the stomach. Some of the comfort foods that we enjoy today may have originated from a time when food was scarce. If you were a picky eater growing up, you may have been told “Your yuck is someone else’s yum,” or some variation of this phrase. If you were to ask your parents or grandparents what they ate growing up, depending on where they lived and their economic class, you may receive some bizarre responses. Maybe they ate something that you wouldn’t consider edible or necessary. During the Khmer Rouge regime in the 1970s, much of the Cambodian population was left to starve. In order to survive, the people had to think outside of the box when looking for nourishment. Their answer? Insects. What was once referred to as “hunger food” became a new addition to Cambodian cuisine, and was later found to be a good source of protein and iron. Whether it was their personal decision or not, people have been migrating to different parts of the world for a very long time. When looking at various international cuisines, one can notice one country eats something that mirrors another one located in a different part of the world. Ever wondered why people eat curry dishes in Guyana or why Americans in Louisiana and surrounding areas eat a rice dish that looks like the offspring of jollof and paella? When people go to another part of the world, one thing they bring with them is their food culture. They’ll often try their best to replicate their favourite meals from home in their new land or enhance them using newly-discovered ingredients and cooking techniques. In the past, diet-related illnesses were largely due to nutrient deficiencies from food scarcity and poor hygiene practices due to the lack of food safety knowledge. In today’s advanced world, most diet-related illnesses are largely due to the downside of modernization: the abundance of processed foods, in combination with the inventions of vehicles, sitting toilets and desk jobs, making people more sedentary and causing them to lose basic mobility functions. Because life became significantly easier for us, we use less energy during our day-to-day tasks. As a result, people are using less of their natural muscle functions. And because many of the yummy comfort foods from our respective childhoods and cultures are typically calorifically dense, we may consume more calories than we burn. Back then, it wasn’t a big deal because people moved more and didn’t eat as often. Now, many people are in a calorie surplus, which can explain the global increase in obesity. 
Minimizing or completely avoiding processed foods and getting enough movement throughout the day would position someone well with regard to general health. But it also depends on the person. When a group of people drink alcohol, it becomes crystal clear that some people are functional drunks after downing a six-pack of beer while others are dancing on a table after half the amount of drinks. One’s level of alcohol tolerance depends on gender, genetics, body weight, and the frequency one drinks. Assuming everyone has eaten the same amount of food and drinks alcoholic beverages at the same level of frequency, a heavier person will hold their liquor better than a lighter person, and a woman is likely to get drunk faster than a man. Like alcohol, drugs and medications can also have varying results from person to person. Age, gender, weight, experience, and frequency can determine how one will react. If you were to look at the label of an over-the-counter medication, you may notice that the recommended dosage is usually different for children, for good reason. At your local pharmacy, you may have noticed that there may be a children’s version of cold, fever or pain medication. That’s because a child is much smaller than an adult and their bodies are still developing. Two spoonfuls of cough syrup may be okay for an adult body, but a five-year-old taking the same amount would be too much for their body to handle. An overdose of non-prescription medication is dangerous. It can cause harmful things such as seizures, nausea, dizziness or rapid heartbeat. In reverse, an adult taking a child’s dosage of medication will probably not get enough to fight the symptoms they are experiencing. When a doctor prescribes medication, the patient’s doses are perfectly tailored to their gender, age, weight, allergies and other pre-existing conditions. Depending on the dosage and condition of the user, a medication like Percocet could either be a pain reliever or a way to get high. If things like alcohol, drugs, and medications can cause varying reactions, then what about food? Humans are animals. Like other animals, we too adapt to our surrounding environment when looking for proper nourishment. The perfect diet is something that not only keeps you alive but helps you thrive. This “perfect diet” is something that varies from person to person, not whatever is trending in the media. Everyone has different nutritional needs based on their lifestyle and genetics. Find what works for you, or if you can, hire an unbiased nutritionist and see how you feel after a few months. So next time you’re tempted to jump on the latest diet bandwagon, or feel the pressure to join a certain diet club, maybe that’s the time to pose yourself a question. What if there’s more than one way to eat healthy? Curious for more? Sources and references for this article can be found here. Locked down? We are, too. But we’re using this opportunity to get some writing done — and you’re welcome to join us! Whether you’re an aspiring writer or a researcher looking to share their work with the public, everyone has something to say. And we can help you get started.
https://medium.com/snipette/personal-diets-9dd2fd782ea1
['Nicole Cooper']
2020-04-17 07:01:00.933000+00:00
['Diet', 'Food', 'Health Foods', 'Culture', 'Health']
It was preached in the First and Second Great Awakenings, it was preached by the circuit riders, and at local Baptist revivals every year or many times a year.”
It was preached in the First and Second Great Awakenings, it was preached by the circuit riders, and at local Baptist revivals every year or many times a year.” Evangelicalism in America is nearing extinction due to the movement’s devotion to politics at the expense of its original calling to share the gospel, according to Mark Galli, former editor-in-chief of Christianity Today. “The evangelicalism that transformed the world is, for all practical purposes, dying if not already dead,” Galli said during the “Conversations that Matter” webinar hosted by Baptist News Global Oct. 8. He spoke with BNG Executive Director and Publisher Mark Wingfield in an hour-long webinar that is available for viewing on BNG’s YouTube channel. Now semi-retired, Galli served 20 years at Christianity Today and is the author of a new book, When Did We Start Forgetting God: The Root of the Evangelical Crisis and Hope for the Future. While he has identified at times as Presbyterian, Episcopalian and Anglican, and recently became Roman Catholic, Galli said he has remained true to his evangelical upbringing that emphasized evangelism and spiritual renewal. “I am an evangelical Catholic,” he said. Galli spoke on an array of other topics including the culture war divisions between Americans, the polities that divide churches, and how dialogue may help pastors and others hurdle those barriers. That editorial But he hit on a very high-profile topic, too: his December 2019 Christianity Today editorial describing President Donald Trump as morally unfit to hold office and arguing for his removal. It was published during the Congressional impeachment hearings. “He himself has admitted to immoral actions in business and his relationship with women, about which he remains proud,” Galli wrote. “His Twitter feed alone — with its habitual string of mischaracterizations, lies and slanders — is a near perfect example of a human being who is morally lost and confused.” The piece generated severe backlash from the right, including from the president himself. The viciousness of the responses was often hard to bear, Galli said. The one possible thing he would redo, he said, is the headline — “Trump Should Be Removed from Office” — which placed the emphasis on politics when it was faith that motivated his position. “I was making moral arguments to fellow evangelicals. But it sounded like a political comment.” The editorial was not, as some claimed, an effort to back Trump’s opponent in the 2020 election. It’s just that Trump has “such deeply flawed moral character” that he needs to leave office, Galli said. He has no quarrel with conservative evangelicals who acknowledge Trump’s flaws but still vote for him because he lines up on issues important to them, Galli said. 
There were certainly plenty of those in 2016, according to a pre-election Pew survey that Christianity Today published titled, “Most Evangelicals Will Vote Trump, But Not for Trump.” Rather than citing issues like abortion, religious freedom and support for Israel as rationale for voting Trump, white evangelicals were much more concerned about the economy four years ago, Galli recalled. “I get it. I disagree with their choice, but I respect their wrestling.” On the other hand, he said he does not understand those evangelicals who refuse to criticize Trump on moral grounds, who believe liberals need some shaking up and describe the president in messianic terms. He recalled an anecdote about a pro-Trump Christian describing the president as sitting “at the right hand of the Father” and said of this ideology: “That’s idolatry, clearly and simply.” Demise of evangelicalism To explain the demise of evangelicalism, Galli cited the legacy of Billy Graham, who even in advanced age preached to invite men and women of all races and cultures to Christ. “He was the glue that held evangelicalism together for many years,” Galli said. “An unfortunate symbol of what evangelicalism has become is epitomized by his son, Franklin,” he continued. “Franklin stands for evangelicals on both the right and the left who believe that politics is an essential work of evangelical faith.” One symbol of that politicization is an organization called Evangelicals for Trump. “In describing themselves in that way, they have become just another political interest group, taking the great name ‘evangelical,’ with all its theological and doctrinal and gospel history and meaning and putting it in the service of a political candidate,” Galli asserted. And from his vantage point, the news is no better from the evangelical left. “What’s really troubling to me is that instead of decrying this coopting of the term ‘evangelical’ for political gain, the evangelical left has only mirrored this tragic move when they recently formed a group called Evangelicals for Joe Biden.” Evangelical groups that focus almost solely on social justice and cultural change, instead of sharing the gospel, are contributing to the decline, too, he said. “As a result, we’ve started to let the agenda of the world determine the agenda of the church, and we’ve sidelined evangelism and church renewal as the result.” Galli said he noticed this trend during the hiring process at Christianity Today beginning in the 1990s. Candidates overwhelmingly were interested in cultural analysis, and perhaps one in 10 story ideas pitched was about evangelistic outreach. For the most part, he added, the lack of interest in that founding mission of faith sharing exists across the board. “I am going to go so far as to say that our fascination with social amelioration, and political activism, has watered down the evangelical faith to the point that it looks little different than mainline Christianity,” he said. “We’ve forgotten that the genius of evangelical faith was its singular focus: spiritual renewal. ‘You must be born again’ was preached to individuals and to whole churches and denominations, from George Whitefield, John Wesley, to Charles Finney, to Dwight Moody to Billy Graham. 
It was preached in the First and Second Great Awakenings, it was preached by the circuit riders, and at local Baptist revivals every year or many times a year.” Yet, that message is not being preached much nowadays, and there will be consequences, he said. “Evangelicals today no longer have a laser focus on evangelism and spiritual renewal. As a result, I believe they will fade away as will the very term.” Who will the Lord raise up? But Galli predicted the mission of evangelism will continue, possibly under a different name. “In every generation, the Lord raises up some Christians to whom he gives the charism of evangelism and spiritual renewal. What they will be called in the future, I don’t know.” Citing the tradition of various orders within Roman Catholicism — Benedictines, Franciscans, Jesuits and so forth — he suggested one way to reclaim evangelicalism is for those called to evangelism to rise up as a holy order across the church universal. With some portion of the church focused on evangelism, Christians can be involved in the public square, love their neighbors and work for social and political justice, he added. “Christians should not run away from culture but dash right into the middle of it and do whatever it takes to show forth the righteousness of God.” Friendship amid differences Galli explained that he’s developed these insights partly in becoming Catholic, which has provided a different vantage point from which to view evangelicalism and the wider church. Regarding Christian unity, he said: “I don’t know if there is a reason for us to be apart, but it’s hard to get together because no one is willing to give up anything.” For example, talking to a Methodist and a Presbyterian reveals little difference, “but Methodists don’t want to give up their bishops and Presbyterians don’t want to submit to bishops.” Divisions within congregations, especially politically driven ones, must be addressed delicately, Galli said, suggesting pastors preach on the Bible from the pulpit and speak with parishioners about politics outside their sermons. But he acknowledged that even the Bible has been politicized in the current climate. “Unfortunately, everything is perceived as political,” he said. “We just have to remind ourselves there are more important things than politics.” Former Supreme Court Justices Ruth Bader Ginsburg and Antonin Scalia lived that approach, he said. They did not let ideological differences prevent a friendship. “That is something American leaders ought to be promoting,” he concluded.
https://medium.com/@wsinper847/evangelicalism-in-america-is-nearing-extinction-due-to-the-movements-devotion-to-politics-at-the-e3c652acb70d
[]
2020-11-19 18:49:42.772000+00:00
['Social Media', 'Sports', 'Live Streaming']
Data is the driver of NASCAR and F1. And NoSQL is on pole
Data is the driver of NASCAR and F1. And NoSQL is on pole George Russell in car number 63 during a Pit Stop in 2020. Image via Williams Racing under license to Daniel Foulkes Leon. When it comes to racing and speed, the objectives of Formula 1 and NASCAR teams are the same. Winning. And they aim to do so by having the fastest car, recruiting the most skilled drivers, and making the best strategic and competitive decisions both before and during the race. Enter the era of gargantuan data volumes that are growing exponentially: from a plethora of sensors within the cars, the drivers’ uniforms, trackside scanners, video feeds, weather reports; you name it. Database technologies and their management are now factors that not only impact the decisions happening at the track; they are also key to securing that podium finish. Formula 1 and NASCAR derive clear benefits from data analysis and processing, but their implementations and approaches are very different from one another. Within Formula 1, each team’s telemetry data is proprietary and highly guarded. NASCAR, on the other hand, makes telemetry data from all vehicles and teams available in real time to fans, teams, and OEMs (Original Equipment Manufacturers, which are Ford, Chevy and Toyota for NASCAR). But both Formula 1 and NASCAR share one key element: the rapid adoption of NoSQL technologies. Augusto Cardoso is Lead Engineer at SportMedia Technologies (SMT), the NASCAR partner that aggregates, processes, and transmits all race data to various audiences. Cardoso indicated the pace of adoption is growing across multiple sports. “We first adopted MongoDB in the Motorsports group, starting around 3 years ago. It replaced a SQL database. We are expanding our use of MongoDB in other sports like hockey and baseball. Each sport has its own requirement and MongoDB allows for great flexibility.” Kevin Harvick in car number 04 during a Pit Stop in 2020. Image via SMT under license to Daniel Foulkes Leon. Phillip Thomason, Lead Engineer at the British Formula 1 racing team and constructor Williams Racing, identifies two big advantages of using NoSQL technologies: breaking down the walls between previously siloed repositories, and allowing for collaboration across multiple teams. NoSQL helps solve particular problems that would typically occur when silos are well dug in. Thomason describes what it was like before they began using NoSQL technologies: “These queries were slow, manual and often practically impossible. NoSQL has allowed users to access the potential of all data and improved inter-departmental communication.” In this data-rich environment, embracing the variety and diversity of data through a “right tool for the job” philosophy has allowed NoSQL technologies to provide the power and speed that these teams require. This is no easy feat, since Formula 1 teams generate at least 3TB of data per race, and over 100 million data points are created in a single NASCAR weekend. Data is transmitted from the cars to the base stations across the track, and tools need to process this fast, both for analysis by the teams and for video animation pairing for broadcasting. “In the case of NASCAR, the data needs to be ready for multiple audiences in real time,” Cardoso says. “In Motorsports, we have some unique applications where we have a production truck that travels to each venue, weekly. We provide services for at-track users, including teams, car manufacturers (OEMs), and TV broadcasters. 
The challenge is that we need to have data replicated in both the truck and in the cloud. Customers use the data both directly from our truck or from the cloud. In case of an internet issue, the TV broadcast can’t stop. Our local infrastructure can operate independently, but the majority of the performance analysis and data science users connect to the cloud.” High-octane data performance is an essential part of the business, but it can’t come at the cost of usability. Cardoso adds, “Since we moved to MongoDB, with ‘in the truck’ and cloud presences, we haven’t had a single database-related issue.” Within F1, teams are actively trying to gain a competitive advantage over one another, and their use and combination of various technologies is no exception. “F1 teams are obliged to develop much of the required software ‘in-house’ as it is simply not available on the market,” explains Thomason. “As we’re a relatively small team, technology choice is often driven by existing skills within the team. Often there is insufficient time (or resources) to recruit additional skills. We rely on very capable team members that all work on the ‘full stack’, and day-to-day their roles involve research into new tech.” Within Williams, the process of bringing together structured and unstructured data is a key competitive area and one where not much else can be publicly disclosed. As database technologies advance in potential and opportunities, so do methods. Motorsports is no exception in how the push for DevOps processes is leading the way for Continuous Integration — Continuous Delivery (CI-CD). On DevOps processes, Thomason added that for Williams in F1, “We are always faced with time and resource challenges, and a drive to improve efficiency across all areas of the business. Tools like Docker have allowed us to move the traditional software development team towards the DevOps arena and helped to better define the boundary between IT and Software departments. We always had a very fast software update mechanism (releasing on a race-to-race basis) but DevOps has certainly given us more flexibility.” Telemetry data is not the only type of data that can be gathered during a race; other sources can also have huge impacts on the teams and their bonuses. One such example is within Pit Stops. Since the banning of mid-race refuelling in Formula 1, their Pit Stops are now consistently in the sub-three-second range, with Williams achieving the fastest Pit Stop in 2016. The push for shorter Pit Stops is also seen within NASCAR, where teams can track and monitor the Pit Stop times of all cars during the race. This has led teams to award special bonuses to their pit crews based exclusively on the data aggregated by SMT. An example of a Pit Stop report within the NASCAR Team Analytics app developed by SMT. Image via SMT under license to Daniel Foulkes Leon. NASCAR and SMT process their data entirely through MongoDB and microservices. Cardoso even indicated that they haven’t had a single production failure since they implemented them. “The microservices are so much quicker. I already have a funnel with scripts and everything set up… So in terms of CI-CD, it’s much easier for me to keep rolling these things out… and I can just add it in and edit.” 
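Cardoso doesn’t disclose SMT’s actual pipeline, but to make the idea concrete, here is a minimal sketch of what a real-time MongoDB aggregation over race telemetry could look like in Python. The collection, field names and session id are hypothetical, not SMT’s real schema.

```python
# A minimal sketch, not SMT's actual pipeline: average fuel burn per lap
# for each car in a session. Collection and field names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
telemetry = client["racing"]["telemetry"]

pipeline = [
    # Keep only samples from the current race session.
    {"$match": {"sessionId": "race-2020-11"}},
    # Sum fuel burned per (car, lap)...
    {"$group": {
        "_id": {"car": "$carNumber", "lap": "$lap"},
        "fuelUsed": {"$sum": "$fuelDelta"},
    }},
    # ...then average across laps for each car.
    {"$group": {
        "_id": "$_id.car",
        "avgFuelPerLap": {"$avg": "$fuelUsed"},
        "lapsRun": {"$sum": 1},
    }},
    {"$sort": {"avgFuelPerLap": 1}},
]

for car in telemetry.aggregate(pipeline):
    print(car["_id"], round(car["avgFuelPerLap"], 3), "units/lap")
```

The fuel application discussed later in the article reportedly chains 28 such stages; the pattern is the same, just with a far deeper pipeline.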
On the adoption of data, Thomason added: “F1 has always been data-driven, and with the governing body restricting what we can do with testing (limited tyre, Computational Fluid Dynamics, wind tunnel, and track testing), this has led to an efficiency drive around the technology to extract the maximum benefit from the running we are allowed. Telemetry data expansion was therefore driven by the teams seeking a competitive advantage.” Whereas the two sports place a high value on data and both reap clear rewards from its analysis, their approaches are very different. For Formula 1, Thomason describes it as a highly valuable and coveted resource: “Given the history of F1, any data publicly released would be jumped on by teams to analyse their competitor’s performance and would require significant investment in resources for teams to remain competitive.” NASCAR, on the other hand, is strikingly different. “That’s one thing NASCAR did,” Cardoso added, “everybody can see everybody’s data. So that is a very important thing. And that was all NASCAR policy! For that I commend them, because that makes it more accessible.” This has led to a very different approach to the data and how competitiveness is built around it. Cardoso is part of a team that developed an application used directly by NASCAR, the teams, and its OEMs that helps calculate the fuel efficiency and fuel estimate of every car in real time. The application runs instantaneously. When showing the speed of the results, Cardoso added: “I don’t know if you can appreciate how fast this was. This aggregation there is 28 stages, I did the whole thing in Studio 3T.” Not only can a trailing team see in real time if the cars in the lead have enough fuel to finish the race, but fans and competitors too can strategize along as the race develops. Example of fuel metrics within a data application SMT has developed for NASCAR. Image via SMT under license to Daniel Foulkes Leon. Formula 1 and NASCAR might treat telemetry data differently, but both want their data to be transferred as fast as possible. Data transfer rates are the general bottleneck for both F1 and NASCAR, as issues with connectivity can happen due to network errors or base stations that flood alongside the track during rainfall. Even so, they have achieved impressive latency speeds already. Within NASCAR, SMT is able to process the data from the track to their on-prem truck, to the cloud and to their users even faster than the general broadcast delay. Cardoso said, “So just to give you an idea, when you are at the track the latency to the data that we send is about 5 to 6 milliseconds. It’s really fast. The data latency when it gets to the cloud, it gets to be closer to 100 milliseconds. And that big gap, basically going from the truck to the Data Center, the fastest I can get that is about 60 milliseconds of latency.” The push for faster data speeds is also shared by Williams, as Thomason indicated, “The drive is to move processing as close to the race as possible in order to reduce latency in the data processing/stream enrichment suite. A tenth of a second saved here has a real race-performance benefit.” Speed, power, and versatility are the requirements of every element of motorsports, from aerodynamics to tyre quality to telemetry streams. Database technologies and architecture models are no exception; NoSQL is now another reliable element to help gain that next fraction of a second.
https://towardsdatascience.com/data-is-the-driver-of-nascar-and-f1-and-nosql-is-on-pole-e5898f212d38
['Daniel Foulkes Leon']
2020-12-12 14:26:53.571000+00:00
['NoSQL', 'NASCAR', 'Racing', 'F1', 'Mongodb']
The top 5 crypto myths of 2018
Following on from our previous blog on things that are not true about cryptocurrency, we wanted to deliver part 2, to address some new rumours that we are seeing floating around the media following the current market situation. 1. Cryptocurrency mining will consume all of the world’s energy by 2020 There is no denying that the mining of cryptocurrency is a costly process. Miners use special equipment to mine crypto, which is energy intensive. With global warming being one of humanity’s greatest challenges, it is no surprise that there is a lot of negativity surrounding cryptocurrency, energy consumption and its potentially harmful impact on the environment. However, research which suggests that Bitcoin uses 1% of the entire world’s energy is very misleading. As the crypto space has evolved, cryptocurrency mining has actually been driving green energy innovation. The energy used to power a large percentage of Bitcoin mining farms comes from energy which has already been produced but not used, and which certain countries are desperate to offload. For example, in the case of solar or wind farm energy, it is difficult to unload energy which has already been created. When the supply outweighs the demand it is not possible to just turn the energy off; if it is not used, then it goes to waste. This is a common issue in countries such as China, where around 70% of Bitcoin is mined. Here, Bitcoin miners are making use of unwanted solar energy that would otherwise go to waste. There are also many ICOs and mining farms that are dedicated to the use of renewable energy mining, such as Hydrominer, located in the Austrian Alps, which uses renewable hydroelectricity, and Golden Fleece, which set up a mining farm in an abandoned factory in Georgia that uses energy generated by flowing water from the nearby mountains. Another example is Estonia, which has a state-owned wind farm that resides on a small island in one of the windiest parts of the country, providing an endless supply of green energy for the mining of Bitcoin and Ethereum. 2. Bitcoin is too expensive There is emphasis placed on the cost of one Bitcoin. Yes, it can be costly depending on the market price at the time that you decide to invest and how much you would like to own. Many people are put off investing in Bitcoin because they are under the assumption that to be able to invest they need to be able to afford to buy a whole Bitcoin; this is actually not the case. Bitcoin is divisible into units, which means that you can buy half a Bitcoin, a quarter of a Bitcoin and even a fraction of a Bitcoin depending on how much money you have available to invest. The smallest amount of Bitcoin that you can purchase is called a Satoshi, named after Bitcoin’s creator Satoshi Nakamoto. For example, at the time of writing this, $100 USD = 3,047,998 Satoshi = 0.03047998 BTC. Suddenly the concept of owning Bitcoin does not seem quite so out of reach now, does it? 3. There is not enough Bitcoin Bitcoin is unique in that there is a cap on the amount that will ever be created, 21 million to be exact. With all the talk of Bitcoin replacing money as we know it, something that has come to light is that some are under the impression that the finite amount of Bitcoins will be a limitation. The answer is no — this will not become a limitation because another unique factor about Bitcoin is that transactions can be denominated into smaller sub-units which are called bits! There are 1,000,000 bits in 1 bitcoin. 
As transaction size decreases, the number of sub-units can be increased in the future if needed. 4. Investing in Cryptocurrency is still too complicated for regular people This one we agree with to a certain extent: there is still a lot of talk about market value and regulation and less talk about education and adoption. To invest in the space you do not need to be technically minded: you don’t need to understand the mining process, know how to code or have a comprehensive knowledge of how to use the blockchain. Far more attention is paid to how cryptocurrency is created than was ever paid to fiat currency; many of you won’t have given a second thought as to how fiat is created and how the printing presses are used. And yet you still use fiat every day! Are all of those who have invested in shares in Google experts in SEO and search algorithms? The fact is, the mainstream media and cynics are over-complicating the crypto space, creating a dark cloud over the concept because they fear it. What better way to increase negativity than to create confusion? The most important things to know are what cryptocurrency is, the problems it solves, how you can invest, what different types of coins are available and where you can invest. There is still a large gap in the market for cryptocurrency and blockchain education, one that, 10 years on, we are only now starting to see filled. Education is the key to understanding everything in life; that is why we created B21 Life, our cryptocurrency education and training app. Why not take the time to download it today, participate in some lessons, watch some training videos and read our short and simple facts about cryptocurrency, and then decide if it is still too complicated for the average person! 5. Bitcoin is dead! We thought that we would save the best until last: Bitcoin is not dead! The market has seen a very tough few months and while the prices of Bitcoin and other cryptocurrencies have fallen dramatically, they still have value! Bitcoin is worth $3,590 at the time of writing this; while this is a significant drop from its $20,000 high in December 2017, it’s worth noting that Bitcoin entered the market at just $1! As with any type of investment opportunity, it is volatile and the price is unpredictable and is always set to fluctuate. According to a recent study by Cambridge University, while prices have declined significantly this year, cryptocurrency user numbers have increased to an all-time high! Bitcoin might be going through a rough patch but it is most definitely not dead! The above are just a few of the untrue statements that we have seen lately circulating around online. If you have any questions that you would like us to address, feel free to comment below. To find out some true facts about cryptocurrency, check out the ‘Did you know’ channel on B21 Life, our free cryptocurrency education and training app!
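As a quick sanity check on the unit arithmetic above, here is a small Python sketch of the conversion. The BTC price used is a hypothetical snapshot; the real price moves constantly.

```python
# Unit arithmetic only; the price below is a hypothetical snapshot.
SATOSHI_PER_BTC = 100_000_000  # 1 BTC = 100 million Satoshi
BITS_PER_BTC = 1_000_000       # 1 BTC = 1 million bits

def usd_to_satoshi(usd: float, btc_price_usd: float) -> int:
    """Return how many Satoshi a given dollar amount buys."""
    return round(usd / btc_price_usd * SATOSHI_PER_BTC)

price = 3281.0  # hypothetical USD price per BTC
sats = usd_to_satoshi(100, price)
print(sats)                    # ~3,047,851 Satoshi for $100
print(sats / SATOSHI_PER_BTC)  # the same amount expressed in BTC
```

Plugging in whatever the market price is on a given day yields the equivalent figures for that day.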
https://medium.com/b21official/the-top-5-crypto-myths-of-2018-28636c08cf41
['Rhea Craib']
2018-12-18 14:41:10.577000+00:00
['Education', 'Cryptocurrency', 'Crypto', 'Bitcoin', 'Fintech']
Why We Need To Give Respect To Aretha Franklin And The Dying Craft Of Artistry
Why We Need To Give Respect To Aretha Franklin And The Dying Craft Of Artistry And to those who dare to be creators in this time of disrespect When it was announced that the “Queen of Soul,” Aretha Franklin, was dying, my initial reaction was sorrow at the imminent loss of a gem, who represented a time when being an artist was only possible if you were truly artful. And that could encompass whatever method of artistry permitted that level of dedication and sacrifice — a level that is woefully missing these days. Thanks to the climate of superficiality mixed in with heaps of self-adulation and pompousness, not to mention high doses of delusion, we can only afford to recognize overnight sensations that are popular for being popular, and possess a background story that goes viral on the basis that it sticks to the “rags-to-riches phenomenon.” Actual talent isn’t necessary these days, because it’s all about the race to spew out energetic tracks that are re-worked constantly to keep up with erratic changes in temperature. And what’s even more fascinating is how very little respect some of us have for the artists who did do the work, and produced the stuff that doesn’t just change the landscape of an industry, but also influences the mental trajectory for anyone who is lucky enough to have more than just a taste. Lauryn Hill comes to mind when I think about the greats of my generation, who have given so much as a token of their generosity, which happens to be the most selfless and invaluable gesture of goodwill from originators. Hill’s phenomenal masterpiece, The Miseducation of Lauryn Hill, is currently taking up residence in the illustrious vault of the Library of Congress, and while that honor is well-deserved, there’s also the magnificence of the young Black woman who made me feel included in a narrative of delightful complexities — that can only be depicted with soulfully lyrical banter. The girl who displayed the language of hip hop with the Fugees and then elevated the universal appeal with her personalized affection for a genre she uncannily perfected has spent the years since her Grammy-winning days staying relevant with controversy and growing disrespect from the public. She’s now embroiled in a battle of wills against a nemesis who is determined to discredit her success at whatever cost. Whether he’s within his rights to do so doesn’t really compare to the horror of existing in a world where a talentless Cardi B is considered worthier than a woman who spent her impressionable years making enough of an impression — to warrant an instinctual level of reverence — no matter what. But the days of unfiltered artistry are dying out, and that’s what makes the passing of Aretha Franklin very hard to take. As a Generation Xer, my exposure to the Queen and all the others who embodied that realm of never-ending hits that have been spiritually packaged as classics — was through the musical library of relatives. They were lucky enough to be present during an era that birthed the kind of shit that will never be replicated. We will never again witness the startling audacity of hard-earned labor, that breeds the beginnings of fandom, that settles into fascination for what most of us can’t render — before becoming an endearing act of profound respect for the creator and what has been supremely created. When I first heard the infectious “Giving Him Something He Can Feel” from the nineties girl group En Vogue, I was enthralled and unaware that this was merely a sample from Franklin’s treasure chest. 
And then years later, I treated myself to a CD that contained the soundtrack from the movie Sparkle. That was my introduction to the breathtakingly alluring voice of an artist who used her art to generously remind us of how the greats make it look so easy. Her career mimicked the dignity of her station, and like her counterparts, she seamlessly evolved with the times and collaborated with the up-and-comers by lending her vocals to one of my faves — “I Knew You Were Waiting (For Me)” with the late George Michael. There was also the anthem of the eighties “Freeway of Love” that made the global trek, and managed to reach me in Lagos, Nigeria. And all through her stunning and incomparable trajectory, there was always a feeling of practiced security whenever she made appearances. From the performance at the inauguration of the very first Black president of the United States to being the first woman inducted into the Rock and Roll Hall of Fame to receiving the Presidential Medal of Freedom — and all the plethora of honors that formulate the foundation of a national treasure — Franklin evoked exactly what Barack Obama so aptly surmised. “American history wells up when Aretha sings.” “Nobody embodies more fully the connection between the African-American spiritual, the blues, R&B, rock and roll — the way that hardship and sorrow were transformed into something full of beauty and vitality and hope.” While that sounds spectacularly accurate, for me it always comes back to the painful demise of pure artistry, and how the precious few who did it for the glory of love and the commitment to laborious requirements are rightfully leaving us with evidential capacity of what once was, and will never be again. It’s sensationally gratifying that the Queen of Soul was once paired with a woman who also electrified us with the symbolic translation of melodic jewels that will never fade. “A Rose Is Still a Rose” is the surviving magic between Franklin and Lauryn Hill that happened two decades ago, and resulted in a gold-plated album for the older royalty. As we bid adieu to another orchestrator of what we fondly rely on when we need to be rescued from the dullness of the present, there’s the wonderment of how we assumed that the good times would continue to roll — with or without the blessing from above. The future of creators in these hostile times signals a forecast that’s not so favorable because despite the palate for excellence, we’ve succumbed to the theory of instant gratification, and the branding of mediocre entries that have enough plastered hearts to secure misplaced endorsements. The passing of legends serves as the emptying of vessels of religion that united us once before, when all we had was the rhythm of our hearts that followed the beat to manuscripts of memorable feats — tracing our familial and selfied scrapbook. How will it all be measured decades from now, when the dust blows away the layers to reveal pebbles that aren’t strong enough to resist the windfall of the majestic yesteryears — that will still stand firm with fastened nostalgia? Thankfully, the host of angels that are assembling don’t have to worry because they did what needed to be done without asking anything in return. And that’s why we give complete respect to Aretha Franklin, and the artistry of her craft that is slowly leaving us. We respect those who are quietly paying homage to the wealth of fortitude and divine adherence to the product — and its polished finish. 
The sparkle will never stop flashing for attention, even when the Queen takes her final bow.
https://nilegirl.medium.com/why-we-need-to-give-respect-to-aretha-franklin-and-the-dying-craft-of-artistry-d4329509ff95
['Ezinne Ukoha']
2018-08-16 18:08:53.483000+00:00
['Death', 'Culture', 'Icons', 'Life Lessons', 'Music']
Serverless GDPR Workflow — From Design to monitoring with Lumigo
Solving the “Right to be forgotten”: A step-by-step guide to AWS Step Functions workflows with full monitoring and troubleshooting capabilities This blog was written by Jente Peeraer and Tom Hertogs Intro As an enterprise grows, it tends to acquire or develop a few (IT) subsystems containing various forms of personal data about customers. This is perfectly natural and is necessary for the normal operations of a business. But what if you want to delete the data of one or more of these users? How would you coordinate this deletion across the various systems? What happens if one of these systems fails to delete the data of a user, or better yet, how would you know that a system was unable to delete the data of one or more users? In this blog, we will talk you through a possible way to handle the GDPR regulations of the European Union in your organisation by using a serverless application running on AWS. We will also talk more in-depth about the difficulties such an application encounters concerning monitoring, logging and alerting, and how to tackle them. Webinar Now that you know what this blog post is about, let us take a short moment of your time to tell you that there is also a webinar for this post. During the webinar, we give a demo of the entire workflow and show you in detail how Lumigo works and how it can make your monitoring a lot easier. So feel free to have a look at it here. Outline What is GDPR? Before we start architecting a solution, we need to understand GDPR. For this, it’s best to have a look at the GDPR guidelines, or more specifically, in our case, the “Right to be forgotten”. Making a long (legal) story short, it boils down to: “Enterprises have about a month to delete user data once requested”. How do we tackle this? What you need is an overall controlling workflow to manage and monitor the deletion of user data across multiple systems in parallel. You don’t want to wait for one system to have deleted the user data before moving on to the next. You’ve got a deadline to meet, after all. You need to start a workflow once a user has requested his data be deleted, and you need a workflow for each subsystem to request the deletion of that user’s data and monitor the result. This overall workflow could look something like this: This workflow could delete data in an arbitrary number of subsystems, which would each need a controlling workflow for the deletion in that specific subsystem. These underlying workflows could look like this: We end up with an overall controlling workflow, managing the underlying workflows, which in turn manage the deletion of a user in a specific subsystem. Now that we’ve got our high-level overview out of the way, let’s deep dive into the implementation. Technical architecture Now that we have a general overview of what we need and how it needs to interact, we need to decide which technologies to use. We need: a way to control the different states/steps in the workflow and manage the transitions; a way to send requests to a subsystem to delete the data of a specific user; and a way to notify any interested party when something has gone wrong in this process. 
We’ve chosen to go for a completely serverless approach by using the following services of the AWS cloud: AWS Step Functions to manage the workflow; AWS Lambda to send the requests; and AWS SNS to send notifications regarding the failure of a user’s deletion. You might be asking yourself why we’ve chosen these services, and not just created a server specifically for these user deletions, which would then manage the workflow and the deletion in the specific subsystems. Architecturally speaking this would be easier, right? Well, yes. Purely architecturally speaking it would be easier, but the combination of the services above has many benefits: AWS Step Functions will give you a visual representation of where exactly a specific request is at a given time, along with some information about the various transitions it went through. We’ll show you what it looks like later! It also gives you a visual representation of which requests are still in progress, which have completed and which failed. A specific workflow can run for up to a year (at least at the point of writing this). Since our workflow has about a month to complete, this is more than enough. We don’t need to maintain any state about where a request is at a given time since it’s managed for us, which greatly simplifies what we need to do once the server crashes and the processes need to be resumed from where they left off, since… well, there is no server that we manage, so… nothing to do here! At least not for resuming the workflow; we still need to handle normal error cases, which can be done by using the retry mechanism built into Step Functions. Since we have a completely serverless approach, we only pay for the moments it’s actually being used. Suppose only one user has requested that his data be deleted this year; then we only pay for a few transitions, a couple of milliseconds of processing we needed to send the request and a couple of seconds to check whether the user has been deleted. On the other hand, the server would need to run continuously and would continue to increase our bill. Why pay for something you don’t use… right? Let’s look at the other side: imagine a million people requesting that their data be deleted on the same day. While the server would be struggling a lot to comply with this demand and would quite possibly go down a few times, the serverless approach is built for scale and would easily manage this load. You could even add some queuing in between to prevent the subsystems from being overloaded. Because our workflow has been split into various Lambdas, they could be written in different languages or even be maintained by different people. Another one of the benefits of a microservice-like approach. Now that we’ve chosen our technology stack, let’s go a bit deeper into some technical details. For the subsystems, we’ve made a distinction between synchronous and asynchronous subsystems. The synchronous system would delete the data immediately. In contrast, the asynchronous system simply registers the user deletion and would delete the user over time (possibly with some form of batch processing). Both methods require a slightly different approach regarding the monitoring of a user’s deletion. The synchronous flow is the easiest and is shown below. The asynchronous flow is slightly more complex and requires a polling process after the delete has been sent, which takes the deadline into account. The flow is shown below. The last addition is the overall workflow management. 
The last addition is the overall workflow management. In this case, we’ve chosen to start everything on an API request. When someone sends a delete request with their id, we store the id along with the timestamp of the deletion request. The flow then needs to be completed within 30 days after the original request was made.

Problems

Now we have a fully serverless architecture managing our GDPR deletion process. Great! Now we’re done, aren’t we? Well, not exactly. While the flow seems to be okay and most cases seem to be covered, this is only enough for our happy cases. What do we do once we receive an input we didn’t expect? What if we misconfigured something? How do we even know about these things? On top of this, we might want to have an overview of how much activity is going through the system and the number of errors occurring in a given time frame, and if any errors occurred at any given point in time, we’d like to know, wouldn’t we? These concerns can be divided into three parts: logging, monitoring and alerting.

Logging

While it’s pretty easy to set up logging using your favourite framework in any of the various programming models of AWS Lambda, consolidating those logs into one whole is a different story and is not an easy task. When working with external systems, it can also be very useful to see exactly what we received as input from the system at a given time.

Monitoring

The most important things we’d like to know in this architecture would be:

- The number of times our functions are invoked
- The number of errors we’ve encountered
- How much of the allocated memory we consumed
- Our functions’ execution times
- Whether we ran into any time-outs

Most of these things are easily visible in the Lambda dashboard in the AWS console, but we’d also like to have a complete overview of all the functions that we manage. To accommodate this, we’d need to create a dashboard, which has to be updated every time we create a new function.

Alerting

If something unexpected happens in one of our functions, we’d like to know, or even better, we’d like to be notified. Using the services of AWS itself, we’d need a combination of different alarms, and this for every function we’d like to run. While this is all possible, it’s a little cumbersome to do so every time we add a function.

Solution

The problems above arise in most projects once the architecture becomes complex, and a framework/tool for monitoring serverless applications becomes more and more of a necessity. In this case, the tool we’ve looked at is Lumigo, a monitoring platform specifically designed for serverless applications running on the AWS cloud. We’ll be showing you how to integrate it into your application and how to find what you’re looking for.

Application

We have created an application which is used to follow the rules imposed on us by the GDPR guidelines. This application will be deployed on AWS and will consist of multiple resources, like API Gateways, Lambdas, DynamoDB, Step Functions and so on.

Organisation

Let’s start by looking at our organisation. We have three departments containing data that should be removed when our deletion process starts. These domains are project, order and customer. In order to comply with the guidelines, data needs to be removed from all these domains. We’ll have a look at how our application will be set up.
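Before we do, here is an illustrative sketch of the two entry-point Lambdas that kick everything off: the one behind the API Gateway that stores the deletion request, and the one triggered by the DynamoDB stream that starts the workflow. The table name, state machine ARN and AWS SDK v2 style are placeholder assumptions on our part:

// Illustrative sketch; all names are placeholders.
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();
const stepFunctions = new AWS.StepFunctions();

// 1. Invoked by API Gateway: store the user id and the request timestamp.
exports.storeDeletionRequest = async (event) => {
  const userId = event.pathParameters.userId;
  await dynamo.put({
    TableName: 'user-deletion-requests',
    Item: { userId, requestedAt: new Date().toISOString() },
  }).promise();
  return { statusCode: 202, body: JSON.stringify({ userId }) };
};

// 2. Invoked by the DynamoDB stream: start the main workflow per new request.
exports.startWorkflow = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== 'INSERT') continue;
    const item = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.NewImage);
    await stepFunctions.startExecution({
      stateMachineArn: process.env.STATE_MACHINE_ARN,
      name: `delete-${item.userId}-${Date.now()}`, // execution names must be unique
      input: JSON.stringify(item),
    }).promise();
  }
};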
Main Workflow

The main workflow coordinates the process of removing all the user data available throughout the different domains of our organisation. When we send a REST request to an API Gateway to remove the user data of a specific user, we invoke a Lambda, which stores the provided user id in a DynamoDB table. This DynamoDB table produces a DynamoDB stream, which invokes another Lambda. This Lambda kicks off the workflow. As mentioned before, our organisation has three domains which all store user data: the project domain, the order domain and the customer domain. This workflow will send a request to each domain in parallel, asking them to kindly remove the data of this user.

In the above image, we see a successful run: the domains successfully removed all the user data they had, and all finished executing successfully. However, in practice, things won’t always go so smoothly. What happens if one of the domains is not able to delete the data in time? This case is illustrated in the flow below.

In this case, the order domain had some problems deleting the user’s data in its systems and failed its flow. This made our main workflow fail as well, because Step Functions considers the entire parallel step to have failed if an error isn’t properly handled. If you want to know more about Step Functions parallel steps, check the docs here.

Synchronous Workflow

The main workflow invokes three other workflows. The project domain contains very little user data and has a very easy and straightforward way to remove everything it has in a timely manner. Because of this, it can provide a synchronous API to send a removal request. This allows us to implement a very straightforward synchronous workflow, as shown below.

In the DeleteUserData step we invoke the synchronous API of the project domain. The API returns an error response when it fails to delete the data, which the function in turn translates into a boolean specifying whether the deletion was successful or not. In this case, the API returned an OK response and the function translated this to true, so the CheckUserDeleted step goes to the UserDeletedSuccessfully end-stage. However, this isn’t always the case, as can be seen in the next workflow.

In this example, the project domain wasn’t able to complete the deletion. The CheckUserDeleted step will then delegate to the UserDeletionFailed step, sending an email to the admin of the project domain, telling them some manual action is required to get the user data removed.

Asynchronous Workflow

The order and customer domains contain more user data and have it scattered in many places inside their systems. For these domains, queues are used and people perform manual removals of data. Because of this, a simple synchronous call to their APIs won’t work to request the removal. An asynchronous workflow has been implemented to alleviate this problem.

The DeleteUserData step will send an asynchronous request to the customer and order domain APIs, which will immediately return an answer marking the request as submitted. If the request was submitted successfully, the workflow will start polling their API to check if the user data has been removed. If something went wrong during the submission itself, which could be because of network problems for example, the workflow will immediately go to the UserDeletionFailed step. As long as the CheckUserDeleted step reports that the user hasn’t been deleted yet, it provides this information to the IsUserDeleted step. If our deadline hasn’t expired yet, that step invokes the WaitForUserDeletion step, which waits for a set amount of time before going back to the CheckUserDeleted step.
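Sketched in Node.js, the deadline-aware check that drives this polling loop could look something like this. The subsystem endpoint, the field names and the exact 30-day window are illustrative assumptions, not the project’s actual code:

// Illustrative sketch of a CheckUserDeleted Lambda for an asynchronous
// subsystem (Node.js 18+ runtime for the global fetch).
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

exports.handler = async (event) => {
  const { userId, requestedAt } = event;

  // Poll the subsystem to see whether the user's data is gone yet.
  const response = await fetch(
    `https://order-domain.example.com/users/${encodeURIComponent(userId)}/status`
  );
  const { deleted } = await response.json();

  // The choice states branch on these two flags: keep waiting while the
  // deadline allows it, fail the flow once it has passed.
  const deadlineExpired =
    Date.now() - new Date(requestedAt).getTime() > THIRTY_DAYS_MS;

  return { ...event, deleted, deadlineExpired };
};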
In the successful case shown above, the user was deleted within the allowed amount of time, and the IsUserDeleted step calls the UserDeletedSuccessfully step, ending the workflow successfully.

In the second image, the user could not be deleted within the maximum defined amount of time. The workflow will fail and an email will be sent to the admin of the required domains, urging them to get a move on with deleting the data. In other words, manual intervention is required.

Integrating Lumigo

To monitor the application, find out where the bottlenecks are, where errors happen and so on, we’re going to use Lumigo. Lumigo traces the requests your application receives from source to finish. Integrating Lumigo into your application is very straightforward, and you have a few possibilities here. We’ve chosen the Lambda layer approach, but you can also auto-instrument your function on the Lumigo site or manually wrap your function using the Lumigo tracer. If you want to know more about the different possibilities, you can read more here.

Since we’ve chosen the Lambda layer approach, we need to include the layer in each of our Lambdas; it contains all the monitoring code and is fully abstracted away from your application code.

Layers:
  - arn:aws:lambda:eu-west-1:114300393969:layer:lumigo-node-tracer:108

If you’d like more information about what the lumigo-node-tracer can do for you, have a look at Lumigo’s GitHub page, which contains the source code and extra documentation.

Secondly, you need to provide some environment variables, which are used to instrument the layer.

Environment:
  Variables:
    LUMIGO_TRACER_TOKEN: !Sub '{{resolve:ssm:/${Application}/lumigo/token:${LumigoTokenVersion}}}'
    LUMIGO_ORIGINAL_HANDLER: user-api-lambda.handler
    LUMIGO_STEP_FUNCTION: true

The LUMIGO_TRACER_TOKEN variable contains the API key we receive from Lumigo; in this case it’s stored in the AWS SSM Parameter Store, and when deployed it will be injected from there as an environment variable. The LUMIGO_ORIGINAL_HANDLER variable contains the handler of our Lambda, which is used by the Lumigo layer to trace the function execution. The LUMIGO_STEP_FUNCTION variable simply tells the layer that this Lambda is running inside a step function.

Now that we’ve added Lumigo to our application, we can set up some alerts. We’ll do so by going to the Lumigo site and opening the Alerts configuration tab. The reason you can do this right away is that alerts work out of the box for most use cases. From here we can add new alerts or edit existing ones; the view should be pretty similar in both cases. Let’s open or create an alert. Here we can choose which Alert Type we want and for which function the alert should be configured. We can even choose to get only one alert each hour or day, in case we’d like to limit the notifications we’ll be getting. When that’s done, we still need to set up where our alarms should go; we can simply click on the link in the Delivery Channels field. Here you can choose which integrations you want to enable and configure them. And with that done, your next error will be delivered to you using your preferred channels. For more info on alerts, check the following documentation.

Using Lumigo

We got the set-up out of the way, awesome! Now we can open the Lumigo console and see what we can do with it as a monitoring platform. The best way to see what we can do would be to use our application and see what’s happening.
Let’s say we want to delete a user with the id “delete-me-now”; we simply send a delete request with the id in the path. The request has been sent! Time to open Lumigo, starting with the dashboard.

The dashboard gives us an overview of the number of invocations we have across our landscape in a specific timeframe. We can also see which functions are invoked most, which functions had a cold start, an estimation of the cost, and so on. Out of the box, this is already some powerful functionality, especially if you know that all we did so far was add a layer and some configuration. But we want to know more! We need more specific information about how the request we just sent went. We’ll go to the transactions view for that.

Here we can see which transactions just occurred in our system. This already gives us an idea of how long each transaction ran and whether it succeeded. We’ll click on the first one because we’d like to know more about it. We can now see that our API request saved an item in DynamoDB, triggering a Lambda, which started a Step Function, and we can even see the number of times a given function was invoked. On the right we can also see the logs across all the Lambda functions that ran inside this specific transaction. But we need to know even more: we want to know which response the delete request for the order data received. Let’s click on the API icon. Here we can see the request duration, the response code, the request and response headers, the request and response body, and more. This should be all the information we need to find out why something went south in the system.

Speaking of going south, remember those alerts we talked about earlier? I just got an email and a Slack notification. It seems like there’s an issue in our system. Let’s click on the link and find out what went wrong here. The link takes us to the transaction overview of the erroneous flow. Here we can see which function failed (1), and we can check the logs on the right side to see what went wrong. Clicking on (1) also gives us information about the issue, as shown below.

Now we’re already deep into troubleshooting: this error indicates that the response from the request was undefined. Weird! Without a strong troubleshooting tool like Lumigo, finding the root cause might have been a huge time-consumer. Fortunately, we can easily check the call and all its parameters by clicking the AWS API Gateway icon. Immediately we find the problem: there is no response to this call because this is the wrong API URL! It looks like someone updated the domain name of the external system without checking it properly. Correcting the domain name resolves the issue, and we’re done! Debugging in as little as three clicks! Issue solved!

Conclusion

The GDPR guidelines, and in our case the “right to be forgotten”, can have a tremendous impact on businesses and the data they have about their users. Failing to meet the deletion policy can result in enormous fines from the European Union. By leveraging the power of the cloud and serverless event-driven architectures, you can quickly and cost-effectively deploy a solution which can coordinate the removal of the required data throughout your domains. To make sure the application is working as intended, we integrated Lumigo as a monitoring tool, which worked for us in the background and abstracted all monitoring, logging and alerting code away, allowing us to focus on what’s important: getting that user data out of our systems before the European Union starts to knock on our doors.
Next steps

If you’re still hungry for more reading material, you can check out the following blog posts!
https://medium.com/cloudway/serverless-gdpr-workflow-from-design-to-monitoring-with-lumigo-909c04d990fc
['Jente Peeraer']
2020-12-18 08:30:36.004000+00:00
['Alerting', 'Aws Step Functions', 'AWS', 'Monitoring', 'AWS Lambda']
Machine Learning Algorithms: Surprises at Deployment?
In a recent arXiv paper by a large group of Google researchers (plus two EE/CS professors and a PhD student), called “Underspecification Presents Challenges for Credibility in Modern Machine Learning”, they report discovering a new underlying cause of deployment surprise: “under-specification”, that is, that algorithms that seem equally good at the time of development (i.e. they all give a similar “solution”, and therefore the problem is under-specified) perform dramatically differently during deployment, in terms of performance on subgroups.

Is the above paper’s discovery a new insight? Are deployment surprises a specialty of deep learning? Is “under-specification” a deep learning problem? Is under-specification in fact a problem in prediction?

That predictive algorithms can perform dramatically differently on data subgroups is well known. Simpson’s Paradox is an extreme example, where a correlation between an input and output changes direction when examining subgroups of the data. (In the classic kidney-stone example, a treatment can look better within both the mild and the severe subgroups yet worse overall, simply because it was given mostly to the severe cases.) The larger the number of predictors, the larger the chance of a Simpson’s Paradox. Predictive models are also easily “fooled” when the training dataset includes a minority group that has a different input-output relationship than the rest of the training data. Models are fooled because the metrics used to train and evaluate algorithms give equal weight to each observation (e.g. least squares or maximum likelihood for training; RMSE and accuracy metrics for evaluation).

Illustration of Simpson’s paradox: a positive x-y relationship appears for each group separately, but a negative x-y relationship appears when the groups are combined. (Source: Wikipedia, Creative Commons)

While the abstract of the Google researchers’ paper concludes with a vague sentence that might mislead readers into thinking there’s a technological solution (“Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain”), in several places in the 59-page paper the authors conclude: “This confirms the need to tailor and test models for the clinical settings and population in which they will be deployed.” or “This is compatible with previous findings that inputting medical/domain relational knowledge has led to better out of domain behaviour…, performance… and interpretability… of ML models.”

The paper closes with a proposal to circumvent the need for a context-specific dialogue between the data scientist and the end user by building models that favor “predictors that approximately respect causal structure”. While using causal structure is feasible and useful in some domains, especially in low-dimensional problems, the areas where ML shines are exactly those where causal relationships are difficult to specify. Explanation and prediction both have their merits, and predictive solutions can still be sound and useful even without underlying causal modeling if developers and users collaborate and communicate during the entire loop of design, testing, deployment, and post-deployment feedback.

At their basis, deployment surprises are a failure to understand the limitations of ML, or even statistical, models. They all rely on many human choices: by data scientists, data collectors, data engineers, people on whom data are collected, the end users (e.g. decision makers), and more.
In judicial decision making, there has been a growing number of studies identifying issues related to deployment disasters, triggered by the 2016 ProPublica report on glaring errors of the COMPAS system used in several judicial decision-making contexts. Many of the issues are related to discrepancies between the data used to train the algorithm and the data encountered during deployment, but there are many other context-related issues that surface when we ask “how will the ML solution be used to generate an action?” We can then ask what data the judicial decision maker will use as input to the system and compare that to the input used to train the algorithm (different populations, different definitions of “recidivism”, etc.). We can compare the action that will be triggered (e.g. a parole decision) to the action used to define the output in the training data. These are examples of the critical knowledge a dialogue would uncover.
https://medium.com/swlh/machine-learning-algorithms-surprises-at-deployment-6388b006fc0c
['Galit Shmueli']
2020-12-11 17:35:03.478000+00:00
['Machine Learning', 'Prediction', 'Decision Making', 'Human Behavior', 'Deep Learning']
The British Invented Concentration Camps
Great Britain’s conquest of the New World was a bloody endeavor, with many countries succumbing to British occupation. Africa was no stranger to the English invaders, most notably in northern Africa, where countries like Egypt had British troops stationed for quite some time. The southern regions of Africa were also occupied by Great Britain, with the former Dutch Cape Colony covering the southern tip of the continent. However, not everyone was happy with the English rule. Two countries bordering the Cape Colony were openly hostile to the British. The South African Republic (Transvaal) and the Orange Free State had both fought the British and won in the First Boer War. Eighteen years later, however, the Orange Free State and the South African Republic would cross swords with Great Britain again, due to the discovery of gold.

The Second Boer War

Sketch of the failed raid. Courtesy of Wikipedia.org

Diamonds and gold were discovered in the South African Republic and the Orange Free State, and upon hearing of the discovery, the United Kingdom swiftly moved to take the newly established gold mines. The gold mines were located where present-day Johannesburg stands, the city having been rapidly set up by mainly British prospectors during the gold rush. The hope was that upon the arrival of the English army, the British people living there would rise up and help the soldiers take hold of the gold mines and all surrounding territories; this plan became known as the Jameson Raid. The Transvaal was aware of the British plans, however, and had Boer troops encircle the 600-man English column. After a small skirmish, the British men surrendered, having taken sixty-five casualties.

The English had superior numbers, and while they had taken most of the Transvaal’s and the Orange Free State’s land, the Boer troops would still terrorize the British soldiers in a guerrilla war. The guerrilla war was fierce, and the English needed a way to contain the Boer guerrillas. To solve this problem, the officer in charge, Herbert Kitchener, started a scorched-earth campaign to limit both the supply and the movement of the Boer troops. The British forces systematically burned crops, destroyed farms and homesteads, and even raped Boer women and children as young as ten. The raping of women and children was an atrocity. Cruel treatment in the occupied territory was a common occurrence, with the British troops doing anything they could to cut the Boer supply lines, oftentimes harming civilians.

However, the scorched-earth campaign was not the end of the British atrocities in South Africa. The British created the first-ever concentration camps. These camps were originally set up as refugee camps for civilians forced to flee due to the conflict. However, after Kitchener started the scorched-earth campaign, refugees flocked to the camps in large numbers. In an attempt to curb the Boer troops’ ability to resupply, Kitchener systematically and forcefully rehomed thousands of civilians into the camps, even going so far as to almost completely depopulate some areas.

The conditions inside the concentration camps were horrific. Disease and starvation killed thousands of innocent civilians inside the camps, with the British neglecting those trapped inside them. The lack of food, poor sanitation and overcrowding led to malnutrition and multiple outbreaks of disease; measles and dysentery struck the children worst of all. Around 27,000 women and children died in the British concentration camps.

A malnourished Boer child photographed by Emily Hobhouse.
Courtesy of Wikipedia.org

The summary

The British forces won the war, and on the 31st of May 1902, the British Empire added the territories of the South African Republic and the Orange Free State to the Cape Colony’s land as a result of the Treaty of Vereeniging. The English now controlled the gold mines, with that gold funding the modernization of the South African Cape Colony. However, the war came at a great cost, and not just economically: thousands of innocent human lives were savagely put to an end in the conquest for gold and land. The story of the British concentration camps is rarely told, and oftentimes forgotten, like many other war crimes committed by the first world powers. The sun never sets on the British Empire, and for its expansion, no human life is too dear.
https://medium.com/history-of-yesterday/the-british-invented-concentration-camps-2b7b93ed8b1c
['Louis Lonsdale']
2020-09-08 14:01:02.465000+00:00
['Britain', 'History', 'South Africa', 'Africa', 'England']
The community of Decred is present around the world
As you may know, Decred is a decentralized project: its digital currency is available to everyone, anywhere, at any time. However, the project is not limited to the digital realm only. In fact, the Decred community is expanding its global presence! Here are some pictures from around the world. Enjoy 😎

Mexico
From the 22nd until the 26th of April, many young people visited the Decred booth at Talent Land in Guadalajara:

Morocco
On the 24th and 25th of April, Decred community members were present at the IEEE Blockchain Summit in Rabat:

Australia
On the 30th of April, some Decred community members organized a crypto governance panel in Melbourne:

Brazil
On the 4th and 5th of May, lots of people visited this awesome Decred booth at Bitconf in São Paulo:

Canada
On the 8th of May, several community members were invited to speak at the ThatCrypto studio in Toronto:

United States
From the 9th until the 17th of May, a large number of community members gathered at Blockchain Week in New York:

China
On the 17th and 18th of May, Decred was introduced to a big crowd of crypto enthusiasts in Hangzhou and Chengdu:

Nigeria
On the 24th of May, a team of Decred developers and community members presented the project at a blockchain conference in Lagos:
https://medium.com/decred/the-community-of-decred-is-present-around-the-world-64fcc9113924
[]
2019-05-26 15:45:09.014000+00:00
['Blockchain', 'Cryptocurrency', 'Crypto Community', 'Decred', 'Growth']
AmCham North Luzon: Artificial Intelligence and Robotic Process Automation
Last December 9, 2020, I attended the webinar on “Artificial Intelligence and Robotic Process Automation”, conducted by AmCham Philippines - North Luzon Chapter. Mr. Ebb Hinchcliffe, Executive Director of AmCham Philippines, started off the webinar by welcoming and thanking the attendees for their continuous support of the Chamber’s virtual events. Following this, the featured speaker, Mr. Emmanuel Bonoan, Vice Chairman, Chief Operating Officer and Head of Advisory of KPMG, was introduced by Ms. Maritess Rivera, First Vice President of BDO Unibank.

Mr. Emmanuel Bonoan started his talk on the journey of Artificial Intelligence with the “evolution of systems”. A hundred years ago, Mechanical Systems emerged and became the starting point of what we are experiencing today. As the years passed, Information Systems developed and soared for 35 years, followed by Internet Systems, which reigned for 15 years and were continuously upgraded until the coming of the Cloud and the Internet of Things, which have prevailed for the past 5 years. As digitization arises in this era, technology continues to modernize and upgrade into “cognitive systems”.

Furthermore, there is so-called “Intelligent Automation (IA)”, which consists of the following:

- Artificial Intelligence (AI), which pertains to the capability of a machine to imitate intelligent human behavior.
- Machine Learning (ML), which enables systems to automatically learn and improve.
- Robotic Process Automation (RPA), which enables organizations to configure computer software, or “bots”, to capture and interpret existing applications.

According to Mr. Bonoan, AI and cognitive systems will soon replace human work. As an example, a machine was asked the question “How to be creative?” and the machine wrote the prose presented below:

Moreover, Mr. Bonoan presented an overview of the State of AI Deployment research done in cooperation with KPMG globally.
https://medium.com/the-looking-glass/amcham-north-luzon-artificial-intelligence-and-robotic-process-automation-2889934bcc8a
['John Clements Consultants']
2020-12-17 11:54:44.203000+00:00
['Amcham', 'Robotics', 'Artificial Intelligence', 'Innovation', 'Digitization']
Reducing Carbon Footprint with Kiwi
Customer Profile

Wants
Users who are intrinsically motivated are driven by the desire to have a better environment for themselves and future generations. Others want extrinsic motivation through incentives and support from their family, friends and the community.

Needs
Some users, and those with a busy lifestyle, may need guidance. Those who are aware may also need help due to the unavailability of resources.

Fears
Carbon footprint reduction might seem like an inundating task. People fear that their efforts will be in vain as they cannot see the impact of their actions, and it may seem impractical to change everyday habits.

Substitutes
Currently, there are smartphone apps available for Android and iOS, such as Oroeco, and websites such as CarbonFootprint.com, that can aid individuals with calculating their carbon footprint. People may also get information from their utility services, for example, Hydro One.

Product Features
Kiwi will educate individuals on the carbon footprint concept and include features such as tips on what products to buy, diets to consume, transportation to use and ways to conserve energy. It will motivate users by offering monetary incentives. The app will also include a timeline and show users metrics indicating their impact. Additionally, it will facilitate a community and the ability to share progress on social media.

Benefits
The app will help users reduce their carbon emissions and build awareness. Apathetic individuals will be motivated by monetary rewards to install and use the app and eventually become carbon-conscious. Kiwi will also fulfill people’s desire to mitigate the effects of climate change.

Experience
By accurately completing their carbon reduction activities, people will experience happiness from receiving incentives. They will also feel self-fulfillment from contributing to society, making an impact and caring for the Earth. Moreover, they will feel a sense of community with others who possess common values and goals.

Goals to Achieve
We aim to create awareness of this issue, motivate people to reduce their carbon footprint, and build a community with similar interests. We intend to fight climate change by reducing carbon emissions and motivating people to have a positive impact on the environment. Compared to existing carbon footprint calculators and traditional methods such as desktop websites, Kiwi will help users gain a better understanding of their carbon footprint through personalized visuals and metrics that show their impact on the world. Instead of making drastic changes, our app will use data about our users’ current habits to provide them with suggestions to gradually shift towards an eco-friendly lifestyle. By incorporating gamification techniques in the mobile app, they will be motivated to engage in eco-conscious behaviours. Through Kiwi, we hope to create a community that is conscious of its carbon footprint and the significance of its actions.
https://medium.com/cs449-uwaterloo/reducing-carbon-footprint-with-kiwi-f4570acace0f
['Yovela Murzello']
2020-12-12 17:06:00.055000+00:00
['Mobile Apps', 'Carbon Emissions', 'Carbon Footprint']
What I learned after making a game as a gift for one person
During the holidays, I was inspired to create a game as a gift for a stranger, based on a design principle from a prominent game developer.

Hit mobile game “Crossy Road” creator Matt Hall of Hipster Whale has a very simple philosophy. He believes in choosing his target audience carefully. So carefully, in fact, that in 2015 he designed games for the individual. When Matt was working at the game studio Tantalus and assisting with the development of “Pony Friends” for the Nintendo DS, he focused on a single photo of a rich young woman with a prize horse. “What would make her happy?” he thought, as he instructed everyone in his studio to study the photo and keep it at the forefront of their minds during development.

“When I make a game this way, I don’t have to get bogged down in demographics or store trends. All I have to do is make a game that is everything for someone.” — Matt Hall, Source https://bit.ly/3rCr8r0

Pony Friends went on to spawn a sequel, due in part to Matt’s passion for designing games for one person. Source, https://www.igdb.com/games/pony-friends-2/presskit

“Pony Friends” went on to sell millions of copies and spawn a sequel, “Pony Friends 2”. And we all know how much of a monumental success Crossy Road became back in 2015. Well, it sure seems like he’s onto something!

As an independent game developer, I am prone to burnout and to losing passion and motivation for projects. All professionals, and even hobbyists, can relate to this. So in November, I swore to create a small standalone project to rekindle my motivation.

The Secret Santa Jam hosted by Sheepolution saved me from a major slump. Source, https://itch.io/jam/secret-santa

With this desire in mind, I participated in a game jam (i.e. a time-limited hackathon where developers create playable game prototypes) called “Secret Santa Jam”. The premise was to replicate the traditional secret Santa gift trade, but with games. Each game dev received a letter from another person, outlining their likes, dislikes, favorite games, etc. You are then tasked with creating a game that you think the person would enjoy. You in turn submit a letter and wait with bated breath for your fully personalized game to be delivered. “This concept is genius,” I thought.

My giftee sent me a spirited letter about their love of action role-playing games, customization of playstyle and techniques, and their personal love of sports and weight-lifting. This set my mind in motion.

“Maybe he will enjoy a power system around weight lifting!”

“Including a multitude of skills and making them effective against enemies and the environment is key.”

“Maybe something like a Zelda-style top-down action game with a locked-door mechanic would work best.”

I began coming up with the concept: a warrior mage who must retrieve a treasure from a monster- and trap-laden castle. This helped me constrain the scope to a single environment and purpose. Creating the playable character as both a spear-wielding knight and an adept magician helped me design many playstyles as quickly as possible, and throw in the weight-lifting power-up as an added thematic bonus.

In the end, I made a game heavily inspired by Legend of Zelda’s dungeons. Source, https://itch.io/jam/secret-santa/rate/861322

As ideas flooded my mind, they became clear game mechanics and thematic elements in a flash.
The power of designing for a single person comes from the specificity of their desires, their personality and their proclivities; these all allow you to design a custom-tailored experience. With the scope reduced, I was offered the opportunity to build an experience that is sure to please 100% of the target demographic: my game gift recipient.
https://medium.com/@cjames1/what-i-learned-making-a-game-as-a-christmas-gift-for-one-person-830ed08b3513
['C. James']
2020-12-25 06:29:34.344000+00:00
['User Experience', 'Game Design', 'Unity Game Development', 'Gaming', 'Game Development']
The Compleat Conservationist, part I
What books, journals, apps and websites do naturalists use for go-to guides and inspiration? Here is a handful of books I’ve found useful or inspiring.

Naming Nature, by Carol Kaesuk Yoon (2009, Norton)

In the midst of an accelerating mass extinction, where we are losing species much faster than science can identify them, this is an engrossing look at taxonomy, or how we organize life on earth. Yoon laments our disconnects from nature. A child living among the Indigenous Tzeltal Maya people in Mexico can identify about 100 different plant species, Yoon says. How many American adults can do that? She provides an account of “folk” taxonomies that are binomial, two-word descriptors that predate Carl Linnaeus, the botanical whiz kid from Sweden who published Systema Naturae in 1735. That system laid out the framework that most of us learned, the Linnaean hierarchy: kingdom, phylum, class, order, family, genus, species. (Of course, we use that every time we select an unadulterated native plant for our garden, relying on the scientific name, genus and species, rather than a vague or even misleading common or commercial name. Don’t we?)

Later schools of thought refined how we might arrange living things. Well into the 20th century, “cladists” organized the tree of life around the branches (clades) based on evolutionary relationships. They famously declared that, technically, fish don’t exist. Lungfish, they would explain, are more closely related to cows than they are to salmon. That news would have been nonsensical to Linnaeus, and perhaps blasphemous to Izaak Walton. Walton described his 17th-century meditation on conservation, The Compleat Angler, as a “Discourse of Fish and Fishing.” In any case, the cladists’ fish-dissing was fightin’ words to other taxonomists. With breezy, engaging insights, Yoon chronicles the debates.

The Living Landscape, Rick Darke and Doug Tallamy (2014, Timber Press)

Doug Tallamy’s earlier Bringing Nature Home has informed folks about the importance of native plants for pollinators, wildlife generally and healthy ecosystems. This collaboration with photographer Rick Darke provides coffee-table-book glimpses of what we can do with gardens and parks to restore land to a more natural, critter-friendly state. You’ll find abundant ideas, photos and tables that list native plants, their characteristics and the species that need them.

A New Garden Ethic, Benjamin Vogt (2017, New Society Publishers)

In a similar vein, inspired by Tallamy, Benjamin Vogt makes a plea for more thoughtful gardening in his 2017 book, A New Garden Ethic. He ties his book directly to “The Land Ethic” chapter of Aldo Leopold’s classic Sand County Almanac, in which Leopold writes that a land ethic “changes the role of Homo sapiens from conqueror of the land-community to plain member and citizen of it.” Vogt says we should foster places that are attractive and useful to us, “but that are equally if not more attractive and useful to other species…. We need urban gardens that exuberantly embrace wildness in its complex fullness, not in a watered-down echo that does us little good.”

An Indigenous Peoples’ History of the United States, Roxanne Dunbar-Ortiz (2014, Beacon Press)

To fully understand our lands and waters, we should know the history of North America’s original hunters, anglers, farmers and engineers, who have lived here for thousands of years. The Indigenous people also practiced savvy forest and wildlife management.
Controlled burns opened up forest clearings for attracting game and growing corn, squash and other crops. When Europeans arrived, they entered a continent that already featured a long history of advanced agriculture, trade and stewardship of the land. Some regions had hundreds of miles of roads and irrigation canals. More important, for those of us taught a version of U.S. history focused solely on European settlers, this is a detailed, eye-opening, gut-wrenching account of genocide, abuse and discrimination. The full story of what happened to these Indigenous nations, people and cultures should be woven into history lessons at every school level. Dunbar-Ortiz admits that revealing the history of these nations and cultures is an arduous task. It may take generations to do it justice.

More books and other resources later. Some of the categories missing here are the best guidebooks, and the grand explorations of life and ecology by E.O. Wilson and others.
https://medium.com/@michaelreinemer/the-compleat-conservationist-part-i-1bcf09576f0c
['Michael Reinemer']
2020-12-26 18:58:51.767000+00:00
['Indigenous People', 'Environmental Issues', 'Conservation', 'Taxonomy', 'Botany']
Extending Cloudify With Custom Intrinsics
Photo by Tekton on Unsplash

Cloudify is highly extensible via its plugin-based architecture. The Cloudify blueprint parser, however, is not extensible. Most of the time, that’s not a problem: you can add functionality by writing custom plugins and workflows. Sometimes, however, extending the blueprint syntax is ideal.

Intrinsic functions in Cloudify, a concept which flows from the OASIS TOSCA specification, are YAML maps with special reserved keys. For example, the get_input intrinsic function causes a blueprint input’s value to be inserted wherever it is called, and is typically expressed like:

{ get_input: some_input }

get_input and other intrinsics also support referencing nested keys in the referenced target. Such nested references are expressed like this:

{ get_input: [ some_input, level1, level2 ] }

which would fetch someval from an input structure like the one sketched below.
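A minimal illustration (wrapping the value in a default here is our own shorthand; the value could equally be supplied at deployment time):

inputs:
  some_input:
    default:
      level1:
        level2: someval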
https://medium.com/swlh/extending-cloudify-with-custom-intrinsics-429313da4597
['Dewayne Filppi']
2020-04-12 01:26:46.165000+00:00
['Tosca', 'DevOps', 'Orchestration', 'Programming', 'Cloud']
Most great writing books are memoirs too.
Stephen King’s On Writing is a memoir disguised as a how-to book. Anne Lamott’s Bird by Bird, ditto. The story behind the title is priceless. Hemingway’s A Moveable Feast is a pure memoir, but one filled with insights from his most fertile years as a writer, written thirty years later. Elizabeth Gilbert’s Big Magic is, on the surface, a self-help book about creativity, fear, and perseverance that combines her life experiences as a writer with motivational bits for those times when you’re staring at the screen with no ideas. They should all be on your bookshelf, along with instructional classics like The Elements of Style by Strunk and White (yes, White is the guy who wrote Charlotte’s Web).
https://medium.com/@martinedic/most-great-writing-books-are-memoirs-too-8973bfe94ad2
[]
2020-12-09 21:28:38.178000+00:00
['Guides And Tutorials', 'Memoirs And Histories', 'Creativity', 'Writing', 'Motivation']
How to create a free website in 2 hours
After you create an account, you only need to do the following steps:

- Fill in your personal information
- Customize your website
- Connect to a payment provider
- Add your product
- (Connect to a domain)

After that, your website is ready to go! I suppose you won’t need my help with filling in your personal information, so I will guide you through the other steps. The first step will be customizing your website. You will start off by entering a name for your webshop.

Webshop name

After that, you will get an overview of all your pages. The home page will be generated automatically.

Dashboard

If you wish to edit your page, you have to select the Edit Page button. This will show you a styler option.

Style option

I suggest you start by selecting a few colors to style your webpage.

Palettes

In the style menu, there is an option to create a palette. The colors selected in your palette will be applied throughout the rest of your website. I chose the following colors:

Palette

This will change my website to the following style:

Home Page

After that, you edit the page to your liking. It’s a drag-and-drop mechanism, and I have never seen an easier way to edit a template. The options, however, are limited; but if your idea is great, customers will not complain about a “decent” website.
https://medium.com/@dundypura/how-to-create-a-free-website-in-2-hours-976adbe283f7
['Dundy Pura']
2020-12-23 17:11:46.044000+00:00
['Create Online Store', 'MailChimp', 'How To Create A Website', 'Website', 'Webshop']
Forthcoming Book “America’s Covert Border War: The Untold Story of the Nation’s Battle to Prevent Jihadist Infiltration”
Now Available for Pre-Order, February 2, 2021 release date

The forthcoming book by journalist-turned-intelligence analyst Todd Bensman, AMERICA’S COVERT BORDER WAR, is now available for pre-order on Amazon and Barnes & Noble. This 13-year work of journalism reveals, for the first time, secret (ongoing) border security operations that America built after 9/11 to prevent terrorist infiltration, mainly over the southern border with Mexico. The book reveals how these long-lived covert programs on the southern frontier and throughout Latin America have apprehended multitudes of dangerous jihadists and so far prevented attack, but also why they’re suffering the effects of denial-related neglect at a time of escalating risk.

Publisher: Bombardier Books (February 2, 2021)
Length: 288 pages
ISBN13: 9781642937251

ORDER FROM AMAZON HERE https://www.amazon.com/exec/obidos/ASIN/1642937258?tag=simonsayscom

For Bulk Order Purchase Discounts, Order Here

ABOUT THE BOOK

Americans concerned by unchecked global migration, porous borders, and national security will be surprised to learn that thousands of migrants from the Islamic world breach the U.S.-Mexican border each year, despite widespread media insistence that these long-haul travelers are imagined, as are the hardened jihadists caught among them each year. This is an intelligence world insider’s untold story of the ambitious and intrigue-laden covert American counterterrorism programs built after 9/11 from the U.S. border to the tip of South America. These programs were created to protect Americans from a supposedly notional infiltration threat that has been killing and wounding thousands in Europe in recent years. The surprising conclusion: the American effort has prevented attack on the homeland so far, shielding an unknowing nation from the migrant-jihadist bloodshed Europe has suffered as a result of organized terrorist border infiltration that began in 2015 and has continued since. But how much longer can these programs keep America safe from the same threat without the public recognition that they exist, and the care and attention that they deserve?

This geographically sprawling counterterrorism enterprise, the last unrevealed one from the 9/11 era, is suffering from denialism and neglect at America’s peril… just as Europe was before its 2015-to-present calamity with mass migration from the Islamic world. It has not kept pace with an evolved human proliferation threat tied to global migration flows of volumes not seen since World War II. But this book is much more than revelation and complaint; it provides solutions for the nation’s leaders to better protect America from this unusual border threat.

ABOUT THE AUTHOR

Todd Bensman is an award-winning newspaper reporter and magazine writer who transitioned to a career as a national security intelligence professional for the Texas Department of Public Safety and then returned to the journalism trade. He currently serves as the Texas-based Senior National Security Fellow for the Center for Immigration Studies (CIS), a Washington, D.C. policy institute for which he writes, lectures, and grants media interviews about the nexus between immigration and national security. He has testified before Congress as an expert witness and regularly appears on radio and television outlets for his national security and border security expertise.
Separately, he reports on international and domestic terrorism matters for major online news sites and teaches terrorism and journalism as an adjunct lecturer for Texas State University in San Marcos, Texas. Bensman was born in Houston, Texas and raised in Phoenix, Arizona, before moving to Alaska to work as a reporter and then back to settle in his native Texas after reporting on wars, rebellions, and the strangeness of life in more than 30 countries. He is the recipient of two National Press Club awards for his foreign reporting, an Inter-American Press Association Award, and two Texas Institute of Letters awards, among many others. In line with his hybridized journalism-intelligence career, Bensman holds a master’s degree in journalism from the University of Missouri and a master’s degree in security studies from the Naval Postgraduate School’s Center for Homeland Defense and Security.

WHAT THEY’RE SAYING

“Todd Bensman is one of the last remaining authentic counter terrorism experts and investigators on jihadist terrorism whose new book will mesmerize, shock, inform and totally enlighten you on the covert war against Islamist militants fought by the government inside the US as well as around the world. Having worked as both a national security investigative reporter as well as a top level counter terrorism intelligence analyst for the government, Bensman is in a unique position to reveal a fascinating story spanning the globe over almost two decades that will interest all readers. I must say that in today’s world of totally politicized media and the poisonous influence of political correctness in corrupting the objective truth about jihadist terrorism, Bensman rises above all others in revealing what is really going on and what has gone on. And even though I have specialized in terrorism for more than 25 years, I have learned more from reading the work of Todd Bensman than any other counter terrorist official or "expert" in the world bar none.”
- Steve Emerson, Executive Director of the Investigative Project on Terrorism, author of seven books on violent Islamic extremism

“Amidst multiple international crises, most of the media have lost sight of the jihadist threat to America. Todd Bensman is a welcome exception. His investigative reporting on continuing terrorist attempts to infiltrate the United States deserves wide attention from specialists and the general public alike.”
- Clifford D. May, founder and president, Foundation for Defense of Democracies

“Progressives on the cultural and political left harbor a lot of dangerous and seditious ideas about illegal immigration. Chief among them is that lax border enforcement and massive waves of third world migrants are not an invitation to terrorist infiltration. But Todd Bensman’s new book, America’s Covert Border War, is even more dangerous and seditious because it speaks plain powerful truth about the very real problem of jihadist infiltration, backed up by the kind of deep reporting that will challenge the assumptions of even the most willfully ignorant, glib or smug. The book is a well-grounded warning from a longtime veteran of the War on Terror as well as a bold rebuke to the media’s unfortunate culture of denial, avoidance and resistance.”
- William McGowan, best-selling author of Coloring The News; Gray Lady Down

“Journalist and former counterterrorism intelligence manager Todd Bensman shines a spotlight into a complex issue that few truly understand.
Trust me when I say this, Bensman has been in the trenches and behind the scenes to safeguard Texas. He deserves our thanks for this.” - Fred Burton, former Diplomatic Security Service Special Agent and New York Times best-selling author of Ghost, Beirut Lives, and Chasing Shadows “Todd Bensman provides a deeply disturbing national security perspective to failed immigration controls from Panama through Mexico to the southern borders of Mexico to which we were blind. Bensman now examines the threat of lone jihadists and organized groups moving through those countries to the United States. He knows this world from his reporting on the ground in Central America and Mexico and as a government intelligence worker. With this book, we are far less blind.” - Dr. Michael Lauderdale, University of Texas at Austin, Clara Pope Willoughby Centennial Professor
https://medium.com/the-right-side-of-history-of-national-security/available-for-pre-order-todd-bensmans-forthcoming-first-book-america-s-covert-border-war-the-90a5e38d29e2
['Todd Bensman']
2020-12-04 14:17:30.623000+00:00
['Mexico', 'Immigration', 'Latin America', 'Borders', 'Terrorism']
Decentralized category intro: NEAR and IPFS
I know this comes out quite late, as we’re close to the end of the competition already, but better late than never, right? There’s still some time left if you’d like to try adding the decentralized ingredient to your compo entry. This should give you a good intro on how to start with both NEAR and IPFS if you haven’t researched them already, as I know this was a bit of a struggle for those who are unfamiliar with the tech. After reading the contents below you should be able to explore the topic further on your own.

NEAR

The NEAR Protocol offers a JavaScript API. You can link to the file itself from your entry’s index.html file to use it (yes, in the Decentralized category external resources are allowed) - it could be either any of the CDN instances:

Or our hosted version, up to you:

After getting the file loaded, the nearApi object in the window scope will already be available for us. The first thing to do with the NEAR API is to connect to the blockchain, which looks something like this (the standard testnet endpoints are assumed here):

const near = await nearApi.connect({
  networkId: 'testnet',
  keyStore: new nearApi.keyStores.BrowserLocalStorageKeyStore(),
  nodeUrl: 'https://rpc.testnet.near.org',
  walletUrl: 'https://wallet.testnet.near.org',
  helperUrl: 'https://helper.testnet.near.org'
});

In this example we’re connecting to a testnet (a network where doing stuff is free). After establishing a successful connection, we can proceed with getting a user’s wallet containing their info:

const walletConnection = new nearApi.WalletConnection(near, 'triska');
if (walletConnection.isSignedIn()) {
  let account = walletConnection.account();
}

If a user is already signed in, we will get their details in the account object: a username, balance, etc. From now on you can proceed with your own logic: for example, create smart contracts that will enable you to save highscores or user-generated levels on the blockchain. You can implement global highscores, NFT prizes, or even old-school arcade machines using coins.

Here are the resources you can follow:
- Near 101
- How to use NEAR in a game
- How to build a Play To Earn game on Near
- Examples and Docs
- NEAR Lands game
- BerryClub game

This should give you a good overview of what’s possible.

IPFS

While NEAR has a single JavaScript API, Protocol Labs offers a whole lot of tools and technologies for you to use. The most known are:
- IPFS - peer-to-peer hypermedia protocol
- Filecoin - decentralized storage network
- libp2p - modular networking stack
- Drand - distributed randomness beacon
- IPLD - decentralized data structure
- Web3 Storage and NFT Storage

And much more. As long as you use any of their tech - like hosting a game on IPFS, implementing FIL coins, communicating using libp2p or adding randomness with Drand - you will be eligible to join the Protocol Labs challenge in the Decentralized category. All needed code and libraries can be referenced and linked from your entry the same way we did with near-api.js. The most creative implementations will get the most points and the highest chance of winning the challenge.

Make sure to start with this list, where you’ll find intros to IPFS, Filecoin, and more. Notable articles to check first:
- JavaScript IPFS
- Using IPFS in gaming
- Building peer-to-peer games on IPFS - a look at Interplanetary Tag
- Using IPFS distributed file storage for game asset metadata

The easiest thing to do would be to publish your game on IPFS through a service like Pinata, but there are so many other things you could do - see how others implemented decentralized features in the Gamedev.js Jam that happened a few months ago.
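To give you a flavour of the API itself, here’s a minimal js-ipfs sketch for storing a piece of game data from the browser (assuming the ipfs-core package; pinning services like Pinata expose their own HTTP APIs instead):

import { create } from 'ipfs-core';

// Spin up an in-browser IPFS node and add some game data to the network.
const ipfs = await create();
const { cid } = await ipfs.add(JSON.stringify({ level: 'my-custom-level' }));
console.log(`stored at ipfs://${cid}`);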
Examples from Gamedev.js Jam

Both NEAR and IPFS were implemented in some of the entries submitted to the Decentralized category of the Gamedev.js Jam 2021 in April. Notable examples include:

- One game is decentralized, offering the option to log in with your NEAR Protocol wallet, and is hosted on Protocol Labs’s IPFS.
- Another is a multi-user virtual world in which players compete to become owners of the pixel art canvas (which is an actual NFT).
- A third is more of a proof-of-concept for a project where players can create their own levels, publish them using NEAR and IPFS, and play others’ levels.

The full list can be found here. There’s also the collection of games submitted to the ETH Global hackathon, which you can look into as well - each and every one has its source code available on GitHub.

Next steps

I know this might be difficult to grasp at first, but we had a similar situation when the Web Monetization API was introduced a few years ago, and now it feels like adding extra features for Coil-enabled users is a rather simple task for most of you. I do hope this will evolve in a similar way and we’ll be getting more and more cool and brilliantly creative entries utilizing decentralized technologies in the future.

Due to unforeseen circumstances I wasn’t able to deliver a complete tutorial myself, but I hope the materials above will be enough for you to at least dip your toes into the topic. I do promise I’ll be exploring this further, and you can expect more content coming from me in the next weeks. Don’t hesitate to join our Slack - folks are more than happy to answer all your questions and help you with any issues you might have with your entry!
https://medium.com/js13kgames/decentralized-category-intro-near-and-ipfs-7857e5f9631d
['Andrzej Mazur']
2021-09-07 14:37:45.427000+00:00
['Decentralized', 'Introduction', 'Ipfs', 'Near', 'Js13k']
It’s Our Fault Christmas Is Unaffordable Again
A poem about how we are to blame for yet another money-tight Christmas.

“Marley’s Ghost” — original illustration from A Christmas Carol.

’Twas the night before Christmas, when all through the rental
not a bill was paid early, not even dental.
Although the stock market was booming, so how could that be?
I guess I spent too much time watching TV.
What I should have been doing is making speculative investments
in businesses with specifically low-grade assessments.
For that is the labour that our society needs,
the kind of innovation that only Capitalism breeds.
Who cares about the teachers, artists or physicians?
We need more ways to price gouge patients with pre-existing conditions!
We live in a meritocracy, where only the most talented prevail —
a warehouse worker is replaceable, so they deserve to fail.
Sure, increasing corporate profits and worker productivity are related,
but that doesn’t mean the workers should be the ones compensated.
Wages have been stagnant since the 70’s for a very good reason,
and it doesn’t matter if the affordability crisis intensifies with every season.
At the end of the day, our job creators deserve a break
for all the opportunities they provide and the risks that they take.
So stop the complaining and just celebrate your Christmas in debt,
you haven’t even been replaced by AI… well, yet.
https://medium.com/@conleywrites/its-our-fault-christmas-is-unaffordable-again-25680dac41b8
[]
2020-12-08 05:45:05.780000+00:00
['Socialism', 'Inequality', 'Christmas', 'Poetry', 'Capitalism']
The beginning of Fight Ring NFTs
Fight Ring NFTs is not just a collection of anime avatars. It's a set of V1 collectible avatars for 31 heroes from different anime. Your card to enter the metaverse: it allows you to be among the first players in the trial version of Fight Ring, the first P2P virtual-world combat game, with your favorite character from your favorite anime project, living on the Ethereum blockchain. TNF — Verified custom smart contract. TNC — Verified custom smart contract. In association with the largest game companies and big studios. LET'S GO! Ready to start this adventure together? How can you see the skills of a hero? All the heroes are strong and unique, but some are rarer than others. Simply put the name of the hero into YouTube and see their powers, or watch their anime ^^. Choose your hero on the OpenSea website and take it easy: they are all super cool! Fixed price 0.15 ETH. WHAT ARE V1 AND V2, AND WHAT IS THE DIFFERENCE? V1 is about 31 NFT heroes who have their own powers and are not cloneable. V2 is about 500 NFT heroes who have their own powers like V1, but they will be cloneable. You can simply keep your collectible in your wallet for collection and trading. But if you hold your V1 NFT hero, you will be able to access the game with the developers and get the advantages of first players in the beta version. - Keeping your V1 Fight Ring NFTs means keeping your favorite hero, whether in the beta version or the official version. It is like your identity in the game: it will never have a copy, a powerful hero companion that you always carry with you. - You will soon be able to challenge the other companions you will find in the game.
https://medium.com/@fightring/the-beginning-of-fight-ring-nfts-815ee76e1d64
['Fight Ring']
2022-01-05 12:18:05.494000+00:00
['Gaming', 'Metaverse', 'Anime', 'Nft Marketplace', 'Nft']
“Role of Grandmother” is an emerging and recurring theme for me these days and it’s playing out via…
“Role of Grandmother” is an emerging and recurring theme for me these days and it’s playing out via family, friend and client sources. There’s so much we want for our grandchildren, and that’s okay in the human community. It’s a nice thought to want a better start, a great birth, and a wonderful life for generations of the future. It’s great that we do what we can to make the best happen — on condition that we undertake not to be disappointed (I speak as though to myself here) if things turn out differently. What if we, the professionals who meet with expectant mums, listened out for the possibility that it is the grandmother who needs support to resolve her unresolved tears and fears, in preparation for this incoming child? That said, it’s super of us to work with a mum in advance of her baby’s birth, supporting her intention to have the best birth possible and be able to respond to the unexpected(!) if it happens. And also, on hearing a cry for help from the grandmother regarding her birth(s), encourage her to seek birth healing, which takes many forms such as Matrix Birth Reimprinting, Emotion Code, and other sensory healing modalities. Common themes that arise relate to conception context, social pressures, and the relationship with the dad, as well as tales of pressure to give a baby up for adoption: heart pain that never goes away but that can be addressed very effectively even decades later. PS Some of you will remember the Troubles… they, the children who lived through them, are the grandparents of today and, if still affected, deserve peace. Do this, and in time to come, those affected by the covid restrictions of the last 16 months will more easily be recognised as holding on to memories of loss, whatever they may be.
https://medium.com/@julie-annemullan/role-of-grandmother-is-an-emerging-and-recurring-theme-for-me-these-days-and-its-playing-out-via-b4669fb29545
['Julie-Anne Mullan']
2021-06-08 10:25:33.358000+00:00
['Adoption', 'Grandmother', 'Birth Trauma', 'Pregnancy', 'Energy Healing']
Paintbrushes & Microscopes: From Harvard to Creative Entrepreneur [Interview]
Advice to early-stage creatives: “Look at the story of the entrepreneur, more than the story of the traditional artist”

While not all entrepreneurs are artists, all artists should be entrepreneurs, according to Evelisa Natasha Genova. For the Harvard graduate, painter and podcast host, the business of art is one she approaches with the same grit and hustle as a startup founder launching their next culture-shifting idea. First picking up a paintbrush at a very young age, she never imagined a career as a painter. “I was always drawing, but I also had a microscope and I loved science,” she tells CRY, “My inner child happens to love science and is connected to art. I am intrinsically motivated to keep that with me to this day. I didn’t think of myself as an artist. I actually wanted to be a paleontologist.” Today, her visual art career has provided her with the opportunity to travel the world working with clients from Mumbai to Los Angeles and host her Stories of Life & Live podcast, interviewing notable figures in art and business. Here she shares why establishing a thriving career in art is not just about inspiration and idea creation, but about adopting an entrepreneurial mindset.

Discipline is where creativity unfolds

“I am not a moody artist. I don’t base creation on my mood, feeling or inspiration. I know that can sound challenging because there’s a bit of a story or myth about ‘the artist’ but I actually resisted that from a very young age. I always valued rigor, hard work and challenge; so even to this day, I’ve built up a business model around my art. I am painting because I commit to it. It’s through that commitment to the work that all of the creativity unfolds. But my decision-making and my time management is not creative, it’s very structured.”

Don’t put yourself in a box.

“I worked for a number of years in government relations, negotiations and political advocacy. In that world, I really minimized or just did not want to have art as part of how I showed up in that world. It was sort of a secret part of life that I had. You know, I was moonlighting as an artist. This is a bigger reflection of my own spiritual journey, but I think I judged and internalized my heart centre and leading with my heart. My heart happens to be creativity. My heart happens to be about working with young people and children. I judged it all because of my own internalized perspectives; I struggled with that. I was trying to force myself to be something that I am not. I never wanted to be put into a box — I never liked the negative associations that came with art, so I somehow thought twice about showing up in the world saying that I am a painter. I cared what people thought. I would think to myself, “I’m much more hardworking and intellectual than that, so please don’t think of me that way.” But now I just think, we are multifaceted human beings. I don’t feel the anxiety of being put into a box anymore because my life and my efforts reflect my values of being hardworking and ambitious.”

Take on the mindset of entrepreneurs.

“Look at the story of entrepreneurs, more than the story of the traditional artist. I think the story of the entrepreneur and the strategies that entrepreneurs use will be more helpful, more grounded and strategic than anyone else who’s following their passion. Even if their passion happens to be creative pursuits. In fact, you will hear entrepreneurs being so passionate and successful about things like selling socks online.
You just need to have that entrepreneurial mindset and let go of any false notions of the artist mindset. Which, by the way, is only recent history — from the Romantic period onward, there was this myth of the tortured artist soul. But historically, speaking about the Western world, artists were an important part of the economic fabric of society and it would be considered as viable a position as anything else. That’s just reflective of the time. So I think if you look at the story of the entrepreneur, the challenges and resources that entrepreneurs use, then that can be super helpful. I also recently interviewed on my podcast this venture capitalist, Patrick J. McGinnis, who is also the author of The 10% Entrepreneur. I asked him to give some specific advice to creatives. He’s in the venture capital world and he coined the expression FOMO — he’s literally the originator of it. He had some amazing advice on how to have a bridge job or be working and be 10% entrepreneurial and slowly transition to 100%. In our interview, he gave really great advice to creatives and artists.”

Keep your day job (for as long as possible)

“Being a former employee allowed me to have no pressure on my skill set, my passion and my talent. I think that’s really important because it allowed the growth to come very naturally. I didn’t have an agenda. I did not set out to become a painter per se, so I actually do put a lot of value in being self-reliant (especially as a woman in our society). Be self-reliant, have a good sense of money management, be an employee and let your talent grow. Take risks that your life doesn’t depend on; initially anyway. From there, it organically grew, and I was able to learn what people really desired from me in my strengths. Then I used a lot of entrepreneurial resources. There’s a lot of transferable stuff from that — developing a business plan, creating a strategy. It’s kind of dry but for me that works really well. That’s the kind of thinker that I am.”

Ask yourself the tough questions

“I would say, in all areas of life, if impact is something that you really care about — and that is something that I do care about and value in whatever pursuit I do — then constantly evaluate yourself. I would ask myself, “what are my strengths?”, “what are my weaknesses?” and “how can I serve the needs of this world?”. I think for a long time I was trying to work in areas where I would have impact and leadership but it was a mismatch with my skill set because it was more in an environment that required me to be super technical, rigid or in a form-filling type of environment. When you ask yourself these questions, you can align your values and needs really well, which can help you to better choose a bridge job or even understand how to better market your products or service in alignment with your natural skills and strengths. I think it just allows you to be more in the flow and that’s what I would do.”

CRY

For more of Evelisa, follow her on Instagram or visit her website. Starting July 11, Evelisa will launch a new course on “Art and Science” for grades 4–6. Click here to register.
https://medium.com/cry-mag/paintbrushes-microscopes-from-harvard-to-creative-entrepreneur-e50180128f41
['Safia Bartholomew']
2020-08-10 22:22:19.916000+00:00
['Creativity', 'Education', 'Harvard', 'Art', 'Painting']
What can you offer as a manager to Software Engineers that will last?
Career Development Plans as the most powerful tool to help Software Engineers grow

Software Engineers are in high demand in today's market, but only highly qualified profiles can really choose the company they work for. We are more and more in a technology-oriented world, which results in a healthy marketplace for all tech-related positions, but, at the same time, there are huge differences between the professionals who learn constantly to stay up to date, so they stand out from their peers, and the ones who don't invest much in building their career. In general, as Software Engineers, it is easy to find a company to work for with relatively good conditions and stability. That's for sure a great thing, but it has some drawbacks: one is that perhaps you are not investing enough in your development and you are a bit stuck doing the same things for quite a long time. So you are probably not moving out of your comfort zone, and this is something to avoid if you want to build a remarkable career. On the other side, as a company, there is a fierce market out there that can offer higher salaries and outstanding conditions. So what is the key for people deciding whether to stay in the company or to move to another one? In my opinion, after managing people for +14y and now, as VP Engineering at Ontruck, the most valuable thing you can offer as a company to retain talent is a challenging, consistent and trustworthy Career Development Plan. For sure, salaries and conditions are important, and working at a reputable company with good leaders and a profitable business plan is highly desirable, but, at an individual level and for the long term, what will make the difference is the ability to build a solid career. At Ontruck we have been working for more than a year on building exactly this; inspired by companies such as Spotify, Google, CircleCI and others, we built our own model based on two pillars:
- Technical Ladder
- Career Development Plan Process (CDP Process)

Technical Ladder

The Technical Ladder is about defining the steps to reach the goals, by identifying the expectations to cover in a consistent and challenging way. It's not about executing tasks or actions, but about ensuring you are able to fulfill these expectations as settled behavior. Currently, we have these levels at Ontruck:
L1|Junior SWE
L2|Software Engineer (SWE)
L3|Senior SWE
L3.5|Senior Staff SWE
L4|Principal SWE and Engineering Lead
L5|Head SWE
L6|VP Engineering
L7|CTO
This is just the first step. Then, you need to think about a few things when building the Tech Ladder: What does it mean to be more senior? How can people move to the next level? How many paths do we want to have? How are we going to group and correlate the expectations to measure? What is the lowest level where people can stay forever? In our case, being more senior is about being able to have more impact and influence on people, initiatives and projects in the company. It's not only about mastering more languages, technologies or tools, or even being more of an expert in some areas. So, for example, for the Software Engineer and Senior SWE levels the main scope is the team, but for Senior Staff SWE the main focus is cross-team.
These are the areas of influence for our levels:
- Individual, focuses mainly on personal development: Junior SWE
- Team, the impact is on the team itself: Software Engineer (SWE) and Senior SWE
- Cross-team, influence spans beyond the team itself: Senior Staff SWE, Principal SWE and Engineering Lead, who also has a big responsibility and impact on the team in charge
- Department & Company, more strategic and broad impact: Head SWE, VP Engineering and CTO
Related to possible paths, we have the same trail until reaching the Senior Staff SWE level. At this point people can move to a more technical role as Principal, or a more managerial one as Engineering Lead. We use dimensions to group expectations. Currently, we have Mastery, Team Success, Deliver Value and Accountability as the areas we value when measuring success. As for the level where people can stay forever: for us, everyone can stay as Senior Software Engineer as long as their development is good. Once the general strategy for the Technical Ladder is decided, the hard work starts: identifying the expectations for each level. I warn you, this will take a lot of time to identify, review and refine everything until you have a good-enough first version to launch. You can see below a sample of the expectations for the Senior Staff SWE level:

Career Development Plan Process

Once we have a good-enough Technical Ladder (don't try to have it perfect at the beginning), we can start building our process. We want to ensure everyone has an individual plan that allows them to progress in the company, by identifying areas to improve and defining concrete actions to reach their goals. The direct report is the owner of the CDP, so they should proactively lead the process in terms of pace and the effort needed to accomplish it. The manager is just the facilitator who can support and guide them to make it possible. We need to define a structured and efficient process, useful for everyone, that is well communicated and understood. It's also important to train the leaders to execute it well and homogeneously. At the beginning, we learned by trying, while keeping a common ground for the first iteration of the process. We left it open to the managers how exactly to execute it and how to manage the details, such as the template to use. Once the initiative was mature, after more than 15 CDPs done, we defined a more formal process, incorporating the learnings and unifying the details. A lesson here is that the CDP is a great tool, but don't underestimate the time and effort needed to have a viable process to launch. We, as managers, need to invest a lot to make it right; the side effect of not doing so is a lack of trust and confidence from the people. Next are a couple of things to have in mind when defining the CDP Process: Who is eligible to access the program? What are the implications between this process and HR promotion & salary raise policies? At Ontruck everyone is eligible if they have been in the company for at least 6 months and have good performance. Related to HR (we call it the People Team), we completely tied our process to the People Team's policies, meaning that fulfilling a level in our ladder is officially recognized, in sync with the promotion and salary raise policy.
Once the main concepts are clear, we need to define the steps of the process; for us these are:
- Assessment: evaluate the status (Not Accomplished, In Progress, Accomplished) of the expectations for the SWE's level, to consolidate a level or to reach the next one
- Career Definition Plan: from the expectations in progress or not accomplished, choose which ones we want to work on and define Goals, Measures of Success and Action Steps for each of them
- Follow-ups: formal and periodic check-ins to ensure the progress is good enough and to remove impediments
- Closure: most of the expectations are covered and the level is fulfilled; close the CDP and decide the next steps
The process starts when the manager and the direct report do the assessment asynchronously and blindly (without looking at each other's ratings). Once it's done, the manager cross-checks the two evaluations, decides the final rating for each expectation and arranges a meeting to discuss and identify the expectations to work on. After that, the manager works on a solid proposal as a starting point for discussion, mainly about the Goals and the Measures of Success, and schedules a second meeting to present and improve the proposal, enriching it with the direct report's feedback. This can be iterated as many times as needed. Once there is a good proposal that both are comfortable with, there will be a CDP kickoff to officially launch the plan. From this point, recurring follow-ups will happen (for us, once a month).

Ontruck CDP Process Timeframe

Currently, we also have templates for both the Assessment and the CDP definition, and a repository to store the people's instances.

Assessment Template
CDP Definition Template

Some Final Tips
- Manager preparation and investment is a must; you should invest enough in the preparation, thinking, execution and follow-up to get a great and useful CDP, but also to engage the person and show that you are taking it as seriously as it deserves
- Technical Ladder and CDP Process definition takes a lot of time and you shouldn't take a shortcut; think carefully in advance whether you want to go with a proper CDP process or whether you want to do something lighter (that is also fine!)
- Don't try to create the whole Technical Ladder before starting to create CDPs for the people. Under a lean approach, you can start by defining the lowest level you need to do the CDPs for the people at that level and, afterward, continue building the Technical Ladder bottom-up until all the levels are defined
- Fulfilling a level is not about covering 100% of the expectations; instead, define a more realistic threshold, which for us at Ontruck is 80%
- Although the CDP definition is collaborative work between manager and direct report, we encourage the managers to make an initial proposal as a starting point for discussion; the main benefit is a more efficient process because we don't start from scratch, but the managers also show their commitment and investment in the process
- Follow-ups are key to success in the end; the manager should ask the right questions to know if the CDP is healthy or whether some adjustment or facilitation is needed.
A great plan can fail just because of the lack of this tracking.
- Set up specific meetings for the follow-ups instead of reusing regular 1:1s or multi-purpose meetings; this will help to focus on the CDP, avoiding talk about the day-to-day or unrelated things
- Set a high bar; we want a process that is rigorous, challenging but fair for everyone, so don't be lenient or lazy when evaluating; on the contrary, be demanding and honest
- Recognize success: when someone reaches a level, it is time to celebrate, as something hard to accomplish. Do it in a transparent and positive way to inspire others to do the same

Conclusion

Investing in people's career development is a win-win situation for the company and the employees, but we need to approach it in a solid, trustworthy and transparent way, to ensure people believe it's a good investment in their personal development, while the company also gets the benefit of more engaged and competent people. For the company, this needs to be the #1 retention initiative, ensuring a clear path for everyone to develop their career and progress within the company. As a highly valuable side effect, having several CDPs in place will also create a perfect breeding ground for launching powerful projects and initiatives that, in the end, will make a better company. For Software Engineers, tech people or employees in general, this is usually the thing they demand most, because it gives them a solid opportunity to invest in their career and a clear path to fulfill their ambitions, rather than shorter-term benefits such as salary conditions. In conclusion, creating Career Development Plans is a great strategic tool for managers to ensure trained, engaged and committed people, fully aligned with what matters most to them, using a win-win approach.
https://nerds.ontruck.com/what-can-you-offer-as-a-manager-to-software-engineers-that-will-last-8595004b2b7f
['Iván Hernández']
2020-12-10 10:09:55.732000+00:00
['Engineering Mangement', 'Ontruck', 'Career Development', 'Software Engineering', 'Management']
HyperLogLog in Google BigQuery
Counting and reporting uniques is always a challenge, as it usually requires a full scan of the dataset to count the number of distinct values we have. On small datasets it's fine, but when dealing with larger volumes, it quickly becomes a performance and resource issue. We recently ran into that problem when trying to measure the number of unique users reached by Unsplash images.

Photo by Joanna Kosinska on Unsplash

Uniques can't be aggregated

The uniqueness of a value also depends on the time range used, and ranged counts can't be aggregated. If you have 2M distinct identifiers per day for a week, it doesn't mean that you have 14M distinct identifiers for that week. Some of these identifiers will appear on different days, making the true weekly count lower than the sum of the daily counts. In a time-partitioned table, the full scan is a problem because whenever you need the unique count over a different time range, you need to scan the entire time range. Example for a daily partition:
1 day = 1 table = 1 scan to count uniques
1 week = 7 tables = 7 scans
1 month = 31 tables = 31 scans
etc…
Table scans are both slow and expensive, so we want to avoid them as often as possible.

What is HyperLogLog (HLL)?

I won't describe the algorithm itself; you can probably find a better explanation over here. What's important for us is that it allows us to calculate a close estimate of the number of unique values in a set of values. It still requires a full table scan because you need to input all the values for the algorithm to work, of course. Note that you don't have to fit the data in memory: you can stream it through HyperLogLog, only keeping the uniques count (and HLL structure) at all times.

Google's implementation of HLL in BigQuery

In BigQuery's standard SQL, HLL is available to speed up distinct counts if you're willing to trade off a little bit of accuracy. Another benefit of using HLL in BigQuery is that it allows you to make a single scan of your daily table, even if you need daily, weekly, monthly, quarterly and yearly uniques. It solves the uniques aggregation issue and saves a crazy amount of volume processing. This works because Google's implementation allows you to save and load HyperLogLog schemas. HLL schemas are intermediary results that you can query, save and load. You can count uniques, save the HLL schema, load it the next day and start counting uniques for the next day, still considering the unique values that HLL processed the previous day. If a value is present on both days, it will be counted only once, not twice like a simple aggregation would do. You can also merge multiple daily schemas and the behaviour would be the same. The math is pretty simple. If you need a daily, monthly and yearly unique count, you'd have to make 3 full scans and pay:
cost = 3 * daily volume * 365 * price per volume
cost = 1095 * daily volume * price per volume
But with the usage of HLL presented here, a single table scan would only cost you:
cost = daily volume * 365 * price per volume
cost = 365 * daily volume * price per volume
Since the price of merging HLL sketches is negligible, you're saving 67% on your query processing costs.

Practical example of how we use BigQuery's HLL at Unsplash

Coming back to our original problem: we're trying to estimate how many people and devices Unsplash photos reach, so we can report it to our contributors. We collect the logs of all our photo views, tied to an anonymous device identifier.
Counting how many distinct identifiers we have in our logs helps us understand how many devices we reach. Photo view logs are stored in daily partitioned tables in BigQuery. Each day, we count the number of distinct device identifiers with a true count and we estimate it with HLL. For that day, we store:
- The true distinct count
- The HLL estimate
- The HLL schema (which we can encode in Base64, for example)

#standardSQL
SELECT
  exact_count,
  HLL_COUNT.EXTRACT(hll_sketch) AS hll_estimate,
  hll_sketch
FROM (
  SELECT
    COUNT(DISTINCT identifier) AS exact_count,
    HLL_COUNT.INIT(identifier) AS hll_sketch
  FROM `daily-logs-20190101`
)

The true distinct count allows us to draw the daily evolution of our reach. The HLL estimate tells us how precise the estimate is by comparing it to the true count. We can estimate precision with something like:
precision = 1 - (|exact_count - hll_estimate| / exact_count)
To chart the monthly evolution, we leverage the daily HLL schemas we stored. At the end of the month, we merge the 31 schemas and get the monthly estimate. We store the estimate and the new schema resulting from the merge.

#standardSQL
SELECT HLL_COUNT.EXTRACT(monthly_hll_sketch) AS monthly_estimate
FROM (
  SELECT HLL_COUNT.MERGE_PARTIAL(hll_sketch) AS monthly_hll_sketch
  FROM `daily-hll-sketches-january`
)

At the end of the quarter, we merge the 3 monthly schemas. At the end of the year, we merge the 12 monthly schemas… or the 4 quarterly ones. The impact on processing is huge. A single daily table scan is enough to count (estimate) uniques over any time period. The rest of the processing is simply merging HLL schemas, which is very cheap.
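As a closing sketch of that last roll-up step, here is what the yearly merge could look like when the monthly sketches were stored Base64-encoded, as mentioned above. The table and column names (`monthly-hll-sketches-2019`, hll_sketch_base64) are made up for illustration:

#standardSQL
-- Hypothetical table: one row per month, each sketch stored as a Base64 string
SELECT HLL_COUNT.EXTRACT(yearly_hll_sketch) AS yearly_estimate
FROM (
  SELECT HLL_COUNT.MERGE_PARTIAL(FROM_BASE64(hll_sketch_base64)) AS yearly_hll_sketch
  FROM `monthly-hll-sketches-2019`
)

If you only need the number and not a reusable sketch, BigQuery also offers HLL_COUNT.MERGE, which merges the sketches and extracts the final count in a single step.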
https://medium.com/unsplash/hyperloglog-in-google-bigquery-7145821ac81b
['Timothy Carbone']
2019-05-15 12:15:26.051000+00:00
['Article', 'Big Data', 'Hyperloglog', 'Data Science', 'Bigquery']
SUPPORTING OUR PARENTS DURING COVID-19!
Adobe Stock

During this period of at-home isolation due to the COVID19 pandemic, parents need so many things. Even parents of “typically developing” children need a lot of support and understanding right now, as everyone tries to fill their children’s days with constructive activities. Parents of children with special needs experience high levels of stress, uncertainty, and worry for their children’s wellbeing, learning, and ability to cope in the world — and that’s on a good day! Add to that the new stresses and challenges that a world-wide pandemic presents to us all, and you have a parent who very well may be struggling to function at all. The everyday emotional and behavioral challenges of managing (let alone educating) a child with autism, severe ADHD, or other developmental disabilities can overtake everything and overwhelm even the most equipped parents. One parent I spoke with works as an administrator for a social services organization providing preschool and family support services for many different sites. While continuing to work at home during this time and juggling as many balls in the air as she is used to juggling, she now also has to be the primary special education teacher for her own 3-yr.-old son who is on the autism spectrum, a 6-yr.-old first-grader and a 13-year-old middle school student. Even though “Ashley” knows a lot about early childhood development, she is not used to being her son’s OT, Speech, and Applied Behavior therapist, let alone his pre-k teacher. She shared with me just how hard it has been trying to get her own work done, let alone meet her children’s needs. Any three-year-old can be a handful to keep entertained all day. But children with significant language, social, cognitive, and/or behavioral challenges could wear out an Olympic athlete. Here are some things you can offer, if you are a teacher, therapist, or the close, caring friend of a parent with a special-needs child right now.

The Right Perspective — Focusing MOST on Social/Emotional learning: Here is a very nice video link you can view, as a teacher or therapist. It establishes the framework or “lens” from which you should be communicating with parents. Although the video was created for parents, it may be too lengthy and/or too academic for many parents — but it most certainly contains all of the most important messages you should be communicating, addressing the most basic needs parents should recognize in their children. Video: Tips for Parents

Patience and Understanding. Parents need to not feel judged. Let them know they ARE doing a good job — and that they WILL get through this, you care, and you are there for them to provide support. Simple messages of support sent from a place of caring and commitment can go a long way! Here are some suggestions, some ways you can show that support: If the parent’s primary language is something other than English, use Google Translate to send messages the parent will understand, in addition to the English versions. Give the parent permission to HAVE FUN and PLAY with their child — the pressure on parents to be their child’s educators can be overwhelming. Parents are dealing with fears, uncertainties, worry, and guilt — over not being “enough” or not doing enough for their child. Give parents a clear message that what is MOST important is everyone’s mental health during this time. Being quarantined for this length of time is stressful for everyone.
Financial worries, isolation, and the challenge of dealing with their children’s behaviors and needs 24/7 are just too much — children and parents alike MUST have a release valve. PLAY, laughter, and silliness have got to be not only allowed but encouraged! Remind parents that rotating and organizing/limiting amounts of toys can actually increase children’s engagement levels and time spent in meaningful play.

Connection — especially now, special needs parents need connections with people who care and understand. Even MORE so. The challenges you experienced in working with the child with autism or another developmental disorder or delay are 100-fold for the parent who has that child at home 24/7. To a great degree, these parents need you now, more than ever (and I would venture to guess, many of your previously less-connected parents are more open and willing to respond to your outreach efforts at this time). But it also behooves us all to be a little more persistent and a little more innovative with the ways in which we connect with parents. Find out what works — some parents won’t answer the phone but will respond to a text message. Some are on Facebook and would welcome a connection with you there. Anything that works and is in keeping with your school’s rules. At this time, we all need to be a little more creative to stay connected.

Options and Flexibility. Schools and therapists are really good at coming up with plans, such as scheduling teleconferences with parents in lieu of the child’s regular therapy sessions. However, this may not at all be the best plan for the parent. The time you have selected may or may not work for that parent on a particular day. As handy as it may be for you, and as much as we might like to keep things in a set routine for the child, we cannot predict just whether or not the child and parent will be in the right state of mind and body to actually participate at that set time. For most parents, a far more realistic plan would entail accessing a recorded YouTube video at a time chosen by the parent, which could certainly change depending on the day. Don’t judge! Instead, understand, and put your recommendations down on paper or in an email. Remember that parents are juggling the demands of trying to work at home in their daytime jobs, with supervising the learning and activities of other children, all while trying to be available to follow your instructions for parent-directed “learning activities” for their child. If you are a teacher who is conducting Zoom lessons or sending out lessons via email, be sure you find a way to individualize for your special education student, making the session available by video, and checking in with parents/children who did not participate. The time it would take to send an email or make a phone call to a parent whose child was not in the right frame of mind to participate will be no more than the time it took to individualize a lesson for that child at school.

Social-Emotional / Behavioral tools: Because everyone’s stress is so high and children’s behaviors can be out of control when their routines are all messed up (and this is certainly one of those times!), think about what resources you can provide to parents for helping them work with their child’s social, emotional, and behavioral needs. Many of the tools that teachers have can be adapted for at-home use.
For example, giving parents the visual of a Piggy Bank and explaining the concept of “putting money in the bank each and every day” (positives) to fill up their child’s emotional piggy bank is one simple conceptual tool. The Pyramid Model resources, found at the NCPMI Website, are plentiful! Part of your teacher or therapist planning time should be committed to mining this website for resources you can share with parents. Here’s one great example: a social story for kids called “Why Can’t I Go To School?” Print it and put it together for families, or if that’s not possible, send a link to the Video Version. Also see ConsciousDiscipline.com for some great resources, particularly on helping kids with emotional regulation/calming techniques. Print out and provide to parents (if at all possible) the written resources found in the “Family Behavior Support App” provided by the Barton Lab at Vanderbilt University. Here is an example: FBSApp_Stay Calm Here is another: FBSApp_Taking Care of Yourself The Zero to Three website also has some great written resources, FAQs and videos you can look through and share. Helping Parents Cope. Find printables, interactive activities, videos, and tips: Sesame Street’s COVID19 Resources Here. The Council for Exceptional Children has provided this fantastic resource that covers literally all ages and contains every bit of information you could possibly want to share with parents. Parents need this information in small bites. I recommend creating short YouTube or Google Classroom videos each week to share this pertinent information with parents in video form so that they can take it in when they are able to. CEC Info for Parents

Tangible Help with Routines and Structure: One of the most important components of your students’ learning and progress is the level of structure you provide during the school days. Your rules are firm, and your routines are consistent. We know how critical these are for all children, most especially children with ASD or ADHD, and your ability to create and maintain the needed structure is one of the keys to your success. So now that parents must take on the role of educator, they will need a crash course in not only why these structures are necessary, but how to create them and how to maintain them. They will need all the help you can give them! I recommend providing clear, written guidance that can be printed out for parents about how to create clear family rules, how to state clear expectations before each activity, and how to plan for the inevitable push-back they will get. Remember that it is far easier to hold a limit on a student than on one’s own child. Give parents as much support as you can, emphasizing the value of limits and consistency. Here is one example of a written resource that could be provided to parents in a “sent-home” pack: Help Us Have a Good Day, provided by NCPMI.

Visual Supports: Helping parents understand the importance of creating predictable routines for their special needs child is a task most SPED teachers and therapists are very good at. Parents need not only the verbal instructions and supports to do that, but they would also benefit immensely from having some actual visual materials provided, in the form of visual schedules, “First, Then” cards, and choices. Whether your program has a drop-box for parents to pick up materials, or offers delivery services, parents must have a way to access these crucial supports.
Don’t assume all parents will be able to print things out at home — the more you can provide for them, the more likely they are to follow your instructions, and the more their kids can stay on track with routines and learning!

Care Packages: One thing’s for sure: busy, overstressed parents will not have the time or energy to browse Pinterest or Facebook for ideas for engaging educational activities for their kids, let alone to make a bunch of things. Even highly responsive parents very likely don’t have time to make playdough and games if they also must work at home during this time. So, if you are a teacher or therapist, keep in mind that “pre-made” or very simple activities using common materials that parents can quickly grab will be the most helpful. And if you’re a friend or relative who has more time and you want to help, consider making a care package for that friend or relative parent who is struggling to keep it all together, at home with a high-needs child. Here are some suggestions, for children who are on the autism spectrum, have ADHD, significant speech delays or cognitive impairments:

Sensory materials: Anything tactile, visual, or auditory, or that gives movement. A Rubbermaid container of sand, or a tub of homemade playdough or goop (parents may see all kinds of things to make on Pinterest or Facebook but lack the time to make them), with a few tools, scoopers and cups. A weighted cuddle toy (small, lap-sized animal that weighs between 1 and 3 lbs.). The same concept as a weighted blanket: the proprioceptive input that this provides (pressure on the joints) can have a very calming effect. Oil/water Sensory Bottle Viewer: there are instructions online for making these, or they can be purchased at Lakeshore Learning. How to Make a Sensory Bottle Or Purchase one. Reusable Paint with Water books: these simple books come with a small water brush, and can keep a child occupied, giving the parent a moment to breathe or work with a sibling. Find the kind that have the water brush you fill and refill, as these can’t spill and can be used over and over. Find some Here. Pre-Printed Scavenger Hunt — find one online, or write one up with clues to certain items in the house like “Find something that bounces,” “Find an animal that starts with the letter ‘B’,” or “Build something that’s taller than the dog.” This can be a fun activity for older siblings to do with younger ones, or two school-agers to do in competition. The idea is to keep them busy, make them do a little thinking and reading whenever possible, and break up the monotony of the day a little, while giving mom or dad a chance to be on a work call or get something else done. Fun Snacks — fruit snacks, raisins, Paw Patrol graham crackers, etc. Little snacks that mom can pull out as a special reward for extra good behavior or just to break up a long afternoon.

All in all, your role as an educator has really made a huge shift, from focusing completely on instructing your students, to a much broader focus on supporting and educating the parents on how to make at-home learning actually happen. A huge part of this is supporting parents emotionally and letting them know that they’re not alone. As we all ride out these uncertain times apart, yet together, let me remind you that you, too, need support. Many of the resources I’ve provided in this article also include supports and recommendations for teachers and other education professionals as well — and of course, many of you are parents yourselves.
The most important message to take in, and to communicate during these scary and stressful times, is “You are ENOUGH.” You, your children, the parents, and your students will get through this, together. Take care of yourself, give yourself permission to just do the best you can, and let that be enough, because it IS.

ABOUT SARA BEACH: Sara Beach, M.Ed., is an ECE Trainer, Consultant, Coach, Writer and Podcaster who specializes in creating systems for quality, social-emotional learning, mental health supports and trauma-responsive practices within early childhood programs. As President and Lead Consultant of Synapse Early Learning Systems, Ms. Beach is currently a Pyramid Model Master Cadre Trainer for Illinois, and provides Process Coaching on the Pyramid Model to programs and school districts. Certified as an Infant, Toddler, and Pre-K CLASS Trainer and coder, as well as an NHSA “Mind in the Making” trainer and Lakeshore contract trainer, Sara does grant writing and consulting, and trains administrators, teachers, and coaches whenever and wherever there is need. Contact Sara Beach at Synapse Early Learning Systems.
https://medium.com/@sarabeach/supporting-our-parents-during-covid-19-2a932c3d43df
['Sara Beach']
2020-04-23 13:31:35.914000+00:00
['E Learning Solutions', 'Teaching And Learning', 'Parent Support', 'Special Needs Parenting', 'Distance Learning']