Knowledge is Power — Object Omniscience and realising the full value of our assets.
Resale and secondary markets are another answer to waste

The best way of getting rid of waste is preventing it in the first place; durable goods which find homes with owner after owner over time, and never wind up in landfill, are the best available option. But many objects — cars, bicycles, electronics — typically lose value very rapidly when they are resold. Even though eBay has put a lot of liquidity into second-hand markets, the transaction costs are still unacceptably high. Goods pile up or get thrown out (or dumped into charity shops, resulting in a really large loss of value and liquidity) rather than getting listed on eBay and resold to somebody who wants them.

Selling things on eBay is hard, and half of that difficulty is the labour involved in describing what you want to sell and pricing it appropriately. The other half of the difficulty is finding buyers — the asset liquidity problem — and that problem itself resolves down to search. If I’m looking for an item, and you call it “a great bass pedal” but do not list the actual model number, how am I to know if it’s what I am looking for? But, conversely, what if you list the model number, and I only know that I’m looking for a “great bass pedal”? Everybody is using their own words to describe things, and that semantic gap in itself reduces liquidity and, consequently, the value of the assets that are looking for new owners. People guess what potential buyers will search for, but it is all guesswork. Why should it have to be?

What we need here are clear and unambiguous records about what an object is, and for those records to be equally accessible to all parties dealing with that object: buyers, sellers, insurers, auctioneers, renters, repairers, scrappers, and everybody else involved could all use the same name for the thing and share their records. People measuring and offsetting CO2 emissions could use these same record-keeping systems too. We call this structure a Mattereum Asset Passport.

The Mattereum Asset Passport pulls together structured, semantic data about all aspects of an object’s existence from a variety of experts and observers. The claims made about the object are tied to financial compensation if they are inaccurate, protecting buyers and users and making sure that experts get paid and have skin in the game. One expert could estimate the embodied CO2 emitted from manufacturing an object, and another could show proof that these emissions had been offset by (for example) tree planting or regenerative agricultural practices. We can tie the environmental impact data directly into the same system which manages color matching or vouches for the mechanical strength of the thing: product data is product data.

Our core argument is that today everything is underpriced because buyers are constantly hedging against imperfect information. Without these Asset Passports, the preventable information gap has an even more damaging form: the infamous “lemon market” which results when sellers know more about items than buyers. Without buyers having certainty about items, all markets are lemon markets. A “lemon” is a car which appears fine on the surface but has hidden mechanical problems, rendering it worthless. The higher the likelihood of lemons appearing among genuine goods, the more consumers price in the risk of buying one, lowering what they are willing to pay for every item in the market.
All purchasing decisions are impacted by imperfect information, and consumers respond by paying less for things than they would if they had perfect information about the offer. This drives the value of the entire market down, and it is still very bad for a consumer when they end up with a “lemon”, and bad for the environment when it’s scrapped. When there is certainty as to an object’s functionality, history, and quality, the value which was previously depressed by the risk of buying a lemon is unlocked. Even if the thing consumers buy is perfectly described, they can still be hurt by what they don’t know: there was a better one available, or a better one is due to be released next month. We are constantly trying to lowball our purchases to manage the risk of making mistakes based on imperfect information, and this badly hurts sellers who just want to get paid. This psychology applies as much to houses, cars, and industrial machinery as it does to consumer goods and trinkets.

Fear-based behavior from buyers reduces overall economic efficiency in a number of ways. For example, buyers will:

- Trade at a discount from perceived value (expensive things are cheap on eBay partly because buyers do not trust that what they are buying will be what they receive, so prices are depressed)
- Hesitate (reducing liquidity and harming cash flows)
- Introduce expensive middlemen to reduce risk (for example, multiple third parties in a real estate deal exist to certify facts about the house to the buyer, and all their fees reduce the price the seller can feasibly charge for the house)
- Increase “trade gravity” (people tend to trade with those they are culturally close to, rather than those it would be economically most efficient to trade with, because we imagine we have more control when dealing with neighbours)
- Buy from brands at a premium because they trust the brand on matters like quality assurance, design, and fitness for purpose (premium price behavior)

All of this is economic inefficiency which can be squeezed out of the global trade system if we can use technology to reduce the uncertainty of trade. Better product information lets you get full value for quality goods, and discriminates against garbage on the trash treadmill. All of these sources of friction and uncertainty can be removed or at least lessened. Put yourself in the position of a purchaser: do you want to buy goods with shoddy, inaccurate, vague information and no guarantees of accuracy, or do you want to pay a higher price for an iron-clad guarantee that you will get what you pay for? The premium for removing the risk from purchase transactions could be as high as 20% in some markets.

The problem, of course, is that the records describing what this stuff is were not passed on to us when we bought the stuff. To sell it, we have to effectively recreate those records, and that labour is expensive. It’s hard to get motivated to tag things manually. What we need is for that data to be stored with the object and transmitted seamlessly to all future users. Mattereum Asset Passports do this. Mattereum’s system will aggregate expert opinions about objects for ease of consultation, too. We have a new mechanism for incentivizing these experts to put in the time and energy that it takes to express their opinions clearly.
If accomplished guitarists can weigh in with their assertion that a certain model of guitar is ideal for players with hands under a certain size, for example, consumers can cut through the advertising hype and get informed answers to the questions they’re really asking when deciding what to buy. Now we are paying people for real information, rather than to stand next to things and pose with them.

Everyone going out and buying their own stuff just to try out a new product or hobby is part of what got us into our current situation in the first place. If we think of our personal caches of underutilised goods as a sort of poorly organised matter network, of which our closets and garages are just one node, then we can begin to organise this network to make the best use of a wealth of untapped resources in each other’s backyards. Look up the kit that is the best match for what you’re looking for, and then rent or buy it from your neighbour up the road who has one going spare. If all of these goods were in the Mattereum ecosystem, the groundwork would be laid for both an efficient sharing economy and an agile market for second-hand goods. All of this is based on the same need for information tied to individual physical objects: what is this thing, and what is it really good for? If we have the facts, we can design algorithms to optimize the allocation of capital (and goods) in our lives. It’s a straightforward big data problem.

This is good news not just for consumers, but for producers. If there is a robust, healthy, friction-free secondary market for guitars, cameras, bicycles, even cars, people are much more likely to buy the best they can afford, knowing full well they can easily resell it at a fair price as soon as a better model comes out (or when they no longer need it). Many professional photographers, for example, operate this way already. They use the best possible equipment and upgrade almost automatically when better gear comes out, because there’s enough liquidity in the second-hand market for good quality cameras that they are taking very little risk buying new, high quality equipment. Good for Nikon, good for Canon, good for Sony, and good for the photographer and the people buying their photographs. Efficient secondary markets are nobody’s enemy: they just increase the average quality of goods available to everybody. Only the real trash gets pushed out of having value by efficient markets. To make this vision come true, we need a new category of software.

We know this approach works, because 60 years ago VISA changed the world specifically by reducing buyer risk

VISA is an august institution. It is much older than most people think: it was started in 1958. VISA’s history is absolutely fascinating, and extremely clear, precise, and bold visions of the future powered its expansion and growth to world domination. As an organization, VISA is probably the closest precursor to blockchain technology. It exists to enable trade, globally, and does so by mitigating a set of risks which make it harder for people to buy and sell across the world. While it did not have the explicit political vision of Bitcoin, the world that VISA seemed to be creating in its early days was much the same world: endlessly fluid, seamless, point-to-point transactions around the globe. That was the vision, anyway. One of the key contributors to the success of VISA is their comprehensive strategy for reducing perceived consumer risk when using VISA to make payments.
By providing risk mitigation for buyers living with uncertainty, VISA does succeed in facilitating global trade, particularly on the internet. Because VISA charges such a high fee (2%+) and has massive market power, it can afford to provide dispute resolution to buyers and enforce sanctions on sellers (the infamous chargeback), in essence acting as a global small claims court. Relative to small businesses which depend on being able to accept VISA cards to stay in business, VISA is parasovereign. Access to justice on even such a rough-and-ready basis as VISA’s customer service reduces a buyer’s perceived risk, so people will pay for goods online with credit cards knowing that if they are defrauded they can get their money back, with the entire system being governed and overseen by VISA, Mastercard, and their peers (at a B2B level, including SWIFT). This dispute resolution plus insurance package is extremely powerful for getting trade to happen where it otherwise would not, specifically because it protects buyers from a range of risks, including some classes of information inequality. VISA makes its living from sellers’ willingness to pay high transaction fees; the increase in total transaction volume from accepting VISA more than compensates sellers for those fees.

This covers many inaccuracies in product descriptions; people can always say “the goods were the wrong color, accept my product return or I will call VISA to arrange a chargeback.” But if we could create the right kind of markets for truth about those facts, we could eliminate those errors, and correspondingly reduce global transactional friction. There’s no reason that objects should not ship with an accurate color measurement, and plenty of third parties with color testing gear exist. We should never be guessing about things like color matching or product sizing. We have the technology, we just don’t have developed markets to make the information cheaply available to the people that need it.

The power of the blockchain allows bundles of services like VISA’s to be disaggregated into marketplaces, leaving a decentralized, competitive marketplace where component services can be assembled into a system more efficient and powerful than the existing financial architecture has ever been. This is significantly beyond the scope of this article, but it’s news to nobody, including VISA. VISA itself recognizes the potential transformative power of the blockchain. The new VISA B2B Connect service uses Hyperledger to make more efficient international payments a reality, in theory competing with SWIFT. But a reasonably well designed blockchain trade system need not differentiate between B2B, B2C and P2P users; the underlying technology is secure enough to handle all users on the same backbone, as is natural to the decentralized paradigm.

All of these technical trends converge on the same vanishing point: a world in which money flows around the world as easily as information. Mattereum thinks that model is likely to be broader: information, money, goods, and services will all flow around the world as easily as information. But to get there, it’s important to understand how VISA operates, and what lessons it has for us as we come to our core challenge: redesigning the global economy so we have a planet worth living on in a hundred years.

VISA and the blockchain

Now let’s look at VISA in more depth to understand how they have become such a massive global financial infrastructure player.
The VISA model bundles six services:

- Identity (for both buyer and seller)
- Credit
- Currency conversion
- Payment rail
- Dispute resolution
- Transaction insurance

The blockchain ecosystem is building out a service architecture which roughly parallels VISA’s categories of functions:

- Identity (Sovrin, uPort, Civic, Mattereum)
- Credit (MakerDAO, Ethlend)
- Currency conversion (exchanges, Bancor)
- Payment rail (Bitcoin, Ether, DAI from MakerDAO)
- Dispute resolution (Mattereum)
- Transaction insurance (Mattereum, Etherisc)

So the argument can be made that the blockchain community is building out a “decentralized VISA.” Not an unworthy goal, as VISA and Mastercard together take in upwards of $30 billion of revenue per year on more than $10 trillion of transactions. Additionally, given that blockchain payments are generally considered to be non-reversible, some businesses may greatly prefer blockchain payment solutions to address the problem of unreliable invoicing. However, the credit card paradigm does not touch vast areas of payments; mortgages, B2B transactions (perhaps four times the size of the B2C economy), and larger payments in general are out of scope for the credit card. But the emerging DeFi (“decentralized finance”) model does not distinguish between B2B and B2C services, and fully supports P2P transactions on exactly the same basis. It’s all one. The new emerging architecture is scale-free: trade is trade is trade.

There’s another source of trouble in the existing system that the new system will fix: human error. High error rates, manual re-keying of invoice details, the lack of any kind of meaningful “API economy” for automated provisioning of goods and services, and so on add up to a much larger opportunity than competing with the credit card in B2C financial transactions. The entire process of invoicing and processing large transactions is in desperate need of optimisation. SWIFT processes over $5 trillion per day (at far lower margins than VISA, of course). But all of those transactions will be associated with labyrinthine internal bureaucracies, fundamentally ruled by the big four audit companies who make a living unpicking the natural errors — and occasional frauds — that occur when you have humans in the loop manually entering invoices into databases. Imagine the transformation possible in this space as precise, clear, machine-readable descriptions of goods and services (backed by guarantees linked to escrowed funds, for example) reduce the error rates on these transactions to near-zero over time. The trajectory for the blockchain industry has to be towards automating error-free B2B, B2C and P2P transactions, using triple entry bookkeeping concepts to unpick the invoicing maze and cleaning up the manual processes which exist at the boundary between almost all large organizations. Putting B2B, B2C, and P2P transactions on exactly the same backbones, and reducing the friction of operating over international boundaries to near zero, is going to unlock genuinely world-changing amounts of wealth.

Where does Mattereum fit in?

The Mattereum process has three parts: Asset Passports, Automated Custodians, and the Smart Property Register. Mattereum started out creating a “supreme court of the internet” for hearing disputes related to the use of Ethereum and other smart contracts in real world trade.
This business model uses the provisions of the 1958 New York Convention on Arbitration to establish a private court which users can opt in to for their dispute resolution by including simple boilerplate text in their contracts. We saw this (and still do!) as a vital missing component in getting world trade onto the blockchain platforms that are emerging. Many such specialized courts already exist for industries like construction or shipbuilding, handling hundreds of billions of dollars of disputes every year, so why not for the blockchain space? Most of our innovations in this area relate to technical evidence handling, court procedures, and cost control. Importantly, the awards made by courts of this type are easily enforced internationally. But blockchain adoption in the real world has been slow, and the dispute volume to support such a court does not exist yet. We were a little early.

The Automated Custodian is the technical and legal machinery for doing instant property ownership transfer almost anywhere in the world: an atomic swap for property. We aren’t there yet, but we are working hard on it.

We then pivoted Mattereum into its second phase: from the initial arbitration-centric model — which is well-suited to providing legally binding dispute resolution services for large volume commercial transactions — to the Smart Property Register, in which we figured out how to apply these concepts to much smaller transactions by narrowing focus to a specific subset of disputes: those related to the authenticity or qualities of a physical object. Mattereum does this by taking a whole set of disputes about point-of-fact issues and moving them out of litigation-style dispute resolution — adversarial, multi-party, win-lose, complex burden-of-proof decision-making, fault-finding and so on — into insurance-claim-style dispute resolution. We accept that insuring against the damage done by human error and occasional low-level fraud is necessary for functioning in the real world. Trade needs this. VISA proved that.

The Smart Property Register is for contract composability: it’s how you’d put your house into a pool for a Blockchain Airbnb based on sublease clauses in your rental contracts. Again, that’s over the horizon: the DeFi ecosystem has to mature a bit before that has genuine utility, although MakerDAO’s ecosystem is rapidly approaching the point where this functionality would be useful!

So that leaves our first product: the Mattereum Asset Passport, which is essentially a Self-Sovereign Digital Identity for a physical object. It’s a domain name for physical matter. It is the abstraction required to connect ordinary physical matter to the internet, in the same way that domain names were the abstraction layer required to connect existing brands, concepts and information assets to the internet. Mattereum Asset Passports help asset owners to get full value for their physical objects by removing all the doubt and friction associated with the buying and selling of material things. An Asset Passport will also usually include a Digital Twin of the object. The Asset Passport is literally an Ethereum smart contract which collects together all the relevant information for an object. This information is presented as a second series of smart contracts, which sell very specific indemnification contracts to verify that the information about the object is correct, and pay out if it is proven to be in error.
The Mattereum Asset Passport is sort of like a microDAO surrounding an object: a plurality of people all make claims about an object and its provenance, and they can all stake their money on the accuracy of their claims. Anybody who wants to take them up on those promises pays for the privilege. The more people rely on your data, the more money you get paid. People can make claims, and invite people to rely on those claims, cooperatively or competitively. It is a market for facts about things; the necessary market infrastructure to effect a transformation in how we approach the material world. There are penalties for being wrong. That paradigm applies at every level of trade, from Magic: The Gathering cards up to oil tankers.

Our initial alpha relies on quite a few Old World fiat abstractions, but as time passes every level of this structure can be automated. Payments go from fiat-enforceable promises, to escrow accounts, to third-party judicial release escrow accounts, up to proofs of insurance and reinsurance, over time. Likewise, we start by using fiat identity in the same manner as would be typical for KYC, but will upgrade to uPort, Sovrin and other decentralized identity solutions as the technology matures. In the medium term, it will all be on-chain.

An enormous number of disputes in the real world are about the attributes of objects purchased, including secondary situational factors like delivery time. But the basic dispute frameworks are that the thing was not as described, or that the thing was not fit for purpose and has to be returned or revalued. Accurate, truthful, complete information wipes out entire sets of disputes, bringing down the overall system costs for dispute resolution across all trade. It is like the difference between doing business in a clean versus a corrupt economy: there’s a threshold past which things just get easy, and the economy really begins to fly. That’s what we envisage doing for the trade in physical assets. The more we can get people to tell the truth about their offers, the more efficiently the overall economy runs. We have to reshape the incentive landscape to get full disclosure about products, and the Mattereum Asset Passport achieves this.

Essentially, we want a paradigm where, when something goes wrong, people do the equivalent of swapping insurer information and moving on with their day. We want a situation in which the normal accidents of everyday trade can be covered at a financial level, without requiring complex dispute resolution procedures in cases without contentious dispute. Ideally such a system should punish attempted fraud, rather than reward it — and for that to work, the burden of proof must be very precisely positioned not to create pathological incentives. We feel our design achieves this.

Mattereum’s business model depends on the research, development, and service design we did during the arbitration phase. The arbitration model we built to provide relatively affordable global dispute resolution for smart contracts also backs up the dispute resolution mechanisms used in the Smart Property Register. Small disputes are handled using an insurance-type model, and larger disputes or problematic misrepresentations are escalated to our other dispute resolution forums. Without this model, disputes about blockchain smart contracts and oracles would have to be handled in regular courts, which would be unfeasibly slow and expensive. Justice has to be fast and economical if it is going to support a low transaction cost blockchain economy.
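To make the claim-staking structure a little more concrete, here is a minimal illustrative sketch in Python. It is emphatically not Mattereum's contract code: the class names, the pro-rata payout rule, and the numbers are assumptions made purely for illustration. What it shows is the shape of the market for facts described above: an expert stakes money behind a claim about an object, relying parties pay a fee to be covered by it, and the stake pays out if the claim is later proven wrong.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A statement about an object, backed by an expert's stake (illustrative only)."""
    expert: str
    statement: str
    stake: float          # funds forfeited if the claim is proven wrong
    reliance_fee: float   # price a relying party pays to be covered by the claim
    reliers: list = field(default_factory=list)

    def rely(self, buyer: str) -> float:
        """A buyer pays to rely on this claim; the expert earns the fee."""
        self.reliers.append(buyer)
        return self.reliance_fee

    def resolve(self, proven_false: bool) -> float:
        """If the claim is proven false, the stake is shared among reliers (assumed rule)."""
        if proven_false and self.reliers:
            return self.stake / len(self.reliers)
        return 0.0

@dataclass
class AssetPassport:
    """Aggregates the claims made about one physical object (the 'microDAO')."""
    object_id: str
    claims: list = field(default_factory=list)

    def add_claim(self, claim: Claim) -> None:
        self.claims.append(claim)

# Usage: an expert vouches for a guitar's provenance; a buyer pays to rely on it.
passport = AssetPassport(object_id="guitar-0001")
claim = Claim(expert="luthier_a", statement="1962 Stratocaster, original neck",
              stake=500.0, reliance_fee=20.0)
passport.add_claim(claim)
fee_paid = claim.rely(buyer="collector_b")
payout_per_relier = claim.resolve(proven_false=True)
print(fee_paid, payout_per_relier)  # 20.0 500.0
```

The point of the sketch is the incentive structure, not the mechanism: the expert earns fees while their claims hold up, and loses their stake when they do not.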
High resolution information and fine-grained property rights will transform how markets work

One critical area of uncertainty, which adds friction to trade, is uncertainty about product specifications, both for newly manufactured goods and especially for secondary markets like eBay. We have accumulated near-infinite data about people and their behavior in the past 20 years; far, far more than is really safe, necessary, or appropriate. However, we have singularly failed to make a similar accumulation of data about products; neither Amazon nor eBay currently permits me to shop for a laptop by a specific port configuration (“full sized HDMI port, two USB 3.1 ports, ethernet port”), never mind by the force-distance profile of the laptop keys to tell how it will feel to type on. We still can’t buy clothes online that fit.

The benefits of high resolution product information will be combinatorial. Why should I have to guess whether a TV will fit in the back of a car before I physically pick it up and try? The data exists: every one of these objects had a digital representation that it was manufactured from, but the data disconnects in the economy cause massive inefficiencies at every turn, because I can’t get to that data without massive effort. Why? Why can’t I tell Uber the size of the TV set I want to move, and have their software automatically provision the right car for me? And if that kind of data is good for consumers, what about for freight haulers? What about for people packing aeroplanes?

The “Digital Twin” paradigm will become the standard to which everything is produced. We should always be able to access a high resolution digital copy of our property — size, shape, materials, functional properties, and so on — so that we can always access complete information about what we own. That information combines with other information to offer new, radical, exciting services. It will be like GPS all over again, or like mobile itself: exhilarating, expansive changes in how the world works, right in our hands. Mattereum will establish the new market paradigm to get those digital twins built, bottom-up and grassroots-style where necessary.

Once we have the digital twins, we can start automatically searching for synergies in matter: what fits with what, what interoperates with what, what will match what. Color, size, shape, fit, technical standards, you name it. Matter is worth more once it is searchable, and computers can figure out its affordances for us. “Yes, this TV will fit in the back of your car.” I’d pay to know that before I picked it up. So would you. We do this kind of thing for cars, and we do it for aeroplanes and aeroplane parts. We know it can work in some markets, and increased computing power and better software allow us to do it for a much broader range of things than previously. We can find the metadata for all the world’s objects and put it online. The rest is details, really.

Better information means better decisions, lower friction, and reduced risk. It means more efficient rental and second-hand markets, efficient enough that they may explode into a whole new kind of utility (think Uber and Airbnb), and all this adds up to a more efficient fundamental economy. There is no reason to expect archaic property rights norms, established in the medieval or pre-medieval period, to serve a struggling 21st century society properly. We need to make matter work more like information. We need a vastly more liquid system of property rights.
Not simply tokenization and securitization of the ownership of assets, but a transformed relationship with matter: automated scheduling and provisioning, open options (“I need a sewing machine for two days any time in the next six weeks”) and so on. We can stretch existing capital assets to serve many, many more people. This serves both commercial and ecological imperatives. That’s the core goal. We use information markets to reach it. We get the matter under computer control so that we can use algorithms to efficiently allocate it where people need it most: markets, but fully informed transparent markets. Optimal markets.
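As a toy illustration of the kind of query digital twins make possible, here is a small Python sketch of the earlier “will this TV fit in the back of your car?” check. The dimensions and the simple axis-aligned fitting rule are invented for the example; this is not a real Mattereum or Uber API, just a hint of how two digital twins could be compared automatically.

```python
from dataclasses import dataclass
from itertools import permutations

@dataclass
class BoxTwin:
    """Minimal 'digital twin': outer dimensions of an object in centimetres."""
    name: str
    width: float
    height: float
    depth: float

    def dims(self) -> tuple:
        return (self.width, self.height, self.depth)

def fits_inside(item: BoxTwin, space: BoxTwin) -> bool:
    """True if the item fits the space in at least one axis-aligned orientation.

    Deliberately simplified: diagonal placements and packaging squish are ignored.
    """
    return any(
        all(i <= s for i, s in zip(orientation, space.dims()))
        for orientation in permutations(item.dims())
    )

# Hypothetical measurements: a boxed 55-inch TV and a hatchback boot, seats folded.
tv = BoxTwin("55-inch TV (boxed)", width=139.0, height=85.0, depth=17.0)
boot = BoxTwin("hatchback boot", width=100.0, height=80.0, depth=150.0)
print(fits_inside(tv, boot))  # True: it fits lying flat along the boot's length
```

With real manufacturer data instead of made-up numbers, this is exactly the sort of answer a ride-hailing or delivery service could compute before a car is ever dispatched.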
https://medium.com/humanizing-the-singularity/knowledge-is-power-object-omniscience-and-realising-the-full-value-of-our-assets-6654af5e9826
['James Hester']
2019-12-02 16:06:28.353000+00:00
['Environment', 'Circulareconomy', 'Blockchain', 'Mattereum', 'Ethereum']
Future of marketing lies in chatbots
Modern chatbots are, at their simplest, digital assistants that deliver specific results via a conversational interface. At their most complex, they are artificial-intelligence-powered tools that will make highly-personalized marketing scalable. They will change the face of marketing as we know it.

Many among us have used bots. Few among us have loved them. That’s why we must make a stand before it’s too late. You’ve heard it: bots are the future. But what kind of future they usher in is up to us. The marketers. The sales people. The founders. The startuppers. The front line of artificial intelligence implementers. Starting right now, we must solidify the framework around how to use these bots for good instead of evil. But first, we must make a promise.

As marketers, we’re guilty of a lot. Not all of it is good. We have this tendency to latch on when we find something that works. We don’t let go until the newest, most promising marketing channel is run completely into the ground. It’s not because we’re evil. It’s a competitive world out there and we’re all just trying to find success for our businesses. If one email marketing campaign hits our lead goals, we pile on more emails. Content marketing is trending? Let’s force-feed people more content than they could ever possibly consume. Now, software is built specifically to block, unsubscribe from, and otherwise squash the very campaigns we work so hard to create. When did it come to this? It’s time to make a change.

Messaging is the new frontier of marketing. Bots give us the opportunity to tap into it by creating scalable, one-on-one interactions directly with consumers. But hold up. That doesn’t mean we should flock to messaging apps with constant, unwanted streams of vague information. Our bots should be activated only at a user’s request — we need their full permission. They should provide the most direct path from problem to solution. Their input should be rich in context, highly relevant, and as brief as botly possible. This time around, we can promise to do better.

Consumers have changed. And, quite frankly, they’re sick of what we’re dishing out. People don’t want to search your entire blog database, your newsletter is completely lost in the garbage heap that is their email inbox, and they will unfollow you in a heartbeat if they find your social feeds overwhelming or irrelevant. When a person has a problem, they will follow the path of least resistance to the answer. A bot that can meet them in the messaging interface they’re already using to provide a solution with minimal investment on their part? That’s where consumers and new marketers meet.

In their 2016 Mobile Messaging Report, ubisend, the leading AI-driven chatbot building company, found that over 50 percent of consumers said they would choose messaging apps over email to get in touch with a company. In their 2017 chatbot survey, nearly 70 percent of respondents said they would rather engage a chatbot than a human because they desired an instantaneous answer. One in five respondents said they would have no qualms about spending an average of $440 with a chatbot. Not a bad close for a robot who works around the clock for free.

All good marketers know — meet your audience where they are. Right now, consumers are using messaging apps. People no longer have to download a separate app which they will never use: by operating within already-used messaging channels such as Facebook Messenger or Slack, bots solve people’s growing frustration over the silos that apps create.
Why should a consumer have to download and open three different apps to choose a restaurant, book a reservation, and add the event to their calendar when a bot can do it all from the messaging app they already have open? Why should a busy salesperson have to spend hours researching while juggling apps and keeping track of dozens of tabs when they could just ask a bot to generate a list in seconds? It’s simple. They shouldn’t have to. And soon, they won’t. We can do better this time around.

When you pull it off, the results are incredible. Bots provide incredibly customized communication for every single user you have or ever will have. However, when it’s bad, it’s almost unforgivable. Bots operate in a fragile space — people’s personal communication channels. If they don’t deliver an efficient and delightful experience, users won’t hesitate to unsubscribe from your bot and never look back.

The single most important thing you can realize is that the challenge of building a bot doesn’t lie in the technical details. It’s in making it so human-like that people almost can’t tell they’re interacting with a program instead of a live human assistant. Bot communication is both simple and complex all at once. At a minimum, a rules-based bot should be able to offer up the solutions your user base is likely to seek. That means you will have to deeply understand your audience, determine what kind of requests they’re likely to make, and develop a way to provide satisfactory solutions. Your conversational flow is vital and should include a series of dependent questions, the answers to which will give the bot enough information to both understand what the customer wants and deliver on their request (a minimal sketch of such a flow follows below).

With AI-enabled bots, context is also hugely important. In the same way a real-life assistant would be able to unpack all the unspoken context in a request, a bot should be able to respond to the same cues. Say a user initiates a request to find a hotel room for Friday. Artificial intelligence can help the bot learn where that user will be Friday, what time they’re available to check in, whether they prefer their accommodations to have a gym, and so on. Delivery of this solution should arrive within as few steps as possible. The entire interaction, including the solution, should take place right in the messaging app. Most bot-building platforms include rich media options to help users carry out any necessary actions once their answer is delivered.

To be honest, you and your bot probably won’t get it right the first time — or countless times after that. But the amazing thing about AI-enabled bots is that they get more and more useful every time they interact with data. And that’s exactly what a bot should do: be useful. Useful bots have a way of attracting referrals and retaining happy customers. Your bot should help its user cut down on the micro-decisions and actions they have to take each day. What it should not do is function as another communication channel where you push out non-customized, blanket content.

Just because bots are the newest marketing stream doesn’t mean they’re suddenly the only marketing stream. For more complex tasks, a form or a live customer service agent might still be the best way to serve the consumer. Even for less complex tasks where a bot usually works, a customer should have the option to engage another channel or a human if they aren’t able to reach a satisfactory solution. Pick one thing and do it extremely well.
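Here is that minimal sketch of a rules-based, slot-filling flow built from dependent questions, written in Python. The restaurant scenario, slot names, and prompts are invented for illustration and are not tied to any particular bot platform; a real bot would sit behind a messaging channel rather than console input.

```python
# Minimal sketch of a rules-based conversational flow with dependent questions.
# Slots, prompts, and the restaurant example are illustrative assumptions only.

FLOW = [
    ("cuisine", "What kind of food are you in the mood for?"),
    ("party_size", "How many people will be dining?"),
    ("time", "What time would you like the reservation?"),
]

def run_flow(known=None, ask=input):
    """Ask only the questions whose answers the bot does not already know."""
    answers = dict(known or {})
    for slot, prompt in FLOW:
        if slot not in answers:
            answers[slot] = ask(prompt + " ")
    return answers

def fulfil(request):
    """Turn the completed slots into one direct answer for the user."""
    return (f"Booking a table for {request['party_size']} at a "
            f"{request['cuisine']} restaurant at {request['time']}.")

if __name__ == "__main__":
    # Context the bot already has (e.g. from earlier messages) is skipped,
    # which is exactly the "unpack the unspoken context" behaviour described above.
    request = run_flow(known={"cuisine": "Thai", "party_size": "4", "time": "7pm"})
    print(fulfil(request))  # Booking a table for 4 at a Thai restaurant at 7pm.
```

The design choice to ask only for missing slots is what keeps the interaction as brief as possible: the better the bot's context, the fewer questions the user ever sees.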
Always consider what duties you want your bot to perform and on which platform(s). What works on Facebook Messenger might not be a great fit for Kik. Research shows that people are very open to interacting with customer service bots, even if they don’t immediately realize that’s what’s happening. That’s still no excuse to try to pass your bot off as a human. People don’t like feeling tricked.

Bots must be held to a high standard now more than ever. As this technology rapidly reshapes marketing as we know it, the way businesses leverage it will set the tone for what’s to come. That’s why we as marketers and makers must understand our role in ensuring we don’t scorch another channel before it has the chance to radically change communication as we know it.

Bots have unlimited potential. Right now, it’s hard to understand the full scope of just what bots can do. Sure, they can save you time and stress by making mundane, repetitive tasks a thing of the past. But imagine an AI-enabled bot that can help keep track of health conditions, refill prescriptions, and automatically alert medical professionals if it senses something is off.

Today’s bots are often divided into two categories: informational bots and utility bots. Informational bots give users a new channel to consume information, such as breaking news alerts based on your interests. Utility bots help users complete an action or solve a problem via a user-prompted transaction. This might include enlisting a bot to deal with the nightmare of updating an airline reservation, or pulling up your Google Analytics data so your campaign stats are always at your fingertips. For the first time, that’s not just a figure of speech.

The world, and the market, is ready for bots. Just look at WeChat. China’s WeChat shows the way to social media’s future. The app’s 700 million users return to it up to 10 times every day to do everything from conducting business calls to managing their personal finances. What can you do to make your bot that streamlined and useful? If you’re a good marketer, you’ll do your best to build a tool that feels welcoming without being imposing and helpful without being overwhelming. You’ll understand that bot communication should solve one problem really well — and do it only when prompted.

It’s not that people don’t like bots. In fact, it’s quite the opposite. When the basic rules and boundaries are respected, people are pleased to work with bots. They fill a need. They provide value and delight at the same time. There is no doubt bots are the future. Whether that future is a utopian one or a dystopian one comes down to how well we follow this framework right now. We have the power to build honest-to-goodness relationships with our consumers. And consumers are ready to trust us again. It’s a great spot to be in. This time around, we will do better.
https://medium.com/predict/future-of-marketing-lies-in-chatbots-a10d124aa39d
['Riti Dass']
2018-10-25 20:23:08.843000+00:00
['Chatbots', 'Bots', 'Marketing']
How To Create Global Variables In Python
The values defined inside a function are called local variables. These values disappear after the function runs; for this reason, a local variable cannot be accessed from anywhere else in the program. The function runs and we see the value of x on the screen, but when we try to print x again after the function has finished, we find that it is gone. At the same time, changes made to variables within a function do not affect the original.

If a variable named x is defined both globally and locally, the global variable with the same name will keep its original value, because the function creates a new scope of its own. Output: 8 10. As you can see, the global x is still 10.

What if we do not want the changes we make within the function to be lost? We can use the global keyword to create a global variable inside a function. Output: 10. The earlier attempt produced an error, but this one does not, because we made a global definition inside the function.

The variables we create outside of a function are called global variables. Global variables can be used both inside functions and outside them. Output: Call x from within the function: Python Python.

Finally, the reason for the error in the earlier example was that the variable y was defined inside one function and so could not be used in another function. We can fix this problem by defining y globally, outside of the function. The sketch below walks through each of these cases.
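The code for these examples is not reproduced in the text above, so what follows is a reconstruction consistent with the outputs quoted; the exact variable values beyond those outputs, and the shape of the final two-function y example, are assumptions.

```python
# Reconstruction of the post's missing examples (variable values assumed where
# not fixed by the quoted outputs).

# Example 1: a local variable disappears once the function returns.
def set_local():
    x = 5          # x is local to set_local
    print(x)       # visible only while the function is running

set_local()
# print(x)         # NameError: name 'x' is not defined — x no longer exists here

# Example 2: a local x shadows the global x; the global keeps its value.
x = 10

def shadow():
    x = 8          # a new, local x inside the function's own scope
    print(x)

shadow()           # 8
print(x)           # 10 — the global x is untouched (Output: 8 10)

# Example 3: the global keyword makes the assignment stick.
def set_global():
    global x       # refer to the module-level x instead of creating a local one
    x = 10

set_global()
print(x)           # 10 — no NameError this time (Output: 10)

# Example 4: a global variable is visible both inside and outside functions.
x = "Python"

def show():
    print("Call x from within the function:", x)

show()             # Call x from within the function: Python
print(x)           # Python

# Example 5: a variable created in one function is not visible in another;
# defining y globally (and updating it with the global keyword) fixes that.
y = "shared"

def writer():
    global y
    y = "updated by writer"

def reader():
    print(y)       # works because y is global; a purely local y would raise NameError

writer()
reader()           # updated by writer
```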
https://medium.com/python-in-plain-english/global-and-local-variables-in-python-81b1664357a6
[]
2020-12-15 09:57:14.087000+00:00
['Programming', 'Python Programming', 'Python', 'Software Development', 'Data']
Winning the War on Social Media Advertising
Bad data and lazy marketing are killing your social advertising potential. No, really. One doesn’t have to look too far to see a plethora of content published about the evil of digital advertisements and their decreasing quality. Make no mistake — there is a war being waged on paid social media advertising. In this post, we’ll see how the “War on Social Advertising” is a house of cards built by irresponsible reporting, how smart marketers are using good data and content to advertise effectively on social media, and the best practices for being smart about your paid social efforts, too.

WHAT IS SOCIAL ADVERTISING?

Chances are, if you’re reading this you’re not confused about what we mean by social advertising. But I’ll define it anyway, so there’s clarity and proper expectations around precisely what we’re going to talk about in this post. When we refer to “Social Advertising,” we’re referring to paid digital ads being served on one or more social media platforms, particularly Facebook, LinkedIn, Twitter, Instagram, and Pinterest. There are other platforms, and other definitions of “social” and “social media,” but what we’re concerned with today is the continued assault on advertising within social media platforms, and why reports of its ineffectiveness are missing some crucial truths about why it sometimes fails. Because in our experience at E3 (Element Three), it is working.

SOCIAL ADVERTISING VS. SOCIAL MARKETING

One distinction that is absolutely crucial to make: social advertising is not social media marketing. We’re talking about paid advertising on social platforms, including display ads, video, and other platform-specific ads like Facebook’s Canvas and Carousel ad types. What we’re not talking about is posting offers, discounts, or other marketing material directly into a company’s organic social media channels. As a rule, our digital marketing experts at Element Three tend to shy away from organic social media for the purposes of strategic marketing planning and execution to meet client goals. That it’s not a service offering we cater to is not the story — because if we could find helpful, effective, and profitable ways to help our clients with social media, we would do so. Instead we face the reality of what Google’s Digital Marketing Evangelist Avinash Kaushik calls “the broken promise of Marketing Utopia”: how companies quickly diluted the building of meaningful relationships within social platforms with selling (and yes, advertising). Where once the promise of social was readily available audiences interested in “your thing,” marketers instead created an environment where the platforms, informed by the desires of their users, became smarter about what users engaged with and truly wanted to see. This led to what Avinash calls the “Zuck Death Spiral”: Facebook’s algorithm being updated to slowly learn what a user wants, and to remove what companies wanted them to want instead. It’s a reality you’re all too familiar with if you’ve tried banking on organic social to help achieve marketing results:

“Real humans on Social platforms quickly got turned off by these low-grade Social contributions/posts by companies. That meant humans (us!) refused to engage with them. This was noticed by Team Zuck, who started to slowly turn down the presence of company posts in User feeds. This led to less Reach for brands. Which in turn led to even fewer customer interactions for content posted by brands.”

The “Zuck Death Spiral” is likely not confined to just Facebook.
Other platforms have gotten more and more savvy about serving their users content similar to what their users engage with… and, in response to organic social engagement diminishing for brands as a result of over-saturation of salesy content, those same platforms created means to advertise, and to let users know and recognize those ads when they are seen. The reality of organic social — that it’s worthwhile to maintain social posts that feature lifestyle content and other brand-specific material, but unrealistic to expect commercial conversion and monetized activity from posts — helped create the need and market for social ads. It’s too bad that some of the same short-sightedness, misunderstandings, and critiques surrounding social media to begin with have been attached to social ad capabilities as well.

“PAID SOCIAL MEDIA DOESN’T WORK FOR…”

Take your pick of claims here. “Paid social media doesn’t work for…” Millennials. Or baby boomers. Or small business. Or… pick your poison. But I beg to differ. I believe the reason we see so many “Social Ads Don’t Work…” and “The Death of Paid Social!…” articles is lazy marketing, lazy journalism, and bad data.

According to a Forbes article on a study of millennial ad-filtering habits, “millennials communicate with each other far more than any advertising campaign can.” Mostly because they rely on friends, word-of-mouth, etc. Just a paragraph down, though, look at what the same article has to say: “Fewer inputs were necessary for male decision buying than for female decision buying. There’s also a difference in terms of categories, so that the female millennials are more likely to focus on health and beauty aids, and the male millennials are far more likely to focus on electronics and technology.”

Wait… what? Paid social ads are ineffective because millennials are more likely to talk to one another… but when they do interact with ads, there’s a difference between the sexes in how many inputs they need and which categories of products they focus on? This indicates that there is in fact a time and a place where social ads are effective for millennials. The first statement uses bad data and lazy marketing to lead to a generalization that “ads don’t work for millennials.” But the second statement — that ads within particular segments and timeframes do work — that’s a different story.

Within the original interview with eMarketer, Nora Ganim Barnes, chancellor professor of marketing and director of the Center for Marketing Research at the University of Massachusetts Dartmouth, notes that traditional “push” style marketing is not effective among the younger generation. She notes, “Millennials are looking for information before they make purchases, but they’re looking for it from their trusted sources, and their trusted sources are not the manufacturers or providers of products.” The rest of the interview highlights all the places where millennials do interact with brands, from apps to sharing content and more. It mentions ads once, specifically in how millennials filter out the things they do and don’t want. What the article doesn’t do is damn advertisements outright for the millennial audience.

Another study, executed by CivicScience and reported on by eMarketer, claims that “very few US internet users have made a purchase based on ads they saw on social platforms, like Facebook or Snapchat.” Directly below that claim is a graph showing that 16% of respondents had made a purchase based on a Facebook ad… and even more striking data.
According to the study, 45% of respondents have never purchased based on an ad on social media sites, and 35% of respondents don’t use social media at all. Let’s look at those numbers for a second. 45% of those surveyed have never made a purchase based on a social ad. 35% of those surveyed don’t use social media. If I think about those numbers for a second, the data analyst in me gets angry. There are some pretty irresponsible conclusions being drawn from this survey.

For one thing, how does “45%” equal “very few”? The article title — “Social Advertising Isn’t Really Driving Conversions” — and the first statement make it seem like paid social ads are totally ineffective. But the inverse statement — that 55% of respondents have made a purchase based on a social ad — seems to me to be rather positive. Maybe I define “very few” and “isn’t really” differently, but in advertising, even in aggregate, 55% is pretty damned good. I’ll bet there are a lot of CEOs out there who would love their close rate to be around 55%, especially just for people they advertised to on social media.

Another point of frustration: how is this 45% number figured? Does it include the 35% who don’t even use social media? I need to know this answer. Because if it does, that data is even further skewed. If I remove those who don’t use social media at all, then my actual conversion engagement number of 55% is that much greater, because now I’m comparing apples to apples — we’re only looking at those who use social media and, therefore, are actually relevant to my social media strategy. I wouldn’t use the data from a survey that asks, “which guitar is your favorite: Gibson, Fender, or Paul Reed Smith” and fails to eliminate the respondents that don’t have a guitar. If I was trying to find out a favorite guitar type, I’d ask people who actually buy guitars, and I wouldn’t pay much attention to the folks who said they didn’t own one.

The mistake that the Forbes and eMarketer articles make is in drawing conclusions that are not supported by the data they’re using. Rather than breaking engagements within social media and advertisements down to constituent parts (buying cycles and phases, content, platforms, audience segments), they make an overall generalization that audiences don’t like ads, or being marketed to, or social media in general, and so social advertising must not work. In short: you can’t make assumptions about the quality of advertising with generalized bad data. It takes a more practiced, patient, and dedicated hand to tell the whole truth, the real story — not in macro-generalizations, but in micro-moments of effectiveness. Before we get to that, let’s look at what marketing keeps getting wrong with paid social media.

THE REAL REASONS WHY PAID SOCIAL ADS AREN’T WORKING

In our experience, there are six reasons why paid social ads aren’t working (and you could probably apply this same list to many other marketing activities). They fall into two buckets:

- Marketers Are Lazy
- Targeting and Segmentation Are Off

The second bucket, around targeting and segmentation, is often an outcome of the “marketers are lazy” bucket, but not always.
There are three unique reasons within each bucket, and they break down like so:

Marketers Are Lazy:
- Marketers don’t have data
- Marketers aren’t adding genuine value
- Marketers don’t know the value of a lead

Targeting and Segmentation Are Off:
- Marketers are targeting the wrong channel
- Marketers are segmenting the wrong audience
- Marketers are utilizing the wrong offer

Let’s take a look at each in turn to diagnose why it’s easy to make general assumptions around paid social advertising, and how to prevent the failure of your own paid digital efforts.

MARKETERS DON’T HAVE DATA

Marketers don’t always have every piece of the marketing puzzle. They might be missing market research, first-party customer data, or other helpful information. It never fails to astound me how many companies or products are launched without real market research or comprehensive data. But as marketers we’ve figured out how to grow tough skin, make educated guesses, and learn as we go. Still, the lack of comprehensive data — particularly around customer demographics and psychographics — has the potential to hamstring any campaign. Market research is generally worth what you pay for it: it can save thousands of dollars in guesswork, starts and stops, and stuttering campaigns in the long run. I get that sometimes a failure to research isn’t a matter of laziness; it’s a matter of budget, organizational buy-in, prototyping, and a dozen other factors. It does happen for legitimate reasons. So what to do? In the absence of full market research, there are other methods to gather real, meaningful data about audience type, size, and behaviors online — and wouldn’t you know it, paid social is an excellent tool for just that. Just don’t launch head-first into your one big, killer idea until you know there’s an audience there who cares.

MARKETERS AREN’T ADDING GENUINE VALUE

This one goes hand-in-hand with the above lack of data, and is commonly matched by a decidedly un-humble, non-customer-centric approach of assuming “I know what the customer wants.” The biggest killer of campaigns, whether it’s paid social, display, search, organic, email or otherwise, is a misunderstanding of value to the consumer. As Nora Ganim Barnes said above about millennials, ALL audiences are getting better at filtering out the internet trash. And it’s true of more than just the younger audiences that we’re more apt to trust sources we know, be they friends, mentors, or colleagues, rather than fly-by-night (to us) companies or unknown quantities with unknown value. The value has to be focused on the end consumer, and it has to be targeted to the right moment in time to be valuable to that customer. We’ll get to offers in a minute, but suffice it to say for now: not understanding your audience, their needs and desires, and how your product or service has a unique selling proposition that cures those needs and desires, means that you are not ready to advertise. You are going to fail until you understand the value to the customer. Simple as that.

MARKETERS DON’T KNOW THE VALUE OF A LEAD

Here’s another campaign killer built around value — only this time, it’s the value of the customer to your organization. Marketers the world over are familiar with the arguments with sales that always go something like this: “The leads you’re sending me aren’t qualified/aren’t good/aren’t what I need/aren’t interested in what we have to sell.” Sales guys want it all: they want the best leads, cheaply, and lots of ’em.
And sometimes we can do it all at the same time — but not without understanding how much a lead is worth to our organizations. We’ve written before about calculating a customer’s lifetime value and using that to calculate the value of a lead. The quick math version says you find out what the average customer spends over the course of their average engagement with your company — their lifetime value — and then work backward across your lead and acquisition funnel from customer to opportunity, opp to sales qualified lead, SQL to marketing qualified lead, and so on, until you have the value of whatever stages are appropriate to your organization’s sales and marketing funnel. (A short worked sketch of this math appears at the end of this section.) This is extremely important. Why? Because you will use this number to justify your very existence — or, more directly, to justify the budget for your paid advertising activities. Campaigns that don’t have enough budget to be effective are hard to salvage. You’re living on a hope and a prayer, wishing for success. The key to unlocking budget is knowing the value of the lead you are going after, and being able to forecast results. One thing is for certain — when you’re getting leads under the average cost per acquisition, it’s a LOT easier to get more budget for the campaign. Know the value of the customer, and use it to your advantage. Anything less is lazy.

MARKETERS ARE TARGETING THE WRONG CHANNELS

The rubber meets the road here when the lazy marketer is driving. One of the critical errors we repeatedly make in marketing is not being where the customer is. You can call this “targeting the wrong channel,” or platform, or… several things. The overriding point is targeting the wrong place. I know you’ve heard the “right message, at the right time, to the right person, with the right offer!” platitudes enough to make you sick. The thing is, they’re true. Where and how you are targeting truly matters, whether that’s social channels, or specific keywords, or time of day. And not having the right data or the means to test until you have the right data is likely the reason marketers keep screwing this up, unless they’re willfully targeting with no rhyme or reason, in which case marketers are just being stupid. We’ve already seen how bad data and lazy investigation into what was really being said or really going on has led journalists astray in reporting the effectiveness of social advertising. It would be a shame if your marketing efforts suffered the same silly fate.

MARKETERS ARE SEGMENTING THE WRONG AUDIENCE

Failure to understand the customer is again at fault here. Targeting the wrong person is just, well, dumb. What a waste of marketing dollars! The thing is, I know some of you are saying by this point, “hey, what if I don’t know who my audience is?” And that’s fair — we know that sometimes even the smartest marketers just don’t have the data, and don’t have the power to hit pause and make an organization do the market research to figure it all out. But paid social and other paid channels where targeting by demographic, interest, and behavioral factors is available are excellent ways to narrow down your audience and truly start to segment them into useful groups for advertising and marketing purposes. You have to start somewhere, I get it, but staying general and generic forever is a death sentence for your paid social (and any other marketing) efforts. Test, figure it out, move on. Don’t let the lack of data make you lazy.
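To make the backward-from-lifetime-value math concrete, here is the promised quick sketch in Python. The funnel stages and conversion rates are assumptions for illustration; the $25-per-lead, 1-in-20 close rate, and $5,000 lifetime value figures are the illustrative numbers quoted later in this piece, not benchmarks. Plug in your own organization's numbers.

```python
# Work backward from customer lifetime value to the value of each funnel stage.

LIFETIME_VALUE = 5000.0      # average revenue from one customer (illustrative)

# Fraction of each stage that converts to the next one (assumed rates).
FUNNEL = [
    ("marketing qualified lead", 0.40),   # MQL -> SQL
    ("sales qualified lead",     0.50),   # SQL -> opportunity
    ("opportunity",              0.25),   # opportunity -> customer
]

def stage_values(lifetime_value, funnel):
    """Walk the funnel from the customer backward, discounting by conversion rate."""
    values = {"customer": lifetime_value}
    value = lifetime_value
    for stage, conversion_rate in reversed(funnel):
        value *= conversion_rate          # an earlier-stage contact is worth less
        values[stage] = round(value, 2)
    return values

print(stage_values(LIFETIME_VALUE, FUNNEL))
# {'customer': 5000.0, 'opportunity': 1250.0,
#  'sales qualified lead': 625.0, 'marketing qualified lead': 250.0}

# The same logic justifies ad spend: at $25 per lead and a 1-in-20 close rate,
# a customer costs 20 * $25 = $500 to acquire; against a $5,000 lifetime value
# that is a 10:1 return.
cost_per_lead, close_rate = 25.0, 1 / 20
cost_per_customer = cost_per_lead / close_rate
print(cost_per_customer, LIFETIME_VALUE / cost_per_customer)  # 500.0 10.0
```

Once stage values like these are on the table, "is this cost per lead too high?" stops being a matter of opinion and becomes a comparison against a number.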
MARKETERS ARE UTILIZING THE WRONG OFFER

Lastly, one aspect that everyone should really understand at this point is that marketing with the wrong offer is a quick and easy way to turn off potential customers. You have got to understand what customers want. It’s even better if you understand their behaviors at each appropriate stage of the funnel, target accordingly, and accept that some of your ads may fall outside the perfect zone of where that customer is. Marketing automation can help this a ton. Knowing how to segment an audience based on activity, cookies, and other factors can really help you become effective with your offers. Not doing this is unthinkable in today’s data-rich and customer-savvy environment. The other side of the coin is knowing how to massage an offer to perfection — and knowing when the offer is the problem, or when something else is at fault. If you guessed where this is going — to testing and conversion rate optimization — congratulations, you get the golden egg. Your work, your offer, isn’t done at launch. There are ways to improve. Either way, getting your offer right is critical. Throwing something out there with little to no value to your potential customer is not going to work, and expecting it to do so is just lazy marketing.

HOW ELEMENT THREE GETS IT DONE

So how does social media advertising work? By now you know what we’re going to say: paid social ads work well because of great customer data that is utilized to develop the best marketing segments, targeting, and content offers. Here are the social advertising best practices that we use at Element Three to deliver ads that rock.

SOCIAL ADVERTISING STRATEGY

One thing our team prides itself on is being able to listen to a client’s goals, needs, and sales data, and craft the right campaign to reach their goals. This doesn’t always mean having the right answer right out of the gate — some campaigns start off low and slow, spreading budget across many channels with a broader range of demographics, psychographics, or offers, and using the early results as a way to zero in on what will work best within a campaign to deliver on our objectives. Our approach also tends to focus on funnel-stage targeting — or better yet, full-funnel targeting where different tactics and offers are working in concert to reach, engage, qualify, and sell prospects across multiple stages of the funnel. We do our best to engage with every client in both short-term and long-term paid digital advertising strategy, and how we use, test, improve, and lean into social advertising always has a place in how we approach advertising campaigns.

SOCIAL ADVERTISING TRENDS

There is something to being first to a new channel among your competitors. Seeing what’s going on in your industry and surrounding areas, not just topically but technologically, can be a huge key to success. We tend to stay away from “fly by night” topics. Newsjacking to promote your own thing only dilutes the value and quality of your content, and tends to attract folks not really interested in what you have to say and do for them. Ah, but jacking technological trends — like new ways to advertise on Facebook, Instagram, Pinterest, LinkedIn and more — can have a major impact on reaching audiences in fresh and interesting ways. Social platforms offer new mediums for advertising, and getting in early, where pricing is low and opportunity is high, can make the difference in advertising campaigns.
Likewise, seeing when channels or tactics within them start to dry up is key to knowing when to shift. Element Three keeps a keen eye out for both — the new opportunities and the sunsetting ones — and helps craft campaigns that have maximum impact and reach based on those trends.

SOCIAL ADVERTISING BENCHMARKS

There’s a right way and a wrong way to report on paid social ads. Measuring social advertising should always start with the client’s goals, not the channel’s metrics. Element Three asks all kinds of questions around a client’s goals — lifetime value of a customer, lead-to-close cycle, typical cost per acquisition — and uses those as barometers to pair with goal forecasting, budget estimates and more to craft campaigns with real targets, dynamic and agile goals, and positive results. In other words, if you’re interested in leads, we’re not going to target impressions as a key metric, and we’re not going to sugarcoat a campaign report with fantastic impressions that missed the overall (conversions-focused) goals. You have real marketing needs — volume, cost, quality, whatever they may be. We craft campaigns around your overarching marketing goals and work backwards to campaign goals, KPIs, and benchmarks that cater to them. There are reasons why paid digital is one of our fastest growing service offerings, and one of the major ones is that we know how to find and bring value.

SOCIAL ADVERTISING SPEND

In paid advertising, budget is one of the most important factors in how we strategize and execute campaigns. Having the right amount of budget for the goals you want to achieve — and a realistic expectation of return on ad spend/return on investment — is critical to campaign success. If you don’t know the demographic and psychographic information of your customers, it’s going to take more money (and potentially a longer timeline) to get amazing results. The great thing about paid social is that audience targeting can be improved with positive and negative engagements, letting you learn more about your customers as you go. Activity can be gauged not only on engagement, but on conversions, helping to create lookalike models of your best buyers. But without a history of success or the data to program this from the get-go, it will take time and budget to test. Some strategies start “low and slow” and test broadly across many channels and audiences in order to learn where the best target is. Having the budget to test and learn first, then pivot spend to the most effective channels and crank your campaign into overdrive, is crucial to reaching maximum value for your campaigns. Balancing budget and overall cost per acquisition against lifetime value is also a key component of measuring campaign effectiveness and dedicating appropriate spend to your efforts. “That cost per lead is too high!” can be a valid response — but not if it isn’t measured against the actual value of customers. If we can get you leads for $25, and your sales team closes 1 out of 20…you might think $500 is too high for a cost per customer. But if your customers are worth $5000, that’s hot — for every $1 you give, you make $10. That’s not a bad return at all. Your willingness to spend should reflect the size of that ROAS ratio.

ADDITIONAL BEST PRACTICES

There are dozens of other factors that can help make social media advertising campaigns effective, but here are just a few more that we’ve seen become critical factors in campaign success.
Campaign Duration: audiences have limited attention spans, so it’s important to refresh offers, relax immediacy in campaigns, and give prospects room to breathe when appropriate.
Creative Refresh: likewise, audiences aren’t going to respond to the same ads they’ve seen time and time again; creative needs to be rotated to maintain engagement.
Advanced Reporting: there’s nothing like real-time reports on your campaign success, and we set up dashboards for just that — a place to log in and see results as the campaign progresses.

If you’re not keeping these things in mind when you’re building and executing your campaign, it might not matter if you’ve gotten everything else right — you may end up with a big ol’ turkey of a failed campaign.

WHERE PAID SOCIAL ADS ARE GOING

Unless things change in a major way (like, Facebook stops nerfing business posts, or somehow disappears altogether), social advertising isn’t going anywhere. Despite the Zuck Death Spiral, brands still need a social presence, and social ads are the best way to maximize your brand’s potential in that space. To win in the War on Social Advertising, the first step is simply to be in it. In too many cases lazy marketers, lazy journalists, and bad data are telling a story that you shouldn’t believe. They’re telling you that social advertising doesn’t work, despite the fact that the data doesn’t back that claim up. They’re telling you that it’s too hard, simply because they aren’t adding value or they don’t understand the value of their leads. They’re failing because they targeted the wrong channel, or they segmented the wrong audience, or they utilized the wrong offer — not because social ads themselves are wrong. Don’t get drawn into the wrong conclusions just because the competition and pundits are lazy, or don’t understand what they’re seeing. Don’t reject social advertising because of the herd. Stick with it. Do it right. Find the right strategy for your needs, know the trends and follow the ones that make sense, set the right reporting benchmarks, and spend intelligently. Your ads will perform, and the next time you see a headline about the death of social advertising, you won’t be afraid. You’ll laugh.

Perspective from Dustin Clark, Digital Marketing Director at Element Three
https://medium.com/element-three/winning-the-war-on-social-media-advertising-3a31bfb6ebed
['Element Three']
2017-10-13 13:28:12.434000+00:00
['Marketing', 'Social Media Marketing', 'Paid Social Media', 'Paid Advertising', 'Digital Marketing']
Why I’m Not Letting Your Nazi Comment Stand
Why I’m Not Letting Your Nazi Comment Stand I’m practicing for when it matters While I appreciate your comments about the serious nature of work, and I am familiar with some of your arguments, I am puzzled by your willingness to mischaracterize my thoughts in your recent comments. I feel the need to affirm that language can still be used positively on behalf of workers and that the workplace is not a hopeless case; most astonishingly, it also appears I must assert that taking pleasure in using language clearly and well doesn’t make someone a Nazi. First, as you should know, those disciplinary write-ups we’ve been discussing reflect processes that emerged from collective bargaining, as a better alternative to draconian corporate policies that fire people without warning and without cause. Moreover, if you’ll look at the example, the write-up is describing abusive and violent behavior, with an eye toward mitigating it. We do that sort of thing now, you know. Second, I must say that saying that all of that activity is corrupt and cultish is profoundly cynical and discouraging to those who still try to make a positive effect in the workplace. I agree that there is much to say about the fraught nature of work and its effect on the human soul, but I’m taken aback by your assertion that frivolity is never appropriate in any situation where anything serious ever happened. If that’s your position, then I suppose you might want to avoid articles tagged “Humor.” As I was reading along, I thought I might beg your pardon if my lighter approach to problems in the work place offended your sensibilities. Finally, I must admit that when you suddenly jumped to the concentration camps in Nazi Germany, AND imagined that someone like me was like a zealous clerk happily checking off people to be executed, I was brought up short. How’d we get way over there, I wondered? I choose not to let that careless and gratuitous offense stand. I thought about it hard. My friends said, “Don’t bother replying — he’s not worth it.” It’s certainly not up to me to say whether someone is “worth” a response or not, but this discourse is worth it. We are exchanging ideas here. In this space, we were talking about how to write clearly and well for institutions, to avoid obfuscation and cut through the fog. But your comment derailed that discussion, and you took the meeting onto a pretty sharp tangent and left it there. So I’ve thought more about what you brought up. To be sure, those wrongs in the workplace are serious and need to be addressed. The critiques are real, and I would be interested in your considered analyses of these issues. The masses, the proletariat are living in society bent on erasing us. Our relationship with work since the Industrial Revolution has changed our souls, undermined our families, fragmented our culture, destroyed our health, trashed our environment, and dimmed our future. We don’t even know what work is any more. Foucault called madness the “absence of work” — what will this mean in a future where “work” means staring at a computer monitor, the way you and I are each doing as we have this discussion? All of these things are quite true. And yet, you offered no ideas about the proper uses of rhetoric in the face of these crises. In fact, if I understood you correctly, you seemed to indicate that there wasn’t any use in trying. Instead, seemingly out of the blue, you fired off a Nazi bomb. 
It was a random, thoughtless suggestion of an insult toward someone you don’t even know, the ever-popular she-sort-of-sounds-like-a-Nazi reference, because I suggested that writing up some blowhard who lost their temper in the office might be satisfying — yes, even fun, if the blowhard was a particular pain in the neck. I had the audacity to be frivolous. I must therefore be a Nazi, and a stereotypical bureaucrat, and a gendered stereotype secretary too, for the trifecta. So, no. I cannot let that stand. It is not as if I were gleefully checking off the names of those about to be executed. You have misunderstood the analogy, sir, in your eagerness to hurt. So who do I say we should be? Who would I be in the analogy? Why, I’d be the one doing the damned write-up. I’d be the one documenting for posterity the crimes against humanity committed by those employed by that evil institution. That’s who I would be in the analogy. Would it be “fun” to recount those serious and deadly deeds? No. Of course not. In that scenario, I would choose a different word. If I were called thus to speak truth to power, and I had the skill and the courage, it would not be fun. It would be glory.
https://medium.com/notes-on-the-way-up/why-im-not-letting-your-nazi-comment-stand-4f7005f58be3
['Rev Dr Sparky']
2019-09-24 16:22:11.897000+00:00
['Justice', 'Rhetoric', 'Life Lessons', 'Inspiration', 'Writing']
Principle — 4 Little Tricks to Share Your Idea in an Effective Prototype
How to prepare a clickable prototype that contains all interactions.

How many times have you tried to explain your excellent design to a developer? Whether using gestures, storyboards, or simple digital prototypes with finished design views — it’s not easy to communicate your vision. Principle gives you the power to show basic interactions in a few easy steps. After reading this article, you will be four steps closer to expressing your vision. Here is the full-sized Calendar View I’ve prepared for my HUB Mobile Application. First I tried to explain my vision of the interactions to my friends, using the InVision app with a few screens uploaded. It didn’t work. Then I spent 2 hours preparing this prototype. It finally opened their minds.

The Beginning

There is one thing you need to remember if you want to use Principle: preparing your designs in Sketch lets you simply copy-paste full components from one tool to the other, saving a lot of time. Be sure that artboards in Principle are the same size as those from Sketch. Thanks to that, all the parts will fit each other.

Preparing Principle Artboard

After preparing your design in Sketch, you need to think about the animated layers. Each motion is somewhat different, but in this case you need to divide them into two categories. Motions like fading or moving on the y or x axis don’t require extra work. You can simply copy (ctrl+c) the components in which you want to use these motions from Sketch and paste (ctrl+v) them into a Principle artboard. However, remember that asymmetric shadows will disable the auto-center option. Furthermore, Principle doesn’t have rulers.

moving and fading

For motions like stretching, rounding corners, or even text changes, I suggest you recreate these layers in Principle. Don’t worry, text created in Principle will exactly match the text from Sketch!

stretching and rounding

I prepared a 750x1334 artboard. The whole animation shown above took a total of 9 artboards.

Motion Matters

Well-animated interfaces will help the user understand the relation between two screens and prepare her or his mind to smoothly cross from one action to another, but here’s the catch — UI is not a Disney movie, so step wisely when adding successive motions. “There is no such thing as a boring project. There are only boring executions.” –Irene Etzkorn In this part I will show you how I design motions in my prototypes. If you need to fill some blanks related to Principle basics, do not hesitate to watch this Principle basics tutorial.
https://medium.com/elpassion/principle-4-little-tricks-to-share-your-idea-in-an-effective-prototype-e73bd323899c
['Kamil Janus']
2016-11-24 13:21:05.149000+00:00
['UI Design', 'Mobile App Design', 'Design', 'Motion Design', 'Principle']
I’m Haunted by the Man Who Set Himself on Fire
I’m Haunted by the Man Who Set Himself on Fire I began my career as a mental health worker with high hopes, but the system doesn’t set up anyone for success Photo: Saurav Sharma / EyeEm / Getty Images If we could change ourselves, the tendencies in the world would also change. As a man changes his own nature, so does the attitude of the world change towards him… We need not wait to see what others do. —Mahatma Gandhi “Never wear shoes you can’t run in,” a case manager offered as a welcome tip on my first day at a state-funded mental health clinic. “You never know when you’ll be chased or need to chase someone.” I learned that lesson firsthand later the same day. My master’s program in violence and abuse prevention required a specific number of internship hours, so I felt fortunate to snatch up the paid gig working on an Assertive Community Treatment (ACT) team for $11 an hour. I was young and fueled with vigor. I was going to change the world. I was going to make it a safer place. I was going to change lives, have a positive impact on the community, and be a voice for those unable to speak for themselves. I was every person who has ever begun a career in the mental health profession, full of hope and the belief that what I do matters. The only previous experience I had in the field was reading textbooks during my time as an undergrad studying health science, followed by 100 service learning hours spent in the emergency room of a Phoenix hospital. This was back in the day when vodka-soaked-tampon death stories covered the front pages of newspapers. The team focused specifically on engaging court-ordered committals out in the community. Some struggled to follow through with substance abuse treatment, others were deemed a danger. There were ones who just wouldn’t stay on their meds and were required to have them hand-fed and those who suffered so deeply from mental illness it was a challenge to accomplish basic functions. The system of the clinic worked like this: Pile an overload of cases on treatment teams, waste as little time as possible helping patients, and invest as much time as possible on mandated requirements which consisted of, but were not limited to, paperwork, documentation, and clinical notes. We spent more time in meetings talking about patients than we did talking with patients. The clinic’s motto was “Recovery Focused,” but our mental health care system is designed to merely maintain. Because of this, everyone involved suffers. Patients, providers, and the community at large. It was shortly after 8:00 a.m. I was sitting at my cubicle writing clinical notes and snacking on carrot sticks. I was three hours into my shift, after a morning of doing group home checks. Maroon 5 was streaming on Pandora, drowning out the clink-clank of keyboards in the background. I noticed the red light on my desk phone flashing as I promptly pulled my headphones off. It was a page over the intercom system. “Doctor Strong to the front lobby. Doctor Strong to the front lobby, please.” The voice was convincingly calm, but I knew it was an emergency. “Doctor Strong” was code for “help.” Any clinical staff on grounds and not with a patient were mandated to respond. This is how I found myself and four other employees I’d never met surrounding a man who was threatening to set himself on fire. The lobby was full, the check-in line was long, and the security guard looked the most scared out of anyone else in the room. “What’s your name?” I asked, doing my best to maintain relaxed body posture. 
“They won’t give me a refill,” he responded. “I’m Erika. I’m new here,” I informed him. “I don’t know you or your situation but I’d like to. Come outside with me and we can talk about it.” He pulled a large can of lighter fluid from the inside of his jacket, the kind you would use to ignite barbecue briquettes for grilling on a hot summer day, and doused himself with it. The tension thickened as my cohorts inched their way forward, closing in on his personal space. “Wait,” I pleaded with them. I turned back toward the man. His face and clothes shimmered with fluid. I leaned in and made eye contact. “Sir, please. Hand me the can and let’s go outside.” The pain in his eyes shifted into rage. His fists clenched, causing the can to hit the floor. He raised his arm, reaching out toward me. That’s when I spotted the white matchbook in his hand. With a single snap of his fingers he went up in flames. In the days that followed, my team did an investigation. I was responsible for completing and submitting the report to administration since I was present at the time of the incident. In it, I wrote the truth. “CM (case manager) has a caseload of 42, regardless of a state maximum regulation of 36. The ACT team’s investigation has ruled no fault of CM, nor is there supporting evidence proving neglect on the part of CM. A single employee or team is incapable of meeting all mandates and demands. If we truly want to help people there needs to be room in our daily chaotic schedules for one-on-one face time.” It’s been 17 years since I watched flames swallow him, and the image continues to replay in my mind on a daily basis along with the countless other experiences that haunt me when I turn off the bedroom light. I try to shake them but I can’t forget. I eventually graduated with a degree in advanced behavioral science with an emphasis on readiness and response to domestic terrorism. I went on to work for an agency where I supervised my own intervention team. We assisted U.S. Marshals and Homeland Security in tracking violent perpetrators and hiding victims. I left the field, and my career, right when mass shootings in America started to trend. I’d given up nearly two decades of my life, but no one — nowhere — was safer. I can’t help but feel that if our system were proactive, maybe I would have made a difference, but it’s reactive and I have to accept it for what it is. I recently began counseling sessions due to a bout of insomnia. The counselor reviewed my intake form, jotted down a note, and asked, “You must have been through a lot of trauma during your career?” “It was just a job,” I responded. “I had a job to do, and I did it.” “Do you think we should talk about it?” “Nope. I’m good.”
https://humanparts.medium.com/man-on-fire-181f4377c780
['Erika Sauter']
2019-12-16 19:35:45.322000+00:00
['Work', 'Mental Health', 'Relationships', 'Life Lessons', 'Culture']
Coming together as a couple this holiday season
by: E.B. Johnson It is not always easy to balance the chaos of the holidays against the relationships we share with our partners and our spouses. There’s a lot of stress attached to the festive season, and there’s a lot of responsibilities to meet. In order to keep our love solid and on track, we have to minimize the things that get in the way and intentionally make time and space for one another. Reconnect this holiday season and find little ways to make special time for one another. Avoid risky behavior or shady spending, and be open, clear, and honest throughout the ups and downs and inevitable stress. Family and friends are so important during this special time, but so are our partners. Don’t assume that autopilot is good enough. Be there for one another and make the festive celebrations brighter for being together. The holidays are a great time to reconnect. Across cultures and families, this is a special time of year in which many come together and share closeness and memories. The holidays are a great time to connect, and the natural need of winter makes it more possible for us to slow things down and enjoy one another. Within that, though, we can often lose sight of our closest relationships — as we struggle to show up and show out for the friends and family that also form a part of the festive puzzle. Don’t let your intimate relationship slide this year. There is a lot of extra stress and responsibility that comes with holidays celebrations, and they can often disrupt and distract us from putting the care we need into our most important partnerships. Shut down the secretive spending and the invasive in-laws. Map out your hectic schedules and minimize the little aggravations by making one another a priority and focusing on forgiveness. Nurture your empathy and keep your problems (and your time together) a private affair. Even as you rush about to make space for your in-laws and your children — do the same for your partner. Show them love and reconnect by being kind to yourselves and kind to one another, intentionally. Common problems our relationships face during the holidays. The holidays are a beautiful time, but they can become a major stress too. There are a lot of extra obligations that come with this festive time of year, and it can be hard to meet these responsibilities while still balancing a happy partnership. Once we take off the goggles and get real about our holiday struggles, though, we can take focused action together as partners. Risky behavior There’s a lot of celebrating that goes on at this time of the year and with that can come a certain slackness. We get a bit looser with ourselves, our pockets, and even our boundaries during the holidays — and that can be a good and a bad thing. It becomes problematic when risky behavior like drinking and gambling get in the way of your relationship and your responsibilities. We have to be careful and honest with one another about our tendencies to fall into these habits. Concealing spending Money always seems to be an issue in our relationships at one point or another, and that’s especially true during the holidays. One of the most common disagreements that couples encounter is that of concealed spending. Instead of being honest with one another, one partner spends too much and then hides it from the other partner. Disagreements ensue, and the partners can often find that they end up with bigger problems than they had before. 
Little aggravations With stress comes aggravation and hundreds of little irritations that become bigger issues. More often than not, we’re buzzing around trying to do more than we should during the festive period. Things fall between the cracks and we find ourselves brushing up against little setbacks and irritations that aggravate us in major ways. When we’re irritated, stressed, and over-worked, we can find ourselves lashing out and landing in hot water that leads to big fights with our loved ones. Family conflict Family seems to be the center of the season for most people, and that’s a great thing. With that gathering, though, can come some serious stress. Adding more family to the picture can result in increased conflict and meddling that causes major division. It’s important that we set boundaries with our loved ones and make it clear that they aren’t welcome to get involved in our relationship issues. Hectic schedules Perhaps the biggest roadblock that most partnerships encounter over the holidays is that of the chaotic schedule. There’s a lot going on at the end of the year. We have to take care of a lot of things around the home and within our immediate families. Work often demands more of us, and there seems to be no end to the invitations, gifts, cards, and decorating that needs to be done. Things are hectic. When you fail to make time for one another, this chaos gets in the way of relationship happiness. How to make time for one another in the festive season. Don’t let everyone else get in the way of your relationship this year. Make intentional (and mindful) time and space for one another in a way that allows you to reconnect and reaffirm your love. Prioritize your relationship and be empathetic and forgiving with one another. Festive winter days can be a special time. Do what you can to enjoy that with your loved one. 1. Make time a priority Throughout our relationships, it’s important to make time and space for one another. When stress is running especially high, though, or your relationship is going through a challenging time (as it often does during the holiday period) it becomes even more so. You have to make yourselves a priority, and you have to make your relationship a priority too. Just because you give more to others during the festive season doesn’t mean you stop giving to yourselves and the love you share. Sit down together and hash out your hectic schedules. No matter what you have going on, find a way to regularly spend one-on-one time together. You don’t have to go out on big expensive dates. A night on the sofa in front of the TV can be enough. Just ensure that you’re both consciously and intentionally clearing time to spend as spouses and as partners. Use that time to talk, bond, and reconnect over everything you have going on at home, at work, and in your social circles. Be candid with one another. Be present. If your mind is somewhere else, then you aren’t really with your partner. Spice things up and make time interesting and exciting. Try new things and look forward to your time together. 2. Focus on forgiveness How often do you focus on forgiveness in your relationship? Do you easily let things go? Or do you hold on tight until you feel some sort of “justice”? During the festive season, we can find ourselves dealing with a lot of excess stress and responsibilities. Within that, things can fall through the cracks and we can find ourselves dropping the ball and making mistakes. 
That’s why we have to make forgiveness a part of everything we’re doing as a couple during this time of the year. Incorporate more forgiveness into your connection this year. Stop obsessing over all the little aggravations and learn to let things go. When your partner does something small and inconsequential, don’t even waste your time and energy on it. You have enough going on. Walk away, take a deep breath, and count to 10. Give your partner some leeway. Cut them (and yourself) some slack by forgiving them and moving on, rather than allowing big conflicts to stir and fester between you. This is only a temporary moment in time. Look to the new year and all the opportunities you will have to reconnect and bond after this stressful season passes. Is this small irritation really worth a major fight? Always question yourself and forgive where you can. 3. Nurture your empathy Empathy is another important component of every relationship, but it also becomes especially critical when we’re dealing with high periods of stress. Our empathy allows us to connect, person-to-person, and understand one another on a real and emotional level. When you’re empathetic with someone, you are able to put yourself in their shoes and see things from the same perspective as them. It’s a powerful tool to aid in our forgiveness, and one which helps us to reconnect. Nurture your empathy, both within yourselves and with one another. When things get tough, take a step back and try to see things from the other person’s point-of-view. Look at their feelings and try to recall an instance when you felt the same feelings. Put yourself back into that place of sadness, grief, or insecurity. The more you can empathize with one another, the easier it becomes to forgive and remained connected through the hardship. Instead of approaching one another with hostility or aggravation, approach one another in this form of compassionate understanding. We are all just doing the best we can in a very confusing state of affairs and time of the year. Don’t pile on to the pressures by punishing someone needlessly. 4. Keep it a private affair The holiday season is one that is generally packed with all types of events, get-togethers, and family gatherings. And while this festive period is shaping up to look a lot different from any we’ve known before, it’s still proving to be over-scheduled and filled with all kinds of social obligations and responsibilities. Despite all this extra family time, it’s important that you both keep your issues to yourself and avoid involving anyone else. While it can be tempting to garner the support of nearby friends and family, don’t involve others in the conflicts that you have brewing between you over the holidays. Unless your life is in absolute peril, you and your partner are responsible for getting on the same page and finding peace. Keep your conflicts a private affair this holiday season. If you can’t avoid the head-butting, and apologies and empathy don’t work — get a little space from one another and process your own feelings without getting others involved. Allowing outside influences into your disagreements only muddies the waters and makes it harder to get back on track. Sort out your own emotions and issues before you run to guests and loved ones. 5. Be kind to yourselves Inside of our relationships we can often lose sight of ourselves, and that’s especially true during the holidays. 
We’re so busy running after everyone else’s happy experiences and gift lists, that we often don’t take any time at all to nurture our own physical and emotional bodies. This impacts our interpersonal relationships, which in turn feeds back to the way we feel inside. Want to be happier as a couple? Be happier on an individual level. Be kind to yourself this holiday. Don’t forget to take a few minutes each day to remind yourself that you’re worthy and capable. Give yourself a hug and celebrate at least 3 things you’ve done well each and every day. Do little things for yourself that allow you rest and recharge your emotional battery. Journal when you’re feeling stressed. Take a bubble bath and listen to some mindful meditation. Go on a walk, a hike, a jog. There are thousands of small, affordable, creative ways in which to give yourself a little boost. It will make you a more confident, fulfilled, and joyful partner — which is helpful during any time of the year. Putting it all together… The holidays are a special time in which we get to reconnect with one another and our sense of wonder and joy. That extends to our intimate relationships, in which we can use this special time to tighten our bonds with one another. Along with this festive period comes unique stressors, though, which can be tricky to overcome as a couple. In order to do that, we have to remain committed to one another and the lives we’re seeking to build. Make time for one another a priority. Carve out regular windows in which you can both reconnect and have fun with one another. Focus on forgiveness. It’s a stressful time for everyone. Be understanding and employ your deeper sense of compassion and empathy. When your partner or spouse screws up, try to see things from their point of view and seek to see where they might be coming from. Although you may have more support in the shape of family and friends around you, don’t put them in the middle. Unless you’re in danger, keep your problems to yourself and seek to resolve your own issues before you involve others. The holidays should be a time of heightened joy and limited conflict. Pick the battles that matter and remember to be kind to yourself. The happier and more complete you feel, the better you will be as a partner no matter the season.
https://medium.com/lady-vivra/coming-together-this-holiday-season-1fa772ff89a4
['E.B. Johnson']
2020-11-22 07:02:54.230000+00:00
['Relationships', 'Nonfiction', 'Self', 'Dating', 'Marriage']
Big Data on the Road
Getting from point A to point B has been one of humanity’s greatest preoccupations throughout history. While we’ve developed new methods of transportation such as railroads, cars, trucks, and airplanes, they never seem to be fast enough. Big data could make transportation even easier by making it possible to build systems that recognize traffic flows and respond quickly without any human intervention.

Traffic Lights

Take the humble traffic light. These things not only keep us safe, but also direct the flow of traffic in an orderly way to get us where we’re going. The only problem is when people aren’t quite so orderly. A lot of these lights are programmed in isolation based on what an engineer thought would be “normal” traffic. If you’ve been stuck bumper-to-bumper after a sporting event or concert, then you’ll know how bad this can be. A traffic system using big data can eliminate these headaches. A system could monitor the flow of traffic on the roads. It could adjust the flow of traffic based on the conditions it sees. For example, in light traffic it could use the standard timings. In heavy traffic, it could keep the green lights going longer in a certain direction. Los Angeles has already synchronized its traffic lights.

Connected Cars

A number of new cars are being sold as “connected cars” with an internet connection. Most of the time, they’re used to sync with smartphones to play music over the car stereo or serve as a hands-free device for phone calls. They also have a lot of diagnostic information on the engines and can report back to mechanics when the engines need maintenance. A smart traffic control system can make use of these connected cars. The system could query the car and see that its engine is idling, or that the car is accelerating and braking repeatedly. If it notices a lot of these at once, the system will know that there’s a traffic jam. The system can start adjusting the timing of traffic lights, once again without the need for an engineer to watch the traffic and guess light timings that might work. The system will get immediate feedback and be able to react in real time.

Pollution Monitoring

Many areas have installed monitors to track air quality. With a warming planet, we’re going to have to encourage people to keep the air as clean as possible. While air quality monitors are useful for seeing what the atmosphere is doing, just like traffic lights, they’re all isolated from each other. A big data system can use this information to track the most heavily polluted areas and then help authorities take action. A high amount of particulates in an area could mean that a lot of people are driving. Perhaps the area isn’t served by enough transit. Maybe the roads can’t support enough cars. Planning agencies could convince bus companies to run more routes or local governments to upgrade their roads.

Seeing Patterns

The biggest advantage of big data is its ability to sift through billions of data points, including car trips, traffic light changes, and pollution monitors, to see larger trends much more easily and efficiently than a human can. With big data, planners will be able to have a bigger view of transportation systems as a whole: cars, trains, light rail, buses, and even pedestrians. They’ll be able to see who’s going where, and plan for more capacity. They’ll also be able to see who’s using which method of transportation. If an agency notices a lot of people driving when they could be using transit, they’ll want to know why.
They can propose fare reductions on days when the sensors detect a lot of pollution. For example, an agency could see that the route from one city to another is choked with commuters. An agency could propose a new artery or transit system to take off some of the load. They’ll be able to do this because they know exactly where people’s journeys are beginning and ending. They could track cars using devices that count the number of cars, traffic cameras, and even connected cars. The city of San Diego uses big data to track who’s using smart cards on its buses and light rail system to cut down on fare evasion and to understand where passengers are going. When they build a new road or transit stop, they’ll know that this is exactly what commuters need. In places where big transit projects are up for voter approval, they might have a better chance of getting approved because the proposals will be based on the actual data behind the need, rather than political pull. Conclusion Big data can make our lives on the road much easier by tying together isolated, local traffic patterns and integrating them into a coherent view, letting planners make the best possible decisions in the future.
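As a rough illustration of the adaptive signal timing described above, here is a small Python sketch that shifts green time toward whichever direction reports the longer queue, whether that estimate comes from road sensors or from connected cars. This is a toy model, not any city’s real control system, and the base timing, cap, and seconds-per-car figure are arbitrary assumptions.

```python
# Toy adaptive traffic-light controller (a hypothetical sketch, not a real
# deployment). Queue lengths would come from road sensors or connected cars.

BASE_GREEN = 30      # seconds of green under "normal" traffic (assumed)
MAX_GREEN = 90       # upper bound so cross traffic is never starved
SECONDS_PER_CAR = 2  # rough extra green time granted per queued vehicle

def green_times(queue_ns: int, queue_ew: int) -> tuple:
    """Return (north-south, east-west) green durations in seconds, giving
    extra time to whichever direction has the longer reported queue."""
    extra_ns = min(MAX_GREEN - BASE_GREEN,
                   max(0, queue_ns - queue_ew) * SECONDS_PER_CAR)
    extra_ew = min(MAX_GREEN - BASE_GREEN,
                   max(0, queue_ew - queue_ns) * SECONDS_PER_CAR)
    return BASE_GREEN + extra_ns, BASE_GREEN + extra_ew

if __name__ == "__main__":
    # Light, balanced traffic: timings stay close to the standard cycle.
    print(green_times(3, 4))   # -> (30, 32)
    # A concert lets out onto the north-south road: favor that direction.
    print(green_times(40, 5))  # -> (90, 30)
```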
https://medium.com/the-ramp/big-data-on-the-road-f9559b492f41
['Jim Scott']
2016-07-13 12:31:03.259000+00:00
['Transportation', 'Data Science', 'Self Driving Cars', 'Hadoop', 'Big Data']
CUMMULATIVE MEAN ALPHA(CMA) : Deriving the possible alphabet sums for a parent alphabet
Simple framework analogy of CMA

There are alphabets derived from their parent’s alpha position. These alphabets sit in their own distinct tuples, with the sum of their numerical representations giving the parent alphabet’s representation. For example, the alphabet at position 6, which is F, has its child alphas (its CMAs) as [(0,F),(E,A),(D,B),(C,C)], where 0 denotes that the letter F is the original alphabet from which the other tuples were derived. When the letters of each of these CMA tuples are added, their numerical sum points to the alphabet F on the alphabetical table.

MATHEMATICAL MODEL OF CMA
Model of CMA

ALGORITHM USING PYTHON
Algorithm for Cummulative Mean Alpha

RESULTS
Cummulative Mean Alpha for P

CONCLUSION

We could generate cummulatives of n tuples for a given alpha; I believe it is possible, and it can be applied in some cryptographic systems where we disguise the main alphabet with its n-tuple set. We hope to do more research concerning CMA.
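Since the algorithm itself appears above only as a figure caption, here is a reconstruction in Python based on the description and the worked example for F. It assumes the letters A to Z sit at positions 1 to 26 and that "0" marks the parent’s own tuple; if the original algorithm differs, treat this purely as an illustration of the idea.

```python
import string

ALPHA = string.ascii_uppercase  # A..Z at positions 1..26 (assumed from the F example)

def letter(position: int) -> str:
    """Letter sitting at a 1-based alphabet position."""
    return ALPHA[position - 1]

def cma(parent: str) -> list:
    """All CMA tuples of `parent`: pairs of letters whose positions sum to
    the parent's position, led by the ("0", parent) tuple for the parent."""
    n = ALPHA.index(parent.upper()) + 1  # the parent's alphabet position
    pairs = [("0", parent.upper())]
    pairs += [(letter(n - k), letter(k)) for k in range(1, n // 2 + 1)]
    return pairs

if __name__ == "__main__":
    print(cma("F"))  # [('0', 'F'), ('E', 'A'), ('D', 'B'), ('C', 'C')]
    print(cma("P"))  # nine tuples, each pair of positions summing to 16
```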
https://medium.com/ijost/cummulative-mean-alpha-cma-deriving-the-possible-alphabet-sums-for-a-parent-alphabet-94894ae98d53
['Kingsley Izundu']
2018-06-12 08:47:10.730000+00:00
['Algorithms', 'Encryption', 'Cyber Security Awareness', 'Cryptography', 'Google']
Let Jealousy Fuel Your Sex
On Jealousy & Romance

Personally, jealousy has always been an interesting emotion for me to explore, and it’s the one that has taught me the most about myself. I’ve experienced both sides of jealousy, as the purveyor and as the receiver. I’ve used jealousy as a tool to gauge interest in the opposite sex, especially in the early days of dating. I’ve been the jealous, possessive girlfriend who forbade her boyfriend from having a MySpace account or having any friends of the opposite sex. I then proceeded to date someone who put my jealousy and possessiveness to shame, by throwing out everything in my wardrobe that was form-fitting and semi-revealing, and prohibiting me from having any friends, regardless of gender. Needless to say, neither of those relationships was healthy. My self-esteem was at an all-time low, and I held the illusory belief that the amount of love one felt mirrored the amount of jealousy he or she felt. So when my ex-boyfriend got upset about my wearing a short skirt, it sent my dopamine receptors into a frenzy because I interpreted his reaction as a signal of love. Some might say that jealousy is an intrinsic part of romantic love, and this is true to the extent that most of us may not achieve the kind of enlightened, unconditional love we aspire towards. But even so, jealousy has less to do with the love we feel for our partners, and more to do with the love, or lack thereof, we feel for ourselves. At least, that’s the lesson I came out of those two relationships with. Jealousy has its roots in fear, the fear of losing what we have — to an other — and desire, the desire to protect what we have with all of our might. In romantic love, “I” gradually becomes “we” as identities overlap and merge, some couples more than others. Our lovers become an extension of us, and we of them. We, often unknowingly, place our self-worth into the arms of our lovers, looking to them for words of affirmation and criticism, losing some of our “selves” in the process. “When we put all of our hopes in one person, our dependence soars,” writes Esther Perel. While most of us do not give up total control of our emotions and sense of self-worth to our lovers, we give up more than we think. When much of our identity and self-worth is dependent on another person, it’s no surprise that we experience a sort of identity crisis when our lover isn’t there to reassure us of our worth. This becomes even more pronounced when “an other” enters the picture — because our sense of worth is so fixated upon what our partner thinks of and feels towards us, the threat of him or her liking “an other” more than us becomes too much to bear. It would mean this other is better and more valuable than we are. “We feel most threatened where we feel least secure,” Perel writes, and when we don’t have a sense of security in ourselves or in our relationship, this other becomes a gargantuan threat, reminding us of just how much we have to lose. As my self-worth and self-love grew, the intensity of jealousy I experienced and felt in any given moment diminished and became easier to contain. I’m now somewhere towards the middle of that jealousy spectrum. Whereas in the past, the strength of it would have driven me to act out and behave in a deleterious way, I can now identify the emotion as it arises, acknowledge the fear of loss and the desire to hold on, and yield to its quiet call of introspection.
https://medium.com/sumofourparts/let-jealousy-fuel-your-sex-b9ce59bef9ed
['Renee Chen']
2020-02-07 13:58:55.801000+00:00
['Relationships', 'Sexuality', 'Life', 'Self', 'Psychology']
Bringing the best out of Jupyter Notebooks for Data Science
3. Notebook Extensions

Extend the possibilities

Notebook extensions let you move beyond the vanilla way of using Jupyter Notebooks. Notebook extensions (or nbextensions) are JavaScript modules that you can load on most of the views in your Notebook’s frontend. These extensions modify the user experience and interface.

Installation with conda: conda install -c conda-forge jupyter_nbextensions_configurator

Or with pip: pip install jupyter_contrib_nbextensions && jupyter contrib nbextension install # in case you get permission errors on MacOS: pip install jupyter_contrib_nbextensions && jupyter contrib nbextension install --user

Start a Jupyter notebook now, and you should be able to see an NBextensions tab with a lot of options. Click the ones you want and see the magic happen. In case you can’t find the tab, a second small nbextension can be located under the Edit menu. Let us discuss some of the useful extensions.

1. Hinterland
Hinterland enables a code autocompletion menu for every keypress in a code cell, instead of only calling it with the Tab key. This makes Jupyter Notebook’s autocompletion behave like other popular IDEs such as PyCharm.

2. Snippets
This extension adds a drop-down menu to the Notebook toolbar that allows easy insertion of code snippet cells into the current notebook.

3. Split Cells Notebook
This extension splits the cells of the notebook and places them adjacent to each other.

4. Table of Contents
This extension collects all running headers and displays them in a floating window, as a sidebar, or with a navigation menu. The extension is also draggable, resizable, collapsible, and dockable.

5. Collapsible Headings
Collapsible Headings allows the notebook to have collapsible sections, separated by headings. So in case you have a lot of dirty code in your notebook, you can simply collapse it to avoid scrolling through it again and again.

6. Autopep8
Autopep8 helps to reformat/prettify the contents of code cells with just a click. If you are tired of hitting the spacebar again and again to format the code, autopep8 is your savior.
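If you would rather script this setup than click through the Nbextensions tab, the same CLI commands can be driven from Python. This is only a sketch: the require paths such as hinterland/hinterland and toc2/main are assumptions that can vary between versions, so check the exact names listed in your Nbextensions tab before enabling anything.

```python
# Minimal sketch: run the same jupyter CLI commands from Python so the
# extensions discussed above can be set up reproducibly.
# The require paths below are assumptions; verify them in the Nbextensions tab.
import subprocess

def run(*args: str) -> None:
    print("$", " ".join(args))
    subprocess.run(args, check=True)

if __name__ == "__main__":
    # Same install step as the pip route shown above.
    run("jupyter", "contrib", "nbextension", "install", "--user")
    # Enable a couple of the extensions covered in this section.
    run("jupyter", "nbextension", "enable", "hinterland/hinterland")
    run("jupyter", "nbextension", "enable", "toc2/main")
```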
https://towardsdatascience.com/bringing-the-best-out-of-jupyter-notebooks-for-data-science-f0871519ca29
['Parul Pandey']
2018-12-21 04:57:36.724000+00:00
['Data Science', 'Towards Data Science', 'Jupyter Notebook', 'Programming', 'Ipython']
Co-Imagination
A new space for Innovation and Change For almost a century the Bauhaus movement has inspired purpose in the design community. Founded with the mission of reclaiming the soul and spirit of arts and crafts practices in the Industrial Revolution, the Bauhaus Manifesto proclaimed: “Let us strive for, conceive and create the new building of the future that will unite every discipline, architecture and sculpture and painting, and which will one day rise heavenwards from the million hands of craftsmen as a clear symbol of a new belief to come.” Walter Gropius, 1919 As we approach the 100-year anniversary of the Bauhaus Manifesto, designers have new opportunities for redefining our mission. The scale and complexity of the current technological and information revolution has presented us with challenges that transcend those that inspired the mission of the Bauhaus. New Niches Two types of experiences shaped my ideas about the future role of design: first, ethnographic research around the world; second, our clients’ challenges in managing change and innovation. Conducting ethnographic research all over the world has given me opportunities to visit people in their homes, to observe their environments and routines, and to converse about what matters to them. I have learned that regardless of their different material cravings, at the core, people are in search of meaning. The artifacts they collect, the rituals they follow, the relationships they build, the gifts they exchange, the stories they tell, and the experiences they seek are all in pursuit of meaning. I have also developed a deep respect for the creativity of ordinary people, who our clients sometimes see merely as consumers. Yet in reality these people are always the true creators of their own experiences. Working with future-focused organizations, especially those clients who have worked with us for ten or more years, I have found that our true value as a research partner is not the insights from individual projects or even the concepts our research helps generate. Rather, our deepest value is in our ability to influence their culture, demonstrating ways of thinking that enable innovation and change across siloes. The insights we gather, synthesis we conduct, and the frameworks and roadmaps we generate through collaborative processes are all part of an overall experience that itself generates inspiration and momentum for change, and has a lasting impact beyond our involvement in projects. Both of these influences (everyday people and our client organizations) have made us recognize the need for a catalyst of change who can help individuals and organizations tap into their potential for innovation and inspire meaningful and purposeful change. Meaning Making A perfect example of design that enables Meaning Making is LEGO blocks. Since 1932 LEGO has consistently enabled creativity around the world with its generic, configurable forms. The forms are not the end product; the final product is what the user creates. By supporting and enabling this creative process, LEGO has stood the test of time, and has been loved across generations. LEGO Enables Meaning Making A New Role for Designers Curiosity for a more purposeful role for designers in our time led me to the book Tools for Conviviality by Croatian-Austrian philosopher Ivan Illich. Illich argues that over-industrialization has turned humans into slaves of the tools that were supposed to serve us. 
As an alternative, Illich envisions a convivial society conceptualized on the belief that “People need not only to obtain things, they need above all the freedom to make things among which they can live, to give shape to them according to their own tastes, and to put them to use in caring for and about others.” People need new tools to work with rather than tools that “work” for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves. I believe that society must be reconstructed to enlarge the contribution of autonomous individuals and primary groups to the total effectiveness of a new system of production designed to satisfy the human needs which it also determines. In fact, the institutions of industrial society do just the opposite. As the power of machines increases, the role of persons more and more decreases to that of mere consumers.” (Illich 1978)

Designers who want to explore their role as enablers of Meaning Making must embrace a new mindset. First and foremost, we need to stop treating people as consumers or users, and instead recognize and involve them as co-creators.

Designing with people

Designers can no longer design from the ivory towers of our design studios. The new space for Design must move closer to where the meaning making happens. The design process must begin with a search for patterns in the culture, behaviors, and imaginations of people and groups, followed by a synthesis of the patterns discovered and co-imagining of future scenarios of meaningful experiences. Observing people in their environments and asking questions can provide designers with valuable insights about the explicit aspects of their lives. But implicit aspects such as unmet and unrecognized needs and hard-to-express aspirations have a critical influence on people’s meaning-making process. To investigate these aspects, designers need to cultivate systematic skills of learning through collaboration that will allow them to:
· Provoke deep introspection about the past and the present;
· Bring awareness to the complex issues that affect people and organizations;
· Pay attention to the metaphors and mental models that guide their understanding; and
· Facilitate constructive dialogue about meaningful alternatives and roadmaps to change.
Additionally, designers need to cultivate continuous foresight about the unobservable, latent forces affecting people and meaning-making, including the cultural, social, behavioral, or technological changes that are likely to influence what people will be able to do and what may be meaningful to them in the future. Involving people as partners in design does not compromise the creativity of the designers. Their creative talent, tools, ideas, and deep knowledge of the process are invaluable in harnessing the unobserved and unexpressed wisdom, energy, and imagination of the untrained.

Co-Creation

The late C.K. Prahalad, who was a professor of Corporate Strategy at the University of Michigan School of Business, said, “Executives are constrained not by resources, but by their imagination.” (Prahalad & Ramaswamy 2004) Through the years, we have learned that successful future-focused organizations are able to thrive in a highly competitive and chaotic marketplace by fueling the imaginations of stakeholders throughout their value chain, thereby nurturing a culture of curiosity, learning, and tinkering with disruptive ideas.
Designers can serve to enable this “Co-Imagination” process by influencing how stakeholders learn and how they act collaboratively. We can organize activities to guide collaborative discovery, synthesis, and change across functional silos within an organization and across the value chain. Designers can also help envision the future social impact of innovations through scenario planning and other design tools, thus serving to provide an ethical lens for building roadmaps. By guiding co-imagination and helping build roadmaps for ethical innovation, this new generation of collaborative designers will enable grounded, purposeful innovation in the organizations they serve.
https://medium.com/sonicrim-stories-from-the-edge/co-imagination-1b0714284583
['Uday Dandavate']
2018-06-06 16:09:09.484000+00:00
['Cocreation', 'Codesign', 'Innovation', 'Design', 'Co Imagination']
I’m Single & I Don’t Want To Bake Things In Mugs
“Alexa, what’s half of 2/3 cup?” Photo by Alex Loup on Unsplash I’m a baker. Not one who gets paid for her efforts, nothing like that. I’m the kind who bakes at home and then Instagrams it because heaven forbid we do things other people can’t praise us for. I bake lots of pies and muffins and experiments with alternative flours because I have the gastrointestinal strength of a wet piece of notebook paper. I do my best. Another vital fact you should know if you’re going to spend your time reading my internet work (though honestly, I’m not that flattered, what else do we have going on), is that I’m single. I am baking for one. Perhaps most especially during a global pandemic but in truth, always. I’ve never baked anything, for anyone, but me. You know I take that back, one time on a group vacation to a cabin in the woods I made a large batch of cinnamon rolls for my friends. As I recall, they were a success, despite the fact that I had to prep them the evening prior while halfway to the bottom of a bottle of Chablis. What does it mean to bake for one person? It means one thing, more than any other: math. Fucking math! There is so much math in baking for one person that if you’d told me back in eighth grade how vital fractions were actually going to be in my adult life I’d have run away to the furthest corners of Alaska to live off grid and let the winter claim me. I made it through the vast majority of my pre-collegiate education with straight As and no boyfriends. “Nothin’ But Vowels” Silver, that was me. But not if you count math. Math was my nemesis. My evil Thanos ready to snap my self esteem into dust in the wind because my sense of worth was attached to my grades because what the fuck else do you have at fourteen?! I cannot do fractions, they are hard. I have no grip on the metric system, I’m American. Have you read a recipe lately and completed the mental contortions required to shrink it down to single person size? If you want to bake anything more than trash can lining, you will need fractions, the metric system, or both. All recipes assume you’re in love. There, I’ve said it. They require ingredient quantities of insulting proportions, knowing full well a single person cannot consume an entire sheet cake without falling ill or leaving it to mold. There are no baking recipes that even dare to validate solo bakers by offering halved or quartered versions somewhere on the page. So after clicking out of no fewer than 57 ads and waiting for 19 goddamned videos to load so I can pause them to scroll past the story of a blogger’s childhood growing up in a citrus grove I still have to figure out what half of three quarters is if I want some goddamned blackberry scones. And no, I shouldn’t have to make the full recipes, for those of you who’ve spent quarantine in proximity to something living other than a feline or an aloe vera plant. I shouldn’t have to use twice the ingredients I need for the one human being I’m feeding. Have you shopped for pantry staples lately? This is a matter of fiscal responsibility! The entire world of baking assumes you own more than two dining chairs and that one of them isn’t buried underneath a pile of books and clothes. It assumes single people will deal with it. We’ll deal with quantities we can’t consume, quantities that remind us it might be nice to have someone to share with. At no time do baking recipes take our feelings into consideration. “Just freeze it!” Fuck you. Breaking down fractions into smaller fractions is a trash activity. 
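For the record, the arithmetic being cursed here is automatable. Below is a playful sketch, not a feature of any recipe site, that uses Python's fractions module to halve the quantities mentioned in this essay (it still cannot halve an egg).

from fractions import Fraction

# Halve the quantities this essay keeps running into.
quantities = {
    'flour (cups)': Fraction(3, 4),    # "half of three quarters"
    'milk (cups)': Fraction(2, 3),     # "Alexa, what's half of 2/3 cup?"
    'butter (tablespoons)': Fraction(1, 2),
}

for ingredient, amount in quantities.items():
    print(f"{ingredient}: {amount} -> {amount / 2}")
# flour (cups): 3/4 -> 3/8
# milk (cups): 2/3 -> 1/3
# butter (tablespoons): 1/2 -> 1/4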
It’s not all as a simple as 1/2 cups and 1/2 tablespoons, people. It gets dark. Even if I was good at math, let’s just pretend, pray tell: HOW DO YOU DO HALF AN EGG? Anybody? Figured that out yet, have you? At least whoever is in charge of butter has taken pity on single people by offering packaging guidelines you can easily slice into and a product shaped into something that’s really quite effortless to work with in solid form. But there are some ingredients that cannot be halved, or quartered, or thirded, or fucking sixteenthed! Do you own a 1/8th measuring cup? Does anyone?? Want to throw some salt in the wound? Because heaven knows we have no idea how much to throw in this cookie batter? Even if you’re fortunate enough in your life to have access to things like KitchenAid mixers and Vitamix Blenders (which I am, they were gifts), those extremely fancy and supposedly well thought-out gadgets hate single people too! Try whisking the chilled part of a mini can of coconut cream into a fluffy vegan whipped topping. You can’t do it. Know why? The mixer can’t reach it! Our appliances themselves are intended to only operate when there are enough ingredients in them for a family of fucking four. It is one insult after another and quite frankly I don’t need this kind of negative reinforcement from things that have cords. My solution to the solo baking conundrum has been a very “screw it” attitude and subsequent loose relationship with measurements. I eyeball my way to just enough muffins for breakfast this week or single-sized loaves perfect for a bruschetta lunch. But don’t think for one second that I nailed it the first time. My recipes are countless attempts and failures and tweaks that land me somewhere in the vicinity of baked goods that don’t make me sad. I honestly don’t know if I’ve ever measured cinnamon. I can admit to that. Someone, somewhere, who I assume was either bored or almost out of ingredients, came up with a proposed solution to solo sweet cravings: Mug Cake. The culinary equivalent of a Live, Laugh, Love sign. Let me be extremely goddamned clear: Mug Cake does not work. Mug Cake has never worked. Mug cake is tasteless soufflé inside a souvenir you got in Utah that’s now hot enough to the touch to require an emergency room visit. And you still have to pull out all of the ingredients from whatever caverns of your fridge and pantry they’re hiding in, and you still have to use all of the measuring spoons. It’s all of the kitchen mess, none of the reward. Don’t forget to clean the microwave! Because it’s fucked now. Mug Cake is what you give the last kid to leave the party when all the little bags of treats are gone. It’s a shitty consolation prize for single people and we deserve so much more than terrible food made to sound cute to distract us from the fact that it tastes like chocolate eggs. Why is solo baking a consolation prize activity? Why is it just assumed that if you’re baking for one that you should have to figure it out on your own? Solo baking recipes are few and far between and they reiterate that no one really gives a shit about single people or our cookie cravings. Mug Cake?? Go fuck yourself! I want right-sized baking recipes and inspiration for people who live alone. Living alone and being single, in my opinion, should be thought of as more than just temporary life phases on our way to an acceptable form of living life: as a couple. My life is valid and when I say I only need six brownies, I mean it. 
While I do look forward to having someone around someday to share an impulsively baked weekend pie with, or perhaps a reason to make more than two pancakes at a time, right now I don’t, and that’s still a completely lovely way to live life even in a very partnership and family-centric society. I’m perfectly happy with the way things are. Until I try to bake something, and then I’m reminded that I’m just a fraction of what’s called for. And I’m half of 1/4 cup of over it. ____________________________________________________________ Shani Silver is a humor essayist and podcaster based in Brooklyn who writes on Medium, a lot.
https://shanisilver.medium.com/im-single-i-don-t-want-to-bake-things-in-mugs-cf6e6fed802
['Shani Silver']
2020-07-08 13:27:37.073000+00:00
['Writing', 'Humor', 'Relationships', 'Food', 'Life']
Best Bet: Bundle Investing
Bundle investing is a term used by many investors who diversify their portfolio by investing in multiple kinds of coins instead of just one. You’ve probably heard of this before, but for those of you who haven’t, bundle investing is largely unavailable in today’s market due to a lack of platforms which give you the opportunity to do so. Well, this is where CoinBundle comes into play, giving you the opportunity to easily and simply invest in bundles of coins. So, what’s the best way to bundle when investing in cryptocurrency? Let’s go over the do’s and don’ts when it comes to bundle investing because it just so happens to be your best bet when investing. This is not financial investment advice. This article will touch on key aspects of how to properly execute bundle investing. In this article What Is Bundle Investing? Well, bundle investing is a pretty straightforward concept. It’s exactly what it sounds like, investing in bundles of coins instead of single ones. CoinBundle gives you the opportunity to do this, by providing users with different bundles of coins to invest in. When you invest in cryptocurrency by purchasing single coins, there’s a tedious process that you have to go through involving registration for an online currency exchange and password maintenance. When you invest in crypto by bundle investing, you actually save yourself all the hard work of going through each and every one of those exchanges, and instead can just buy the bundle in one place. This way, not only is your portfolio diversified, but it’s also extremely easy to manage and track since its performance is all in one place. Each bundle you invest in contains different cryptocurrencies. In fact, some bundles — depending on which one you choose to invest in — contain different tiered coins. By this, we mean that the bundle includes different coins across all market capitalizations. So you can expect some “giants” like Bitcoin and Ethereum to be in the same bundle as smaller and lesser-known coins, again, all depending on which bundle you invest in. Bundle investing also means that you should be able to create your own bundles, right? Well, this feature of customizable bundles is on the way and will eventually be integrated into the CoinBundle platform, giving this type of investing a social aspect. Why waste time and energy going through numerous exchanges to invest in just a few coins when you can just buy one bundle with all of them in just one place? On top of that, bundle investing makes investing social by pinning specific bundles head to head with each other. Bundle investing — as a whole — relies on the user’s willingness to diversify their portfolio, which is one of the smartest things that you can do as an investor. Bundling has already been implemented by many major companies across the globe in order to more effectively sell their products. Thus, it’s about time that bundle investing made an appearance in the world of cryptocurrency, giving a good indication of where the future of finance is headed. Bundle investing is exactly what it sounds like, investing in bundles of coins instead of single ones. CoinBundle gives you the opportunity to do this, by providing users with different bundles of coins to invest in. Why Is Bundle Investing Better? Now that you know what bundle investing is, let’s analyze why this form of investing happens to be your “best bet” when it comes to cryptocurrency. For starters, bundle investing provides any user with a much more simple and straightforward process. 
What makes investing in bundles of coins significantly easier than single coins is the user friendly experience that allows anyone to get involved. Before, if you wanted to invest in altcoins not listed in popular exchanges, you’d have to go through a tedious process of registering with multiple exchanges and keeping track of all your account info. Now, thanks to CoinBundle, all your favorite coins can now be accessed in one place! When you buy ETH, BTC, or any other coin for that matter, you’re basically purchasing fractions (for example, 1.224528157864726124) of the actual coin. As you continue to purchase more coins or even add more money to the original investment, it can get difficult at times to keep track of all your assets. When bundle investing, you can easily just buy 1, 2, or however many bundles you want, and keep track of all your assets in one place. This provides the user with a much easier and simpler experience. Outside of just the user friendly experience, bundle investing also means that you’ll be diversifying your portfolio when investing, which is always a beneficial thing for any person. Why invest in a highly volatile market with just a few specific coins when you can increase your own security by spreading around your money across multiple kinds of coins? In fact, there are probably tons of coins you’ve been interested in purchasing but haven’t done so because you don’t know where to go. Well, now all of your diversification needs are solved. When purchasing a bundle, 1.54 bundles, or however much of a bundle you want to purchase, you know that all of your cash positions will have a significantly higher chance of being safe from sharp market movements. Let’s face it, the market follows giants like Bitcoin, but just like in any other market, there are always some coins that will move against the market. So, in the event that most coins see drops in prices across the board, you might also just have invested in a coin that’s moving in the opposite direction. Thus, another benefit of bundle investing has to do with the diversification of coins that you purchase through each bundle. CoinBundle is the easiest way for people to invest in cryptocurrencies responsibly. We take all the hard work out of investing, letting you build your portfolio according to your needs. Bundle investing is more user friendly, straightforward, and efficient than other investing strategies. How Can I Start? As mentioned before, there is currently a lack of platforms which give users the ability to invest in a bundle of coins, which is why you probably haven’t heard of too many people doing this. Well, this is where CoinBundle comes into play. Using CoinBundle, you can maximize your gains through investing in a wide variety of coins all in one bundle. Don’t worry about having to keep track of all your passwords for every exchange that you’re registered for, as bundle investing gives you the liberty of viewing your entire portfolio all in one place. For those who’d like to get a head start on bundle investing, you can start by doing your own research on specific coins which you would like to invest in one day. Keep track of which coins are the most volatile and which coins tend to be more stable over time. Using technical analysis, you can even assess price movements for your favorite coins to determine whether or not you want to invest in that coin. 
Thus, once you have all your favorite coins listed in one place, you can use CoinBundle to identify where a majority of the coins which you want to invest in lie. All you have to do from there is purchase the bundle with the most shared coins and you’re all good to go. What you don’t want to do is foolishly begin investing in single coins and keep track of them to call yourself a “bundle investor.” In reality, you’re just investing in single coins through different exchanges, which will become extremely difficult to manage as time goes on. Be patient and get a head start by doing your own research for potential coins to invest in. Again, once you follow these steps, you’ll be ready to start bundle investing through CoinBundle. You can start by doing your own research on specific coins which you would like to invest in one day. Keep track of which coins are the most volatile and which coins tend to be more stable over time, then buy those coins in bundles provided by CoinBundle. Do’s & Don’ts So now that you’re aware of what bundle investing is, what that entails, and how to start, let’s go over some tips on how to maximize your investing experience. DO: Develop a method to track all the coins in your bundle to make sure that one coin isn’t ruining everything. What most people will do is judge the performance of a bundle based on its overall returns, but further examining the individual coins which make up the bundle will allow you to make better investments when choosing in the future. DON’T: Buy and sell bundles frequently. Just like with any other investment, you have to wait and be patient with each new cash position you introduce. Instead of continuously buying new bundles, try adding more money to your current bundles. By introducing more cash positions, you have a safer overall investment and more chances of seeing higher returns. DO: Invest in different bundles. If you invest in multiple bundles but they mainly have overlapping coins, you aren’t maximizing your diversification. In fact, bundle investing is all about getting involved with as many coins as possible, so invest early and often in different kinds of bundles. DON’T: Check your investment portfolio everyday. Investments should be made without any emotion, focusing solely on the facts and statistics. If you’re following the stock market and your account balances on a daily basis, you’ll be riding a roller coaster of short-term emotions that not a lot of people can handle. Even worse, you’re increasing the likelihood of making a spontaneous decision to sell everything. Find a good pace at which you want to check your portfolio without causing any stress for yourself. Conclusion Bundle investing has revolutionized many industries by changing the way assets are bought and managed, so isn’t it time that cryptocurrency does the same? CoinBundle is making investing in crypto accessible and easy for everyone, utilizing bundle investing to do so. If you can remember to follow the advice given in this article, you’ll become a bundle investing pro and maximize your crypto-investing performance.
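As a companion to the first "DO" above, here is a small sketch of what tracking per-coin contributions inside a bundle could look like. The coins, weights, and prices are invented for illustration and are not CoinBundle data or an official CoinBundle API.

# Hypothetical example: judge a bundle by its parts, not just its total return.
# All figures are invented for illustration.
bundle_weights = {'BTC': 0.50, 'ETH': 0.30, 'ADA': 0.20}      # fraction of the bundle in each coin
entry_prices   = {'BTC': 6500.0, 'ETH': 220.0, 'ADA': 0.080}  # price when the bundle was bought
current_prices = {'BTC': 6240.0, 'ETH': 250.0, 'ADA': 0.072}  # price today

def per_coin_returns(weights, entry, current):
    """Return each coin's own return and its weighted contribution to the bundle."""
    report = {}
    for coin, w in weights.items():
        coin_return = current[coin] / entry[coin] - 1.0
        report[coin] = {'return': coin_return, 'contribution': w * coin_return}
    return report

report = per_coin_returns(bundle_weights, entry_prices, current_prices)
bundle_return = sum(r['contribution'] for r in report.values())

for coin, r in report.items():
    print(f"{coin}: {r['return']:+.1%} on its own, {r['contribution']:+.1%} of the bundle")
print(f"Bundle overall: {bundle_return:+.1%}")

Reading the per-coin column makes it obvious when a single coin is quietly dragging down, or carrying, an otherwise healthy-looking bundle.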
https://medium.com/coinbundle/best-bet-bundle-investing-724853ca576b
['Coinbundle Team']
2018-10-10 22:49:16.458000+00:00
['Investcoinbundle', 'Beginnersoinbundle', 'Cryptocurrency', 'Investing', 'Startup']
Digital Implications: What Have We Unleashed?
I’ve recently celebrated 25 years in the digital industry. That’s a long time. It began with a 1.44MB disk and 4 colors and now… Your time is valuable, so let’s get to the key insights. What happened to the web and the Fairness Doctrine? Why do Terms of Service and User Agreements matter? Let’s take a step back. In the early days of the web, we dealt with issues such as intellectual property, copyrights, etc. This is when “framing” content was so controversial. Do you remember those days? We had very little to worry about because we didn’t know what the next evolution looked like, and with limited bandwidth, the personal effort to access content was a time consuming and relatively low return experience; but hey, we were on the world wide web and were pioneers. Jump forward 25 years and we can now look back and see that we’ve “unleashed the beast.” Let’s start with our last election cycle and what we now know from Facebook, Cambridge Analytica, Google and the rest of the media players (i.e. Huffington Post, CNN, Fox News, MSNBC, Daily Beast, etc.). So, I’ll begin with the Federal Communications Commission (FCC). Yes, the agency we either loathe or love. Way back in 1949, The FCC introduced the fairness doctrine. This was a policy that required holders of broadcast licenses to present controversial issues of public importance through a lens that was “honest, equitable and balanced” while allowing the airing of opposing views on those issues, especially in context of political discussions. As the Washington Post article from 2011 noted Broadcasters had an active duty to determine the spectrum of views on a given issue and include those people best suited to representing those views in their programming. Additionally, the rule mandated that broadcasters alert anyone subject to a personal attack in their programming and give them a chance to respond, and required any broadcasters who endorse political candidates to invite other candidates to respond. However, the Fairness Doctrine is different from the Equal Time rule, which is still in force and requires equal time be given to legally qualified political candidates.¹ I’ll date myself, but do you remember when local broadcast stations had a “commentary” with a disclaimer that the views expressed were not those of the network? In 1987, the FCC eliminated the rule, and in 2011 the policy was removed from the Federal Register. And with the elimination of the rule coupled with the growth of an “always on” digital experience across platforms, we now have an incredibly fractured and biased media. Deciphering between what is biased, unbiased, accurate, inaccurate or fraudulent and truthful is now nearly impossible. Even sites such as Snopes, MBFC, and FactCheck.org, all focused on ensuring accuracy in content, have demonstrated their willingness to allow for bias. A “license” was once required to operate as a media company. Today, the blending of truth, deception, untruth, opinion, bias and fraud has effectively become the status quo with the responsibility being placed on the individual consuming the information to determine the validity. Let’s dig deeper on this one. The challenge was — is this information accurate comparatively to other journalists writing on the same or a similar topic? Now, as we’ve become well aware through the Cambridge Analytica controversy, it’s no longer “easily” possible to get to “truth” without bias. The content you see is based on what an algorithm is programmed to present to you. 
As we move further into AI and ML, these algorithms, once developed, will evolve on their own, further deepening the segmentation of what each of us view, ultimately defining our “understanding” of subjects. This understanding becomes our perception and opinion. Think about this for a moment. Is Facebook a media company? Absolutely. Is Google a media company? Absolutely. Having said this, as companies focus on social policies, campaigns and outreach, when do they cross the line into media companies? How different is Facebook with Instagram or Snap Inc. with Snapchat from Viacom or Liberty Media? Keep reading and you’ll get to thoughts, perspectives and maybe answers — *if you agree that they’re not biased. I had to add that disclaimer. Whether you agree or disagree with regulation, the current situation is clearly not optimal, and self-regulation has further exacerbated the challenge. Left unchecked, whether individual or corporate-owned, consolidated influence is growing. The power of Facebook and Google, through their algorithms coupled with a large-scale following of individuals that “influence” perspectives represent challenges that legislative bodies haven’t previously encountered. Any individual can become a “broadcaster” with nearly unlimited reach. The reality is, left unchecked/unregulated the core underpinning of the Internet — democratization of information, becomes its own achilles heel as what is “truth” becomes more difficult to ascertain and bias becomes the norm. Ironically, self-governed platforms such as Wikipedia have suffered from this issue from their inception. Let’s build on the issue of truth and bias with the issue of why terms of service and user agreements matter? The latest issue with FaceApp serves as a cautionary tale for why these matter and why individuals need to take privacy seriously. Companies can be nefarious in their terms of service and user agreements. Unfortunately, buried deep within the pages of a legal agreement between yourself and the site or app you’re using are terms that in any “normal” course of disclosure you would never agree to accept. Cookies are simple — yes or no. Go to incognito mode in Chrome, etc. It’s the complexity of how your data is aggregated, combined with third-party data sources (DMPs) and then applied within the experience and across the web. It’s how this data gets used that creates the challenge. It’s also the fact that in order to participate in digital experiences, in many instances, you have to forfeit your rights to your behaviors and your personally identifiable information (PII). Years ago when we built sites or apps that were focused on kids we had to ensure we were COPPA compliant — https://bit.ly/2O39U68. Today, Instagram, Snapchat, Hipstamatic, Vigo, TikTok, Imgur, etc. all essentially skirt or violate COPPA by allowing for targeting of ads and sponsored content through influencers. If you look at the fine print in the user agreements, you’ll be surprised to learn what all companies have “legally” defined is within their scope of “ownership” and what they can do with your data to monetize their business. Seriously, take a few minutes to read the current policy from Instagram. It’s clear that self-governance isn’t working and unfortunately, regardless of political affiliation or perspectives, when profits are involved, companies without constraints and ethical guidelines, will collect and use data in a manner that’s inconsistent with what is an individual’s best interest. 
In some instances, the basic business model doesn’t work without access to this data, which creates a significant issue as it relates to how certain companies can operate without using and selling your information to further the experience that you’ve signed up for. One of the most interesting and frightening issues I’ve experienced is that we’re raising digitally native generations that don’t understand how their data is used and, in fact, are naively willing to give away their personal information in return for access to communities (e.g. Fortnite, Snapchat, Instagram, TikTok, etc.). Terms of Service and User Agreements matter — so read the fine print before you give away your rights to someone like FaceApp. I hope this has been interesting and maybe it inspired a few thoughts on your part. Feel free to share them with me — Bob Morris, [email protected] or on https://twitter.com/digitalquotient. ¹ https://www.washingtonpost.com/blogs/ezra-klein/post/everything-you-need-to-know-about-the-fairness-doctrine-in-one-post/2011/08/23/gIQAN8CXZJ_blog.html?utm_term=.ceb8c1432531
https://uxdesign.cc/digital-implications-what-have-we-unleashed-599c38f9a813
['Bob Morris']
2020-10-05 16:39:51.171000+00:00
['Privacy', 'Social Media', 'Social Media Marketing', 'Digital Marketing', 'Startup']
The Mirror
The Mirror A reflection on being a writer Photo by Windows on Unsplash “The only thing grief has taught me, is to know how shallow it is.” — Emerson Sometimes when I read, I find words looking back at me, those pieces of mirror which I need to pick up for myself. And I do that; words have been nourishing my soul for a long time. Words don't judge me when I pause my thoughts or sit quietly with a blank page. I have met too many poems and I have held too many words, and I think with time we become better at this. We become experts in exposing our flaws; we don't feel awkward when we see that reflection looking back, even when we fail to recognise it. That's what words do to us: they keep abrading those scars which we love to carry, till we start loving them. And our consciousness becomes much more alive; we realise that sadness is just a fleeting moment we learn to survive.
https://medium.com/spiritual-secrets/the-mirror-89b71388234
['Priyanka Srivastava']
2020-09-14 17:03:16.468000+00:00
['Awakening', 'Prose', 'Reflections', 'Spiritual Secrets', 'Writing']
OpenAI should Steal from Robinhood and Give to the Poor
OpenAI should Steal from Robinhood and Give to the Poor It’s time for OpenAI to enter financial markets Robinhood started with noble intentions. To steal from the rich and give to the poor. Unfortunately, like many tales of good intentions, the truth isn’t as appealing as myth. I’m talking, of course, about the trading app, not the historical figure. Ultimately, however, Robin Hood was quite possibly used as a stock alias for thieves. The moral of the story is that we shouldn’t allow names, myths, and folklore to distort the underlying reality. In this case, that thieves can pretend to be men of the people. And Robinhood is certainly a financial thief, of the legal sort. Robinhood started with a fairly radical idea: to allow anyone to buy or sell stocks commission free- $0.00 for any purchase or sale of financial instruments, including options, no questions asked. Traditionally, customers have had to pay $5–$10 every single trade. Zero commissions have a significantly bigger impact on small trading values compared to larger ones. Buying and selling $100 of a stock while paying commission could cost 10–20%, making it virtually impossible to profit off of small trades. In this sense, we can view a positive effect of Robinhood’s push for zero commission as encouraging small account traders to invest in more traditional stocks as opposed to penny stocks, since you don’t have to make as much profit to break even from commission costs. That’s also why the average account value of Robinhood customers is $1000-$5000, compared to tens or hundreds of thousands of dollars held in other brokerages. The idea was that it only costs fractions of a penny for big Wall Street firms to execute a trade, so why not cut out the middle man and charge the direct cost of nearly nothing to the consumer? But alas, there’s no such thing as a free lunch. Robinhood had to make money somehow. How? By selling their customer’s data and encouraging unsophisticated traders to take out leverage so they could lose their money faster. Now, I might sound a little salty. So, to be clear of my biases, I personally lost over $10k day trading options on Robinhood; however, this is not my problem with the app. I probably would have lost that money on eTrade or Interactive Brokers! Instead, my problem is with their business model. Since they don’t charge commission, they need an alternative to cover their costs and make a profit. They primarily do this through two means: Robinhood Gold, and selling customer data to high frequency traders (HFTs). While they technically make some profits by earning interest and rebates, the bulk >40% of their revenue comes from re-routing and allowing high frequency firms to operate their own pools to take advantage of unwitting customers. Logistically, there has to be a reason that HFTs pay for this kind of market control. Partially, this comes from arbitrage of purchasing from public markets at the lowest price and marking up to Robinhood customers. This is ostensibly to provide “liquidity” or “make markets.” But in reality, this is a tax on Robinhood traders. But an even more nefarious reason that HFTs want this level of access is to take the other side of a trade against Robinhood customers. Since most traders are unsophisticated and lose money (according to most estimates, 80% of day traders are unprofitable over the course of a year) market makers and HFTs will gladly take the opposite trade as the average Robinhood trader and will on average be right. 
This is especially true because they can exploit knowledge of Robinhood’s order book and individual customer history to know what kinds of mistakes the average Robinhood trader makes. They can access all sorts of data to backtest their models on. Robinhood also makes money by charging interest for margin accounts. But this only exacerbates the aforementioned problems with HFTs. Since most traders lose money, lending out on margin just means they will lose more money more quickly. I’m not 100% in principle against the concept of zero commissions in exchange for customized market pools that exploit their users. In other words, I don’t necessarily think the government should ban Robinhood’s business model. But we should be clear that HFTs don’t really add much value in terms of liquidity compared to traditional market makers, and by definition are making profits by taking money from Robinhood customers. And since Robinhood customers are on average poor relative to rich HFTs, Robinhood in effect is taking from the poor to give to the rich. To repeat, Robinhood’s business model amounts to taking from the poor to give to the rich, when the initial goal of zero commissions was supposed to reduce income inequality. What Robinhood needs is real competition. If only there was an institution out there with enough money and social consciousness to actually steal from the rich and give to the poor… Oh wait, OpenAI fits the bill! OpenAI has a multi- billion dollar endowment. They have the financial resources to actually help the little guy in the financial world. They recently changed their non-profit status to capped profit (100x initial investment). One of their first commercial ventures is GPT, their not open source super genius NLP algorithm. Sure, it’s not perfect; it recently told researchers to kill themselves. But hey, it’s not easy to create a cutting edge NLP model. I see at least 3 ways that OpenAI could radically alter financial markets in a socially conscious way. First, they could simply analyze financial data and provide free or low cost technical analysis, including real time notifications of asset mis-pricings. AI provides exponentially more returns the more data that is processed. This can be seen by comparing GPT-2 vs GPT-3, where billions of more parameters are analyzed. In fact, OpenAI claims they have only just begun to scratch the surface on the benefits of simply scaling up their models. Financial markets operate under similar circumstances. What you as an individual running Tensorflow will uncover is not even a fraction of what is feasible at scale. OpenAI could test out various algorithms, such as bayesian networks, LSTM models, etc. to see which algos are best at predicting price movements. By directly incorporating complete order book historic data into their models, as well as other economic data, perhaps sentiment analysis, etc., they could achieve orders of magnitude better accuracy than what an individual can do by accessing minute by minute prices. Second, OpenAI could launch their own dark pool fund and become a market maker. Dark pools are a mixed bag. Dark pools came about primarily to facilitate block trading by institutional investors who did not wish to impact the markets with their large orders and obtain adverse prices for their trades. Dark pools are sometimes cast in an unfavorable light but, in reality, they serve a purpose. 
However, their lack of transparency makes them vulnerable to potential conflicts of interest by their owners and predatory trading practices by some high-frequency traders. — Investopedia So, in theory, one can operate a dark pool in a socially altruistic way, hiding orders from predatory HFTs, but increasing market liquidity. OpenAI could operate such a pool. While the lack of transparency might seem anathema to both social altruism and OpenAI’s core principles, there is a difference between transparent operation and hiding the order book from the public. In other words, OpenAI could publicly explain exactly how they match users with capital, while hiding those orders from HFTs. Finally, OpenAI could launch its own technical only trading firm competing with the likes of Medallion Fund. By competing away profits from Medallion the Monopolist (more forthcoming), OpenAI could redistribute from the wealthy to the broader market/average market participant. Not literally a UBI, but by adding competition in the space, they compete away irrationality, making it easier for smaller players. Financial markets are ripe for disruption, especially thanks to AI. It’s time for OpenAI to steal from Robinhood and give to the poor. Gain Access to Expert View — Subscribe to DDI Intel
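The article suggests above that OpenAI could test algorithms such as LSTMs for price prediction. As a sense of scale for what "an individual running Tensorflow" can do, here is a deliberately tiny PyTorch sketch trained on synthetic returns; the architecture, window length, and data are illustrative assumptions, and nothing here is a working trading model.

import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Toy LSTM that maps a window of past returns to a one-step-ahead prediction."""
    def __init__(self, input_size=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next return from the last hidden state

# Synthetic data stands in for minute-by-minute returns; no real market data is used.
torch.manual_seed(0)
windows = torch.randn(256, 30, 1) * 0.01           # 256 windows of 30 past returns
targets = windows.mean(dim=1) + 0.001 * torch.randn(256, 1)

model = PriceLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(windows), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: mse {loss.item():.6f}")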
https://medium.com/datadriveninvestor/openai-should-steal-from-robinhood-and-give-to-the-poor-dcec9c33769e
['The Moral Economist']
2020-12-02 18:22:23.386000+00:00
['Investing', 'Artificial Intelligence', 'Finance', 'Computer Science', 'Economics']
Applied Psychology in Silicon Valley.
Applied Psychology in Silicon Valley. My answers to questions from Arjan Haring of the Persuasion API. Question 1: How do the people in the Bay Area look upon a neurologist/psychologist turned technologist? And do you consider yourself a converted internet cheerleader? I will say that psychologists and behavioral-scientists are seen, more or less, as omnipotent puppeteers. Most people believe that neuroscientists and psychologists have unraveled the human brain, and that if you understand these sciences, you can completely predict and change human behavior. The truth is a bit messier: Yes, we understand a lot about the brain and human behavior. Yes, we are better than most other people at changing behavior and inducing habits. But doing this requires a lot of experimentation and failure. Every group of people and every product is different—the dynamics are different. Because of this, your first attempts at changing behavior in a given group as a behavior change scientist are probably going to be ineffective, since you’re still getting a sense of how individuals operate within the system you’re building, and how all of the variables interact. However, if we are disciplined and thorough, we can begin to understand the driving factors in any given system and do some pretty amazing work. It takes patience, though. It’s not instant. It requires us to gather a lot of data from initial experiments and failures. We can’t just look at a person or a product and predict exactly what our product tweaks will do. As for being an internet cheerleader… I would say that I’m almost the opposite of an internet cheerleader. Most of the products currently being built make our lives easier in ways that are, I think, counterproductive. We are already built to be as lazy as possible. It helps us conserve energy. It also reduces our risk. Habits are built from success—they’re the behaviors that have consistently worked in our environment.But I’ve always been a believer in hard work. I think it keeps us energetic and productive. I’m sure that many of you have spent a vacation on the couch. What was that like? You probably felt tired, lethargic, and out of it most of the time. Sure, it was relaxing. But, it wasn’t invigorating and energizing. We need to expend energy to be energetic and be our best selves. We grow through hard work and stress. So, I’m not one for getting rid of all the hardships and chores we have to do in our lives. I think that it’s good to mow the lawn, take out the trash, and walk to work—these things keep us active and alive. With products like Instacart, Google Shopping Express, etc., we’re enabled to be as lazy as possible for a quite affordable price. Web technology is also getting rid of our need for human contact. No longer do you need to call a restaurant to make an order, you can do it online. No longer do you need to stop by a friend’s place, you can text or Facebook message them. I’ll talk more about this later.
https://medium.com/the-habit/applied-psychology-in-silicon-valley-81d001f0e172
['Jason Hreha']
2017-06-09 19:09:18.900000+00:00
['Behavior Design', 'Silicon Valley', 'Psychology']
Interpretation of HuggingFace’s model decision
Transformer-based models have taken a leading role in NLP today. In most cases, using pre-trained encoder architectures to solve downstream tasks achieves very high scores. The main idea of this approach is to train a large model on a big amount of unlabeled data and then add a few layers on top of it for text classification, coreference resolution, question answering, and so on. However, although such models give cool results, they are still black boxes, while the interpretability of a model is very important for debugging and for understanding how the model makes a decision. A couple of weeks ago I came across a demo from Allen NLP. I found a cool feature there that highlighted the words that influenced the model's decision. I wanted to do the same for my models, but after spending some time looking for a tutorial on how to put a PyTorch model into the Allen NLP Predictor, I hadn't found anything useful. So I decided to rewrite the interpreter in PyTorch, which was not difficult because Allen NLP is built on it, and some parts of the code were taken from here. I will focus mostly on the pieces of code that will help you integrate your own model, and I will omit the explanation of some common and obvious parts so as not to make this article too long. The full code can be found in my GitHub repository.

Some updates:
· Batching support has been added.
· There is no need to make a dataset instance that returns tokens (just a usual dataset that returns token_ids and so on).
· The repository was modified to simplify integrating your model regardless of the task it was made for (classification, token classification, etc.).
· To avoid redundant computations I decided to exclude making predictions separately. This impacts the accuracy of the predictions (only by a few points), as Smooth Grad adds Gaussian noise and Integrated Gradients doesn't include the upper limit of the integral (see the repo).
· Also, I found empirically that the Integrated Gradients algorithm highlights more "expected" words, i.e. words more relevant to the class the model chose.

There are 3 gradient-based algorithms covered by Allen NLP Interpret. I will choose Smooth Grad and a classification task as an example. The idea of this algorithm is simple: we first make a prediction, take the argmax (as we usually do in classification), and, treating it as the ground truth, run backpropagation. Then we sum the gradients for each embedding; but, as was shown in the paper, the values can have outliers, so we compute the gradients a few times while adding Gaussian noise and then take the average. Let's start from the first step — making a prediction:

def saliency_interpret(self, test_dataset):
    # Convert inputs to labeled instances
    predictions = self._get_prediction(test_dataset)
    ...

In my case, test_dataset is an instance of a class that I inherited from PyTorch's Dataset class. I use it to build the dataloader that makes predictions.
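For readers wiring up their own data, here is a minimal sketch of what such a dataset might look like, assuming a DistilBERT tokenizer. The class name, max_length, and padding strategy are illustrative choices rather than code from the repository; the contract assumed by the walkthrough below is simply that each item yields the model inputs (input_ids and attention_mask) together with the corresponding tokens (the updated repository relaxes this, as noted above).

from torch.utils.data import Dataset
from transformers import DistilBertTokenizer

class TextClassificationDataset(Dataset):
    """Hypothetical dataset sketch: yields (inputs, tokens) pairs as _get_prediction expects."""

    def __init__(self, texts, max_length=64):
        self.tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
        self.texts = texts
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        encoded = self.tokenizer(
            self.texts[idx],
            padding='max_length',
            truncation=True,
            max_length=self.max_length,
            return_tensors='pt',
        )
        input_ids = encoded['input_ids'].squeeze(0)
        attention_mask = encoded['attention_mask'].squeeze(0)
        inputs = {'input_ids': input_ids, 'attention_mask': attention_mask}
        tokens = self.tokenizer.convert_ids_to_tokens(input_ids.tolist())
        return inputs, tokens

With batch_size=1, PyTorch's default collate function turns each token list into a list of one-element tuples, which is why the saved tokens are later unpacked with t[0].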
If you feed examples to the model in another way, you can rewrite this method:

def _get_prediction(self, test_dataset, batch_size=1):
    test_dataloader = DataLoader(test_dataset, batch_size=batch_size,
                                 shuffle=False, num_workers=0)
    model_inputs = []
    input_tokens = []
    predictions = torch.tensor([], dtype=torch.float)
    model = self.model.to(self.device)
    model.eval()
    for inputs, tokens in test_dataloader:
        # collect the inputs, as they will be used in _get_gradients,
        # and the tokens, to match them with the method's output
        model_inputs.append(inputs)
        input_tokens.append(tokens)
        input_ids = inputs.get('input_ids')
        attention_mask = inputs.get("attention_mask")
        input_ids = input_ids.to(self.device)
        attention_mask = attention_mask.to(self.device)
        with torch.no_grad():
            outputs = model(
                input_ids=input_ids,
                attention_mask=attention_mask
            )
        # softmax is torch.nn.functional.softmax; the probabilities are moved
        # to the CPU before concatenating, since the accumulator tensor lives there
        predictions = torch.cat((
            predictions,
            softmax(outputs, dim=-1).cpu()
        ))
    return predictions, model_inputs, input_tokens

The model's outputs are logits, so we apply a softmax to get probabilities from them. I also collect model_inputs (token ids), which will be used in the next steps, as well as input_tokens (the tokenized text) returned by the dataloader. The reason I save tokens is that transformer models use subword algorithms (such as BPE) to reduce the vocabulary size, and the tokenizers split some words into word pieces, so one word is not always equal to one token. For example:

>>> tokenizer.tokenize('transformer model')
['transform', '##er', 'model']

Then we apply Smooth Grad to each example (I didn't investigate how to do it in batches):

...
predictions = self._get_prediction(test_dataset)
instances_with_grads = dict()
for idx, (prob, inp, tokens) in enumerate(zip(*predictions)):
    # Run smoothgrad
    label = torch.argmax(prob, axis=0)
    grads = self._smooth_grads(label, inp)
...

where the _smooth_grads method implements the algorithm described above:

def _smooth_grads(self, label, inp):
    total_gradients = {}
    for _ in range(self.num_samples):
        handle = self._register_forward_hook(self.stdev)
        grads = self._get_gradients(label, inp)
        handle.remove()

        # Sum gradients
        if total_gradients == {}:
            total_gradients = grads
        else:
            for key in grads.keys():
                total_gradients[key] += grads[key]

    # Average the gradients
    for key in total_gradients.keys():
        total_gradients[key] /= self.num_samples

    return total_gradients

There are some important points in _register_forward_hook and _get_gradients. In the first one, we have to define the embedding layer. Allen NLP has a specific method for this (it covers GPT and Bert models), but I decided to define the layer directly, via keyword arguments or the bert attribute by default:

...
encoder = self.kwargs.get("encoder")
if encoder:
    embedding_layer = self.model.__getattr__(encoder).embeddings
else:
    embedding_layer = self.model.bert.embeddings
...
Meanwhile, my model's class looks like this:

class DistilBertForSequenceClassification(nn.Module):
    def __init__(self, config, num_labels=2):
        super(DistilBertForSequenceClassification, self).__init__()
        self.num_labels = num_labels
        self.config = config
        self.bert = DistilBertModel.from_pretrained(
            'distilbert-base-uncased',
            output_hidden_states=False
        )
        self.dropout = nn.Dropout(config.dropout)
        self.classifier = nn.Linear(config.hidden_size, num_labels)
        nn.init.xavier_normal_(self.classifier.weight)

    def forward(
        self,
        input_ids,
        token_type_ids=None,
        attention_mask=None
    ):
        last_hidden = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask
        )
        # mean-pool the last hidden states instead of taking the [CLS] embedding
        pooled_output = torch.mean(last_hidden[0], dim=1)
        pooled_output = self.dropout(pooled_output)
        logits = self.classifier(pooled_output)
        return logits

In the second one, we need to do the same as we do during training. In my case, I feed the model input_ids and attention_mask to mask out the padding:

...
embedding_gradients = []
hooks = self._register_embedding_gradient_hooks(embedding_gradients)
input_ids = inp.get('input_ids')
attention_mask = inp.get("attention_mask")
outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
batch_losses = self.criterion(outputs, label.unsqueeze(0))
loss = torch.mean(batch_losses)
self.model.zero_grad()
loss.backward()
...

In _register_embedding_gradient_hooks we also need to define the embedding layer as we did before. After normalizing the gradients (so that they sum to 1), we save the gradients, the tokens, the predicted label, and its probability to the dictionary:

...
instances_with_grads["instance_" + str(idx + 1)] = grads
instances_with_grads["instance_" + str(idx + 1)]['tokens_input_1'] = [t[0] for t in tokens]
instances_with_grads["instance_" + str(idx + 1)]['label_input_1'] = label.item()
instances_with_grads["instance_" + str(idx + 1)]['prob_input_1'] = prob.max().item()
...

As a result, we get a dictionary with the tokens and their weights in the sentence. I added a util method to make a visualization. Let's have a look at an example of the output. I trained DistilBert on a subset of the Medium Post Titles dataset with 93 classes corresponding to the categories of the articles. The dataset consists of titles and subtitles, so let's test the model on the title of this article plus the first sentence as the subtitle (you can find the usage example in this notebook). As we can see from the resulting visualization, the biggest impact on the model's decision comes from the [CLS] token, which is used for the Next Sentence Prediction objective during pre-training. The embedding of this token is fed to a linear layer with softmax activation to train the model to predict whether the sentences separated by the [SEP] token are from the same document or not. The interesting point is that I didn't follow the common practice of using this token for classification (I used the mean of the output embeddings). Also, note that the model gives the right prediction. To sum up, I think that these methods are a great tool for analyzing your NLP models and can help you find their weaknesses. There is one more amazing tool — BertViz, which gives you the ability to look inside the model, so I suggest investigating it if you are interested in exploring your models.
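To tie the pieces together, here is a usage sketch that assumes the interpreter is exposed as a class named SmoothGradient with the attributes referenced in the snippets above (model, criterion, stdev, num_samples, device) and reuses the hypothetical dataset sketched earlier. The class name, its constructor signature, and the hyperparameter values are assumptions for illustration; only the saliency_interpret call and the keys of the returned dictionary come from this article.

from torch.nn import CrossEntropyLoss
from transformers import DistilBertConfig

# SmoothGradient is an assumed name for the interpreter class described above.
config = DistilBertConfig()
model = DistilBertForSequenceClassification(config, num_labels=93)

interpreter = SmoothGradient(
    model=model,
    criterion=CrossEntropyLoss(),
    stdev=0.01,       # std of the Gaussian noise added to the embeddings (assumed value)
    num_samples=10,   # number of noisy copies averaged by _smooth_grads (assumed value)
    device='cpu',
)

test_dataset = TextClassificationDataset([
    "Interpretation of HuggingFace's model decision"
])
instances = interpreter.saliency_interpret(test_dataset)

first = instances['instance_1']
print(first['label_input_1'], first['prob_input_1'])  # predicted class id and its probability
print(first['tokens_input_1'])                        # word pieces, including [CLS] and [SEP]
# The remaining key holds the normalized per-token gradient weights used for highlighting.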
https://korenv20.medium.com/interpretation-of-huggingfases-model-decision-9a4100b2fed7
['Vitaliy Koren']
2020-08-14 10:29:57.035000+00:00
['NLP', 'Transformers', 'Deep Learning', 'Hugging Face', 'Visualization']
I Was Heartbroken Witnessing How Unfair I Was Towards My Kitten
I Was Heartbroken Witnessing How Unfair I Was Towards My Kitten The surgery made me realize my fault! Courtesy of the author Back in October — curiously, it feels like it was a year ago — I met a little living miracle. She was the one who adopted me if I want to be fair! To describe the last two months we spent together as magical would not give them enough credit! Most of our moments together are hilarious or affectionate. I’ve always adored making silly noises and giggling at her funny reaction! The last time it happened, I was talking to my dear soul friend Jill Horton. I laughed loudly for some reason, and when seeing her facial expression, my laughter became even louder! I’m pretty sure she’s saying something like, “Why do you need to be that weirdo, you humans? Can you at least warn me whenever you decide to have a silliness crisis?” She’s an enthusiastic, playful, adorably curious, and stubborn kitten. Occasionally, her last attribute could get the best of me. I was finding myself so frustrated with her tendency to test the boundaries — her very favorite game — that I was shocked at the aggressiveness in my tone! I even found myself hitting her gently as my silly way to tell her what she was doing was not acceptable, that Mommy was not happy; hence punishing her for her lack of discipline. I was feeling too bad about myself afterward. Something was off. I was out of my integrity and I knew the root cause. Nonetheless, I wasn’t ready to face the ugly truth. It was another bias and pattern from the residual part of my distorted subconscious program, “I am a dog person; I’ve always had such genuine and immediate connections with dogs; dogs love you unconditionally while cats are selfish and show affection merely when they need something!” Being so engaged in fighting against NPD (Narcissistic Personality Disorder) symptoms didn’t help!
https://medium.com/know-thyself-heal-thyself/i-was-heartbroken-witnessing-how-unfair-i-was-towards-my-kitten-21bb6c2330c6
['Myriam Ben Salem']
2020-12-28 09:26:53.357000+00:00
['Self-awareness', 'Self', 'Pets', 'Love', 'Self Improvement']
Your Curated COVID-19 Reading List
Bored already? No sports, no problem! 1455 has some recommended reading. (Readers and writers like to mingle, and live to support our favorite dining & drinking establishments, but when we’re obliged to stay indoors, that’s when we shine. Take it from us: social distancing provides the quality time to get to know our bookshelves again…) Disclaimer: these are all works I’ve written about, so the list is subjective; on the other hand, no writer would take the time to wax rhapsodic about a book unless they have skin (and soul) in the game, literally as well as aesthetically. Moby Dick Good music and good literature have always seemed to intimidate, or bewilder otherwise open-minded individuals. This is doubtless at least in part due to teachers and critics seeking to justify their own intellectual enterprise by conferring upon art an ivory halo that renders it unreachable by ostensibly average, simple-minded citizens. Rather than regarding, say, jazz music or a 19th Century novel as sacred relics conceived by sullen saints, perhaps it would be beneficial to acknowledge–even endorse–the actuality that most of these works were produced by individuals whose lives were conventional as their creative minds were exceptional. Or, reduced to more practical terms, if jazz music, that greatest of American inventions, is gumbo, the archetypal American novel, with Moby Dick as its progenitor and arguably its apotheosis, is a chowder. This classic American text is also pioneering in its puissant, often sardonic assaults on institutions ranging from the patriarchal status quo, to slavery, to the Puritanical thought-police who cast a long, lamentable shadow on early U.S. history. This book celebrates our itinerant American roots and the notion of positive, peaceful diversity not as an apologetic ideology, but as an empowering, imperative axiom. Melville empathized with the underdog and more important, he understood them — he was one — and his real life experiences help inform the poetic prose that allows these otherwise unrenowned heroes to sing the songs of themselves, proceeding Walt Whitman’s masterpiece by a half-decade. So: a novel that fulfills on almost every conceivable level, a meditation on our individual essence as well as the push and pull of our similitude as human beings adrift in a turbulent universe that not a little resembles the untamed sea. More of that, here. 2. Slaughterhouse Five Slaughterhouse Five, like virtually all of Vonnegut’s novels, concerns itself with one of the oldest — and most perplexingly commonplace — human dilemmas: man’s inhumanity to man. But how does one discuss war, violence, insanity, and injustice (for starters) without either preaching or unintentionally trivializing? This was Vonnegut’s special gift, and why the concept of Billy Pilgrim coming “unstuck in time” is revelatory: the author was not using science fiction pyrotechnics to mask an inability to express his ideas directly, he had actually hit upon a means by which he could communicate what our increasingly disjointed world was like to live in. In this way, Billy Pilgrim is everyman even as everything he describes is unlike anything the average reader is likely to have experienced (walking in the snow behind enemy lines, living through the Dresden firebombing, being abducted by aliens, and being taught an entirely different theory of relativity by those aliens, the Tralfamadorians). 
Vonnegut, of course, was really writing about the ways in which the alienated, often lonely person is affected by the pressure and perversity of life. Never before had hilarity and horror danced on the same page in quite this way. Not surprisingly, people (especially younger people) responded. On the other hand, the fact that Kurt Vonnegut was — and remains — much more popular with college students than adults says more about us than it does about his novels. More, here. 3. The Lost City of Z Fawcett was, around the turn of the 20th Century, as close to a rock star as it came in those days. Had he cared about money or the shallow spiritual payoff of established notoriety, he likely would have lived a long life (he may, in fact, have lived forever). But where people all around the world were fascinated with him, he was fascinated by the unknown and unconquered. And by unconquered, it is crucial to point out that he was not interested in human conquest (and even the pirates who would have claimed they were only after treasure could not deny obtaining that bounty necessarily involved eradicating the Indians who possessed it). Fawcett was uninterested in subjugating the “savage” natives, and the practices of complicated Christian conversion or simple slaughter so common at that time repulsed him. Indeed, one of the many secrets of his almost inexplicable success over the years was an instinctive awareness that respect and humility were more powerful weapons than the ones favored (and utilized) by almost every other white man that stepped foot in the jungle. Certainly, Fawcett knew that if he was able to successfully confirm the existence of “the city of Z”, it would make his fortune and his career. On the other hand, Grann’s reportage makes it abundantly clear that the only magnet pulling him into the dark heart of the Amazon was his insatiable desire to see what others could not find, to know that his intuition was on target. By his own account, he was miserable if unable to continue his work. And if the work was exhilarating and dangerous in equal measure, it was also solitary: Fawcett was blessed withan inhuman constitution, and cursed by having to hire mere mortals to assist him. These unfortunate souls, no matter how ambitious and game, quickly found themselves out of their depth, and the target of Fawcett’s ire when he realized that they could not keep up. In this sense, Fawcett is a truly tragic figure: he was better equipped than anyone else to stalk the improbable; what kept him alive ended up killing him. More, here. 4. The Great Gatsby What Fitzgerald does, with these ostensibly soulless and unpleasant people, is interrogate cause and effect, motive and aftermath, and all aspects of that myth sold to us as the American Dream. He takes this construction and places it on the operating table, dissecting what causes it to breathe, thrive and rot from the inside out. In this single regard, Fitzgerald was more prophetic than his critics can comprehend: he predicted how the roaring ’20s would end and be remembered before they expired. If the people (like Nick) who wind up on the outside looking in see nothing but emptiness, it’s because all vanity, in the end, returns to the ashes whence it sprang. Fitzgerald is not describing anything Ecclesiastes did not say first, if less poetically. In addition, he depicted the way Americans would react to every calamity of the 20th Century: after each debacle, the architects of said crisis waltz away, licking their wounds and counting their cash. 
No amount of dour intuition could have prepared Fitzgerald to imagine that, in the 21st century, they also get paid to scold the complicit masses (receiving book deals, going into politics or appearing on TV –the lucky ones doing all three). Think about the cowards in Congress today, who lustily passed legislation (and deregulation) that hastened the latest crash, now pushing austerity (but not higher taxes!). It isn’t that their methods or strategies are predictable (they are), it’s the narrative they employ that is so quintessentially American: cynicism covered in money, preaching solidarity. Much, much more, here. 5. Stephen King: 1975–1979 Stephen King has been a bit more defiant in recent years, and he’s earned the right to be a tad truculent about his influence. Selling more than 350 million books and making multiple generations of readers into fanatics is undoubtedly gratifying and something a fraction of writers will ever experience. And he can boast penning at least three novels that anticipated colossal cultural trends: he made vampires cool again (a few decades ahead of schedule), he conjured up a delusional sociopath jump-starting a nuclear apocalypse before Reagan took office, and envisioned a devastating pandemic before AIDS became front-page news (‘Salem’s Lot, The Dead Zone, and The Stand, respectively). This trifecta alone earns him street cred that should extend beyond literary circles. Yet clearly, the critical backlash accumulated over the years sticks in King’s craw. As an éminence grise who, it might also be pointed out, paid his dues for many years before his “overnight” success, he is aware he’ll always be a tough sell for the lit-crit crowd. King correctly connects the dots between Nathaniel Hawthorne and Jim Thompson; he rightly invokes Twain and delivers some welcome insights on the ways we are conditioned to receive and respond to different mediums. And his commentary begs necessary — or at least worthwhile — questions regarding labels and poles, high-brow and third-rate, and whether the twain shall meet (they always do, of course, as Mark Twain himself proves). A whole lot more, here. 6. The Collected Works of Edgar Allan Poe (especially the short stories, particularly these ten). Arguably, no American figure has influenced as many brilliant — and imitated — writers as Poe. The entire genres of horror, science fiction and detective story might be quite different, and not for the better, without Poe’s example. More, his insights into psychology, both as narrative device and metaphysical exercise, are considerable; he was describing behavior and phenomena that would become the stuff of textbooks several decades after his death. He also happened to be a first rate critic, and his insights are as astute and insightful as anything being offered in the mid-19th Century (his essay “The Poetic Principle” comes as close to a “how to” manual for aspiring writers as Orwell’s justly celebrated “Politics and the English Language”). Oh, and he was a pretty good poet, too. When assessing Poe, 150-plus years after he died, it’s imperative to interrogate and untangle that fact that not all clichés are created equally. Or, put another way, we must remember that before certain things became clichés, they were unarticulated concerns and compulsions. 
The reason Poe remains so convincing and unsettling is because he doesn’t rely on goblins or scenarios that oblige the suspension of belief; he is himself the madman, the stalker, the outcast, the detective and, above all, the artist who made his life’s work a deeper than healthy dive into the messy engine of human foibles, obsessions and misdeeds. He stands alone, still, at the top of a darkened lighthouse, unable to promise a happy ending and half-insane from what he’s seen. So very much more, here. 7. Hellraisers: The Life and Inebriated Times of Richard Burton, Richard Harris, Peter O’Toole and Oliver Reed The fact of the matter was, these four rapscallions were (cliché alert!) men of the people, and by word — — and more significantly, by deed — — they were both entirely at ease and happiest when they were surrounded by the so-called common folk. Even though each of them was extraordinary in his own way(s), all of them came from difficult or at least potentially unpromising origins: they knew how little separated them from the coalminers they grew up with, and how fortunate they were for getting paid to pretend as opposed to breaking their backs in mine, or a factory. Furthermoe, (cliché alert!) talk about keeping it real: These chaps threw back pints and threw around their fists because they wanted to and, to a certain extent, they had to. Here’s an instructive anecdote: On a visit to Rome, Harris persuaded one of the film executives to join him in order to witness first hand that it wasn’t always the actor who started all the brawling. On their first night they went to a bar and listened as a drunken American tourist spelled out, in a loud voice, how he was going to ‘do in’ Harris. The executive advised his client to take no notice. “Do you want me to wait until I get a bottle across the face,” reasoned Harris, “or go in and get it over with?” The executive could see only logic in this statement and Harris took the insulting Yank outside and flattened him. Here’s the thing. That’s not old school; that is a one room and no electricity school. While I’m not endorsing or advocating a top tier artist (or any average citizen) employing violence to settle their disputes, there is something almost refreshing (not quite quaint, but close) in this mano a mano arithmetic. Consider that, and compare it to our contemporary film, rock, and especially rap superstars with their posses, guns and melodramatic beefs. Almost too much more, here. 8. The Rub of Time Reading Martin Amis’s non-fiction is like riding in a plane. As you cruise over miniaturized skyscrapers, crop circles, mountains, even oceans, you recognize — and remember — how tiny and insignificant your own piece of turf (wherever you came from; wherever you’re going) actually is. Amis’s works seems all-encompassing, and it’s enough, at times, to fill one with diffidence and awe. But mostly delight. His non-fiction…brooks no dispute: he easily ranks amongst our most proficient critics, seemingly without peer in terms of his range and scope. Once Amis renders judgment — on a book, an occasion, a politician, whatever — it stays judged, definitive upon arrival. Whatever else one can say about Amis, there’s never doubt that he cares deeply about language. More, he understands, and uses words with the same type of facility that, say, Richard Pryor used voice(s) and Miles Davis used silence. 
His genius with words is our joy, and Amis is one of those rare writers who can take a topic already beaten to death (Vegas, Trump, pornography) and render it not merely fresh, but imperative. Even if the reader isn’t aware or interested in the subject matter, Amis makes it interesting and enjoyable. As such, The Rub of Time is recommended to anyone for whom old-fashioned deliberation and erudition matter. Arguably the highest praise we can bestow upon any novelist is that they are a writer’s writer (fame and fashion are fleeting), and for a critic that they are a reader’s writer (tastemakers are seldom the ultimate arbiters of posterity). Martin Amis, at his best, is both, and in our increasingly post-history and two-sentence assessment era, this skill set is exceedingly rare, and indispensable. More on Martin, here. 9. Seven Plays From my own experience and what I’ve seen, read and heard, even our best literary practitioners have had a difficult time doing this with success. Most writers are on record, with equal parts regret and impunity, confessing that in order to fully dedicate themselves, it was inexorably at the expense of friends, family, life itself. Conversely, the inimitable Oscar Wilde lamented “I put all my genius into my life; I put only my talent into my works.” The moral? Artists, too, are only human. Even the best of the best can only do so much, and something has to give. This is one of the many reasons Sam Shepard has long been both idol and inspiration, as a writer and person. Off the top of my head, I’m not certain I can pinpoint anyone from the 20th Century who more fully realized his potential, as individual and artist. Like Wilde, he was blessed with talent and charm (not to mince words, he was a beautiful man), and he somehow managed to incorporate virtually every cliché of Americana, distilling it into his own, unique persona. Semi-tortured artist, channeling our pathologies via works that were, on arrival, sui generis? Yes. Prototypical rugged individual, who mostly shunned the hackneyed trappings of fame, preserving both his integrity and his soul? Yes. Man’s man comfortable in the outdoors, and adept at working with either animals or his bare hands? (Quick: think of how many playwrights you’d actually be able to hunt with, get shitfaced with, talk books and music with, and with whom you’d hope to have by your side if your car broke down in the middle of nowhere. Unlike most contemporary men of the pen, Shepard could change his own oil, literally and figuratively.) The dude who got to spend quality time with Jessica Lange? Yeah, he did that too. Oh, he was a pretty good actor, as well. A leading man who, like Neil Young, preferred heading into ditches of his own design. As I said, clichés abound, but Shepard somehow wore them like rented tuxedos, suitable for the occasion. Actually, that’s not accurate; Shepard never rented or borrowed anything. That was the point. More on the inimitable Sam Shepard, here. 10. The Complete Stories Flannery O’Connor’s unwavering allegiance to her craft leaves little to the imagination: she wrote, she talked about writing, she thought about writing and she wrote about writing. Allegedly, she ate and slept on occasion. “In my stories is where I live,” she said, a statement applicable on a variety of levels. And so, the people who stand to be fascinated by this distinctly uneventful life are the very people who might be enlightened by reading about it: writers. 
O’Connor’s monk-like commitment to her vocation could and should be a study guide for all aspiring scribblers. Never mind that dedication like hers is probably impossible to imitate today because of all the noise — electronic and digital — distracting us. There’s also the fact that her work is inimitable: the style; the substance; the entire package is pretty much unparalleled in American letters. I tend to feel uncomfortable throwing the G word around, but if any American writer of the last century could be called a genius, O’Connor is near the top of the short list. She didn’t manage to write the great American novel (though she may well have, had she been given even a few more years), but her best collected stories go toe-to-toe with any of the great white males (and females for that matter). She also happened to approach perfection on at least three occasions, with “Revelation”, “Everything That Rises Must Converge” and “A Good Man Is Hard To Find”. It’s the last of these three that most people know; like Beethoven’s Fifth and the ceiling of the Sistine Chapel, its ubiquity tends to diminish its actual import. As a remarkable point of fact, it’s even better than most people realize (and most people, if for no other reason than that they are told, recognize these things as immortal). What O’Connor manages to do, in less than twenty pages, with “A Good Man Is Hard To Find” is lay bare the essence of what Dostoyevsky and, to a lesser extent, Tolstoy grappled with in their biggest (and sometimes bloated) novels: the nature of man, the existence of God, the possibility of Grace and the symbiotic tension between violence and love. When The Misfit declares (ironically, truthfully) “It’s no real pleasure in life”, he is (O’Connor is) succinctly expressing our fundamental philosophical and literary dilemma, post-Descartes. Beyond whether God exists (Tolstoy) or why God torments us (Dostoyevsky), and right to the darkened heart of the matter: we may betray God, but God betrayed us first. A lot more (including observations on the remarkably similar sensibilities of O’Connor and John Coltrane) here.
https://bullmurph.medium.com/your-curated-covid-19-reading-list-7649ad8042cb
['Sean Murphy']
2020-03-17 19:33:51.576000+00:00
['Books', 'Reading', 'Readinglist', 'Book Review', 'Books And Authors']
Test With Python and Lemoncheesecake
We all know that testing is a crucial part of software development. But let’s be honest: most of us procrastinate on it until the code base is so big that it’s hard to know where to start, so we just don’t do it. I’m currently learning Go, making small and easy projects, and I had an idea: “this is the perfect moment to learn software testing”. For someone who has never written a test, the problem begins with the question “What do I have to test?”. There are so many tools to pick from and so many kinds of tests with different purposes that it’s hard to know which one to choose. One day I got tired of reading theoretical testing posts and decided to get my hands dirty: “better to start and make things the wrong way while I learn than to overthink and do nothing”. With this idea in mind, I decided on my starting point: functional testing for my Go REST API. What is functional testing? There are a lot of definitions on the internet, but for now let’s assume that functional testing is a type of black-box test. This means that given an input X, the output Z will always be the same. Functional testing is a black-box kind of test. OK. Now I know what kind of test I’ll write, but how do I make them? There are a lot of functional testing tools to pick from, but since I’ve been working as a Python developer for the last two years, I wanted to choose a Python framework. The most famous frameworks are for unit testing: unittest, nose and pytest. Of course those are great testing frameworks, but they didn’t meet my requirements. I was completely lost until I somehow ended up at a presentation about REST API functional testing with Python from PyCon 2018. That’s how I discovered lemoncheesecake. Photo by Gimena Leguizamon. Lemoncheesecake, as defined on its website, is a functional test framework for Python that brings trust around test results. Among its features I’d highlight the advanced test hierarchies using suites, tests and nested suites, and the beautiful HTML reports. The purpose of this story is to give you a small intro to testing a REST API with lcc. To do so we’ll make requests against a backend and check that the results are correct. Our lucky backend will be fruityvice, but for now let’s get our environment ready. Set up the project. Installation: create a virtual environment and install lcc through pip: virtualenv .venv source .venv/bin/activate pip install lemoncheesecake Create the project’s structure: one of the most powerful lemoncheesecake features is its command line interface, called lcc. We’ll come back to the CLI later, but for now type: lcc bootstrap fruityvice_tests This command will create a directory called fruityvice_tests with two directories and a Python file. fruityvice_tests ├── fixtures ├── project.py └── suites project.py is the project file and allows the customization of several behaviors of lcc. The fixtures directory is where our fixtures live; I’ll explain what a fixture is later. The suites directory is where our test suites live. Testing. Once we have set up our project it’s time to start testing. As I mentioned before, we’ll use fruityvice as our backend to test. You can check all its endpoints here. First test: let’s begin with an easy one. Fruityvice exposes an endpoint that lets you retrieve all the fruits at once, so let’s call it and check that the response code is correct.
First we have to create our suite in the suites directory; to do so, create a file with the desired name. In my case it’ll be suites/get_all.py. We create a suite with the decorator @lcc.suite(); a suite is a way to group tests in a class. The tests are all the class methods decorated with @lcc.test(). This suite is pretty simple: we have just one test that checks that the response code of a GET request is 200 (a minimal sketch of such a suite is shown below). Note that the function check_that is part of the lemoncheesecake.matching package, which was imported with *; the lcc documentation recommends importing the matching package with a wildcard for readability. Before running the test, let’s check our project structure. The lcc CLI has a command for that: in the base directory of the project run lcc show and you’ll see the hierarchical structure of your suites and all the tests in them. $ lcc show * get_all * get_all.GetFruits - get_all.GetFruits.get_all_fruits Now let’s run the test: just type lcc run and it’ll print the results of all tests. $ lcc run ======================== get_all.GetFruits ======================== OK 1 # get_all.GetFruits.get_all_fruits Statistics : * Duration: 0.642s * Tests: 1 * Successes: 1 (100%) * Failures: 0 # You can check that our test was a success, but let’s be honest, this console output alone is not very useful in a production environment. We need reports that we can check over time. Don’t worry, lcc can create reports in various formats, including HTML, JSON, JUnit and more. By default, every time you run lcc run a beautiful HTML report is created. You can find them in the newly created directories report/ and reports/. The latest report will always be in the report/ directory, while older ones are moved automatically to reports/. You can open the report/report.html file with your browser to see the report lcc created for you. You can click on every test to see all the details. Test with logs: you can add more information to the HTML report with lcc logging. Let’s add a log of the request to the previous test. As you can see, the logged message is added to the HTML report. All the common log levels are available, but take into consideration that if you write a log with error level, the whole test will be considered failed. Auto-generated tests: Fruityvice provides an endpoint to retrieve the subset of fruits belonging to a given family. Let’s write a simple suite to retrieve two existing families, Rutaceae and Bromeliaceae. $ lcc run ======================== get_all.GetFruits ======================== OK 1 # get_all.GetFruits.get_all_fruits ==================== get_by_families.GetFruits ===================== OK 1 # get_by_families.GetFruits.get_rutaceae OK 2 # get_by_families.GetFruits.get_bromeliaceae Statistics : * Duration: 2s * Tests: 3 * Successes: 3 (100%) * Failures: 0 # You can see the two new tests are exactly the same except for the line where the family is specified. Luckily there’s a way to avoid duplicated test cases: you can implement the test function as a callable object implementing the magic method __call__. Let’s write a callable test to get all the families. The Test class takes three arguments: the first is the test name, the second its description and the third the callable object.
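The code snippets embedded in the original post are not reproduced in this excerpt, so here is a minimal sketch of what a suite along the lines described above could look like. The use of the requests library and the /api/fruit/all endpoint path are my assumptions, not taken from the article.

# suites/get_all.py -- a sketched suite for the "get all fruits" check.
# Assumes `requests` is installed and that fruityvice serves every fruit
# at http://www.fruityvice.com/api/fruit/all (an assumption, not from the post).
import requests

import lemoncheesecake.api as lcc
from lemoncheesecake.matching import *


@lcc.suite("Get all fruits")
class GetFruits:

    @lcc.test("Getting all fruits returns a 200 response")
    def get_all_fruits(self):
        url = "http://www.fruityvice.com/api/fruit/all"
        lcc.log_info("GET %s" % url)  # this log line shows up in the HTML report
        resp = requests.get(url)
        check_that("response code", resp.status_code, equal_to(200))

Running lcc run from the project root would then pick this suite up automatically, producing output like the runs shown above.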
class lemoncheesecake.api.Test(name, description, callback) $ lcc run ======================== get_by_families.get ======================== OK 1 # get_by_families.get.Musaceae OK 2 # get_by_families.get.Anacardiaceae OK 3 # get_by_families.get.Rutaceae OK 4 # get_by_families.get.Cucurbitaceae OK 5 # get_by_families.get.Solanaceae OK 6 # get_by_families.get.Bromeliaceae OK 7 # get_by_families.get.Rosaceae As you can see, the name of the family is also used as the description of the test. Fixtures: fixtures are a powerful and modular way to inject dependencies into your tests. In the tests above the URLs are hardcoded. What happens if one day, instead of http://www.fruityvice.com/api/fruit/, the URL changes to http://www.fruityvice.com/api/v2/fruit/? You’ll have to change every single test. This is one of the purposes of fixtures. With a fixture, the URL can be injected into all the tests through the url fixture (a sketch is shown below). Customize the report: Lemoncheesecake reports are one of its strengths. You can customize them through the Project class in project.py. Let’s change the report’s title and specify the app name and version. As you can see, the report title completely changes and the version and app are now specified.
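As a rough illustration of the fixture mechanism described above, here is a minimal sketch. The fixture name url, its scope and the base address are my assumptions; the article does not show its exact fixture code. Lemoncheesecake injects fixtures into tests by matching parameter names.

# fixtures/fixtures.py -- a sketched fixture exposing the base URL.
import lemoncheesecake.api as lcc


@lcc.fixture(scope="session")
def url():
    # Change this single value if the API ever moves, e.g. to /api/v2/fruit/.
    return "http://www.fruityvice.com/api/fruit/"

# suites/get_all.py -- the same suite as before, but the URL is now injected;
# lemoncheesecake matches the parameter name `url` against the fixture above.
import requests

import lemoncheesecake.api as lcc
from lemoncheesecake.matching import *


@lcc.suite("Get all fruits")
class GetFruits:

    @lcc.test("Getting all fruits returns a 200 response")
    def get_all_fruits(self, url):
        resp = requests.get(url + "all")
        check_that("response code", resp.status_code, equal_to(200))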
https://medium.com/swlh/test-with-python-and-lemoncheesecake-f930b057bfb7
['Julián Toledano']
2020-07-27 19:42:04.418000+00:00
['Python', 'Rest Api', 'Testing']
Here’s The Single Greatest Life Advice (If You Want To Live A Life of Peace)
You lie awake at night, thinking about all the things you must do the next day. Work. Kids. Dinner. Bills. Chores around the house. Book the summer vacation. Plan the birthday party. Finally, find the time to get together with a great friend. At that moment, it feels like the list is a mile long. The list keeps growing, and the time is shrinking. Worry and stress start to take over, and the next thing you know, you’ve been awake in bed for over an hour. On top of those concerns, you are also replaying that conversation with a client, hoping that they understood what you meant and aren’t upset. Or, you’re worried about your friend and her husband, who just received terrible news about his health. Meanwhile, you still can’t figure out why another friend just stopped calling or answering your texts? Why doesn’t your favorite TV show come on anymore? Did that client ever let you know when they wanted that project to start? When are my tires scheduled to be rotated? The questions, worries, and concerns never give you a break, and they can spin out of control pretty quickly if you aren’t mindful of this.
https://medium.com/live-your-life-on-purpose/heres-the-single-greatest-life-advice-if-you-want-to-live-a-life-of-peace-bdf9f45872a3
['Chase Arbeiter']
2020-12-28 16:02:34.596000+00:00
['Self-awareness', 'Self', 'Life Lessons', 'Anxiety', 'Life']
Artificial Intelligence Must-Know
Artificial Intelligence is the new buzzword that no one can go without. The reasons are numerous: AI has given us self-driving cars, robots with close to human-level abilities on some tasks, and much more. Experts predict that AI will significantly improve the lives of humans in years to come. Even now, we are enjoying some of the benefits of this awesome technology. Artificial Intelligence is the act of giving machines the ability to perform human-level tasks without explicit programming. Algorithm: an algorithm is a set of mathematical rules or instructions that finds patterns in data. Augmented Intelligence: this refers to a situation where people and machines work together to create knowledge from data in a way that enhances human expertise. Cognitive Computing: this refers to systems that learn to perform human-level tasks by interacting with and experiencing their environment. Strong AI: these are systems that are autonomous enough to think and act on their own. Examples of efforts in this direction include DeepMind, the Human Brain Project, and OpenAI. Weak AI: these are systems that cannot think and act on their own. An example is a chatbot. Levels of AI: Artificial Narrow Intelligence (ANI): this belongs to the Weak AI category. It refers to applying AI to a single, specific domain. Chatbots fall into this category. Artificial General Intelligence (AGI): this falls in the Strong AI category. Machines at this level can perform almost all tasks on their own. We are currently heading towards this level. Artificial Super Intelligence (ASI): this also falls within the Strong AI category. Systems built at this level have full autonomy and can perform tasks that surpass human intelligence. AI Systems: 1. Deterministic Systems: these systems produce a known output for a given input. This means that all outcomes are known and certain. 2. Probabilistic Systems: these systems depend on the confidence values of returned responses. Their outcomes are not known with certainty. AI Influencers: 1. Big Data: this is a crucial influencer of AI. Big Data has 5 attributes: Variety: Big Data comes in different categories; there are text, audio, video, etc. Volume: Big Data comes in large volumes, because people are constantly generating content on platforms like Facebook, Twitter, etc. Velocity: the rate at which data is generated is rapid; for instance, it has been estimated that 2.5 quintillion bytes of data are created each day. Veracity: this refers to the uncertain and imprecise nature of Big Data. Visibility: this is about how the information needs to make sense to people; if people can’t “see” information, they cannot benefit from it. 2. Advances in Computing Power: advancements in computing devices and storage. For example, GPUs and TPUs have significantly reduced the time needed to run algorithms that used to take weeks or months. As we move towards the use of quantum computers, the time taken to train models is expected to drop further. 3.
Open-Source Frameworks: the advent of numerous wonderful open-source machine learning frameworks and libraries like Scikit-Learn, TensorFlow, PyTorch and many others has also contributed significantly to the growth of AI (a small example is sketched below). AI Areas: Machine Learning, Natural Language Processing, Computer Vision, Business Analytics, Big Data, Robotics, Generative Models. References: https://developer.ibm.com/africa/skills/artificial-intelligence-v2/ https://www.amazon.com/Practical-Machine-Learning-Python-Problem-
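None of the following code appears in the original article; it is only a minimal sketch of the point above, showing how Scikit-Learn (one of the libraries listed) lets a machine learn patterns from data rather than being explicitly programmed. The dataset and model choice are arbitrary.

# A tiny illustration of "learning from data without explicit programming"
# using Scikit-Learn, one of the open-source frameworks mentioned above.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # a small built-in example dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # rules are learned, not hand-coded
print("held-out accuracy:", model.score(X_test, y_test))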
https://medium.com/analytics-vidhya/artificial-intelligence-must-know-29399b4c0e9f
['Boadzie Daniel']
2020-10-15 12:44:36.757000+00:00
['Artificial Intelligence']
Fighting Islamic Radicalism through comics in Pakistan
“Pakistan was born from a lot of pain, Afeef.” Fawzia Naqvi, a Pakistani-American working on impact investments at one of the largest foundations in the U.S., graciously begins our few-hour discussion on the history and context of modern Pakistan. I went to her first to get a foundation. She is a proper expert, and I needed to understand the country’s plethora of complexities before I attempted to opine on anything that was going on there while I visited: how its neighbors Afghanistan and India affected its livelihood, or how China’s $60 billion economic upgrade in the country made it more geographically relevant than ever before. Maybe most relevantly, how the diversity, the poverty and the current uptick in Islamism in the region operated in Pakistan. Naqvi was absolutely fascinating to learn from — she ended our long conversation by telling me that if she had a graph, she could literally chart the progress of the youth and future of Pakistan. She had hope for her country. I left for Pakistan weeks later with one goal in mind: to interview an ex-jihadi whom I had gotten in touch with through a friend I had made years earlier in my Masters program at the University of St. Andrews. I wanted to understand how the old-school recruitment methods of Islamic extremists compared to the new-school methods we are witnessing every day. I wanted to know what being recruited was like. I wanted to start with this story and its happy ending despite the context of Pakistan still seeming so bleak. The truth is, my journey has to be checked through the acknowledgement of an undeniable privilege. Flying into Lahore in December meant I would be paraded into wedding parties and regular parties and underground parties. I would have the opportunity to see museums and art exhibits and mosques like the Badshahi, a 17th-century mosque whose beauty first captivated me when I attended my best friend’s wedding in 2013. Lahore gave me the chance to get acquainted with a rather green and beautiful city, one where walking around as a Westerner only got me friendly confusion and smiles. There, I met and interviewed Gauher Aftab — a 30-something-year-old visionary who was fighting Pakistan’s major problems through comic books. The kicker is that Gauher was a jihadi at the age of twelve. “Back then it was so glamorous,” he explained. Gauher detailed his Islamic instructor’s charisma. “To fight the Russians in Afghanistan they would take drugs and be up for days,” Aftab recalled. He described himself as a nerdy kid who had trouble finding himself. “All I did was read — I wasn’t good at sports — my teacher preached every day about being a ‘real’ Muslim and about Jihad.” Gauher explained that his teacher would encourage him to pay rupees so he could help them secure the bullets they needed. “He explained that a rupee would help kill those that were oppressing Muslims worldwide and that I would receive blessings from Allah because of my monetary sacrifice.” Gauher was a pre-teen at the time, 12 going on 13 years old, so he remembers the entire thing looking like a video game that was shaping his identity. He would listen to his teacher, who wore a flowing white shalwar kameez and had a red-dyed beard. He was slowly becoming brainwashed. Gauher, like many disenfranchised Muslim youth, like marginalized youth in general worldwide in fact, was an easy target for a gang, or a cult, or any group that thrives on getting naive people to do its bidding.
Gauher was meant to give his teacher 700 rupees or roughly 7 dollars in order to catch a bus to Kashmir, write a farewell letter to his ‘sinful’ yet Muslim parents and journey into the darkness that was Islamist extremism. After months of slow and effective manipulation, Gauher was ready to be a soldier of Islam and to fight against the injustices the world was causing his devoted peers. Whether it was because of Kosovo or Palestine, the pre-teen version of Gauher was taken by the idea of wielding a weapon and having a purpose in his life. “I just didn’t trust my parents or my schoolmates to tell me what being a good Muslim looked like,” Gauher explained to me informally. As the months drew on, he became increasingly distant from those around him only to get closer to the teachings of an Islam he admits today he can’t find in the Quran. “You see,” Gauher reveals, “they simply recount the Prophet Mohammad’s war efforts and manipulate the Hadith to ingrain a sort of justification for heinous acts.” For the first time, I began to understand the process of radicalization. It wasn’t this idea that you were born into, sometimes it was a club you were told would help you survive whatever thing you were going through. Gauher felt alone and in some ways he felt existentially (a rather normal thing to feel as a pubescent teen). The people that loved him didn’t understand that his piety was turning violent. No one in his family felt similarly about Islam and revenge, but the cause was crystallizing for Gauher. It was imperative that he make meaning of his life. As he paid his teacher with whatever money he could scrounge, he prepared for a journey he didn’t intend to come back from. An adult man, I sit across from him as he smokes countless cigarettes, seemingly carrying the weight of Pakistan’s vicious cycles of violence on his shoulders. “I was lucky, Afeef. My grandmother got sick and I basically just missed that bus. My parents had also slowly caught wind that something was up.” Gauher was literally put on lock down for months, and used the time to read and re-read the particulars of the faith only to come up short for explanations of his teacher’s beliefs. He had intended to finally get on that bus which would lead to a camp where he would be trained to fight with other boys for the great cause of Islam. His violent jihad ended, however, in order to sprout a more beautiful version of itself. Gauher is now proudly pushing comic books all over Pakistan with political messaging that directly opposes the Taliban, Al-Qaeda and, of course, ISIS. He explains that even the poorest families share cellphones with Internet access nowadays. “Teleco has over 70 percent penetration in Pakistan, there are 120 million subscribers in Pakistan and growing” he expertly instructs. Recruitment and Islamism are accessible on all streams of media. In fact, on a ride home one night I caught snippets of a preacher on the radio calling for an uprising. Gauher explains that the only way to push back is to give an alternative space for messaging. The comics are varied, some cover domestic corruption while others hit Islamic extremism head on. “The point is to give the Muslim or Pakistani child other kids to identify with that do good.” One of the collections called Paasban, or “The Guardian” translated from Urdu, is an assertive attempt to fight the spread of terrorism in young minds by basically taking the reader through the same steps that Gauher experienced and researched himself. 
The protagonists go to the camps, have to make hard choices and have to reject violence. The process is emotional and real and it illuminates the darkness that children are manipulated into. The vulnerability of families to sell their children off to Islamist groups who promise a beautiful life, or for the kids themselves to be charmed into a lifestyle that eventually ruins theirs — radicalism has become a major opiate for the angry disenfranchised Muslim the world over. Gauher was recruited and manipulated face-to-face, but he realizes that the problem is now online. He has distributed his stories in schools around the country, for free on the Internet and is attempting to take back what jihad really means: the internal struggle to be better. “My jihad is ongoing, and these comics are a way to do good in this world,” Gauher says genuinely. We left the conversation praying for blessings and peace upon each other — as Muslims often do — and let the Quran play above as we shared more cigarettes. He explained that my version of Pakistan would shift slightly as I moved on to Karachi. The danger relative to Lahore would be almost incomparable. In Karachi, guards who held AK-47s constantly flanked me. They were ex-Taliban and were recruited to guard the homes of the financial elite in the city. My friend never let me off of her grounds without at least one guard and a family member in dark tinted cars. Although I looked Pashtun, we were traveling with Europeans who had a harder time “passing.” In Karachi, we were given illegal alcohol to drink while we danced with each other in beautiful venues for beautiful weddings. In between long family board games, we would take the guards with us wherever we were allowed to go. We even got to ride camels on the Arabian Sea; the guards on a camel with their weaponry locked and loaded. Karachi felt dark, but the good Taliban/bad Taliban distinction seemed to die in the wake following the Peshawar school shootings. The country seemed to be unifying against violent Islamism as a legitimate philosophy to battle the ills that had befallen the country. After days of lazily spoiling myself with the elite of Karachi, challenging myself to visit Korangi where a traffic jam could turn into a gun shoot out, was a move I would not regret. Amidst some of the poorest slums in the whole world, I walked through and learned about AmanTech. The institution is modeled after Georgia Tech, a university I had grown up miles from in Atlanta. When I got there, I realized Fawzia had been right. Scores of young men were working hard with goggles on their face and machinery under their control. They were learning carpentry or plumbing or some other vocation that would ignite them and their families into a middle class; a middle class that would destroy the point of violence: survival. They were formerly lost boys who had dropped out of high school, roaming around the slums totally vulnerable to recruitment. Poor and disenfranchised, these teenagers were prime targets for ISIS or Al-Qaeda or the Taliban. They still are and so are their friends. The promise of money for loyalty or food for commitment was knocking on their fate and it traditionally seems to knock more temptingly than education knocks. Somehow, though, AmanTech was breaking through. There weren’t as many females as one would hope, but I had been vigorously encouraged that women were entering the middle class through various different outlets as well. The experience was uplifting to say the least. 
The intelligentsia of Pakistan had created a model to potentially turn their most at-risk youth into a prosperous middle class. I left Pakistan with an attachment I can’t seem to put into words, but that attachment came from a very clear place. The familial closeness I experienced in the home, at the table and at a party made it obvious that the country’s resilience and faith in itself starts in the Muslim home that isn’t necessarily extremely religious but certainly nods it’s head to conservative practices and community over individualism. With the Taliban and ISIS continuing to threaten the safety and prosperity of Pakistan, the country certainly seems to be surviving and fighting for itself, and with itself, to thrive.
https://medium.com/ramel-media/fighting-islamic-radicalism-through-comics-in-pakistan-e82aba64992f
['Afeef Nessouli']
2016-09-07 14:32:19.469000+00:00
['Islam', 'Pakistan', 'Comics', 'ISIS']
Love Smells: Body Odor and Sexual Attraction
Years ago, I worked with a woman that I will call Cassandra. Not only is that a beautiful name, but it is also the name of the mythical cursed prophetess. She knew exactly what was going to happen, but no one believed her. “Hey, Troy is going to be burned, and everyone is going to die,” she basically said. “Haha! Great joke, Cassandra!” everyone responded and went on their merry way (to be murdered just a couple of days later). Like the mythical Cassandra, my work Cassandra was wise, but unheeded. Over lunch one day, she described one of the many reasons she left her ex-husband: “He smelled. All the time. It made me want to gag.” I remember screwing up my face at this pronouncement. I really had no idea what she meant. How could someone always smell bad? I thought of my current husband then. He had a smell I did not love, but it was just normal, right? I mean, who wants to hug or bang someone who has just come back from the gym? Is that not always a mandatory shower-before-intimacy sort of thing? But then I left that husband and met my soon-to-be second husband. My future husband looks like a Greek god when he is shirtless, and he looks that way because he spends a lot of time in the gym. He has never once smelled bad to me. I have had zero problems with peeling sweaty clothes off his body and smashing my freshly showered one against his. My nose has never crinkled at the smell of him post-gym or cutting the lawn in 90 degree humidity. This goes both ways as well. My partner has told me that I do not have a smell. Even when I have come home after a harrowing experience covered with a damp fear sweat, he has said, “I don’t smell anything.” What is to this difference? Why do I not mind my soon-to-be second husband’s odor while I did my first? Lindsey Bordone, assistant professor of dermatology at Columbia University Medical Center, has said, “ You might just be more forgiving of [someone’s odor] because you’re attracted to the other person and the overall, underlying scent that is uniquely theirs.” Animal pheromones have been documented for seals, pigs, and rats, but not for humans. The French physician Paul Broca asserted that monkeys, apes, and humans represent the evolution of sniffing beasts to sight-oriented ones. Animals smell pheromones through the vomeronasal organ (VNO) located in each of their nostrils. In the mid-1980s, a similar organ was found within humans’ nostrils using microscope probes, though it seems to present a very small role in our mating practices. Not to Prince though, who sings in his song “Pheromone,” “…Controlling my every emotion./…Pheromone. When your body’s wet…” To be fair as well, a person’s scent today is more a combination of each of the products they use on a daily basis: their body wash, shampoo, deodorant, cologne or perfume, hair product, fabric-softener, and any other scented product they might use. While there is uniqueness to a person’s scent, there are many other things that influence the final ‘product’.” Bordone also says, “A recurring hypothesis regarding body odor and sexual attraction is that a person’s immune system influences what he or she perceives as attractive, and also influences what their own unadulterated scent would be minus all of the personal-care products.” A person’s immune system influencing their scent has been studied many times. “Women seem to favor the smells of men who have immune genes that differ from their own,” says one study. 
The conclusion is that women might be sniffing out “men’s major histocompatibility complex (MHC), a group of genes that effect the immune system” in order to assess whether their offspring would be able to handle a myriad of threats. Whether this is true or not is inconclusive, but it is fascinating nonetheless. There is also plenty of research looking into whether men can smell the fertility of women and how that may create a response in hormone levels. A study from Frontiers in Endocrinology had 115 men smell the body odor and genital odor of forty-five women. They found that the men’s testosterone and cortisol levels increased in response to both odors if they came from fertile women. There is still much to suss out about scent and sexual attraction in humans, which is complicated by how much work we spend on covering up our natural body odors. Love blinds us to many things, and it may to someone’s unsightly body odors as well. When we fall out of love with a partner, we likely become more aware of their faults and that would manifest in every way. As for me and my soon-to-be second husband, I am going to continue to enjoy the scents of our love, body odor and all.
https://tarablairball.medium.com/love-smells-body-odor-and-sexual-attraction-6470f5e5fc32
['Tara Blair Ball']
2019-07-17 15:57:00.517000+00:00
['Relationships', 'Love', 'Sex', 'Psychology', 'Sexuality']
I Deleted All Social Media From My Life and This Is What Happened
An assessment after 1 month: It’s one of the best decisions I’ve ever made. It wasn’t that hard. Maybe because the decision had had time to mature for many months before I made it. And because I knew why I was doing it. The hardest part was not being able to share my photographic work on Instagram anymore. Because I am a photographer too. But I decided that my mental health was more important than that. Removing all forms of social media from my life has brought many benefits. Here is a summary. I am much more focused. Scrolling teaches us to be deconcentrated. To unplug our mind and put it under an infusion of uninteresting scrolling images before our empty gazes. I have regained some of my ability to focus. Just like the concentration I had as a child. When I grew up, I missed those long afternoons spent reading. It was as if I couldn’t focus anymore. Now I understand why. I am also less distracted, thanks to the drastic reduction in notifications. I now can focus on what matters. On myself. What I want to do with my life. Without any unsolicited outside influence. I am in my own little bubble. “Remove all these useless tools. It’s a wonderful world.” — The Social Dilemma I have more time. I used to get lost in endless scrolling. I grew lazy. Do you feel mentally tired? A good scrolling session doesn’t require to turn on any brain cell. Which means that I was losing one to several hours of my day, every day. Now that I no longer have this easy and constant distraction, I read a lot more. Up to 1 or 2 hours more each day. It feels good to focus your attention on something really meaningful, something that adds value to your life. My screen time has decreased a lot. The only screen time I have now is when I’m working on my laptop. And that’s already enough. I have a lot more time in general. Less time for a ‘fake community’, more time for my real community: my loved ones. I have realized that, contrary to what we are led to believe, life is better without. They have created a false need. But when we no longer have it, we lack nothing. On the contrary, it’s as if I had freed myself. Social media have only brought one positive thing in my life. When I was younger and discovered myself as a homosexual. It made it easier for me to see that there was a community, people like me. I could learn from what I saw of them, and that helped me to build my identity. But I’m sure I would have done just as well without it. We don’t need it. Why would you want to see what other people’s lives are like? You only see what they allow you to see. It makes us envious. Curious in an unhealthy way. Narcissistic. It puts us in competition with each other. It erases our individuality. Why make a spectacle of us? I believe that this constant need to show ourselves reveals a void in our lives.
https://medium.com/the-ascent/i-deleted-all-social-media-from-my-life-and-this-is-what-happened-8aabbd8d15db
['Auriane Alix']
2020-10-22 18:33:36.180000+00:00
['Self', 'Mental Health', 'Social Media', 'Life', 'Self Improvement']
Selecting Subsets of Data in Pandas: Part 1
Part 1: Selection with [ ] , .loc and .iloc This is the beginning of a four-part series on how to select subsets of data from a pandas DataFrame or Series. Pandas offers a wide variety of options for subset selection which necessitates multiple articles. This series is broken down into the following four topics. Black Friday Special 2020 — Get 50% Off - Limited Time Offer! If you want to be trusted to make decisions using pandas, you must become an expert. I have completely mastered pandas and have developed courses and exercises that will massively improve your knowledge and efficiency to do data analysis. Get 50% off all my courses for a limited time! Assumptions before we begin These series of articles assume you have no knowledge of pandas, but that you understand the fundamentals of the Python programming language. It also assumes that you have installed pandas on your machine. The easiest way to get pandas along with Python and the rest of the main scientific computing libraries is to install the Miniconda distribution (follow the link for a comprehensive tutorial). If you have no knowledge of Python then I suggest completing an introductory book like Exercise Python cover to cover. The importance of making subset selections You might be wondering why there need to be so many articles on selecting subsets of data. This topic is extremely important to pandas and it’s unfortunate that it is fairly complicated because subset selection happens frequently during an actual analysis. Because you are frequently making subset selections, you need to master it in order to make your life with pandas easier. Always reference the documentation The material in this article is also covered in the official pandas documentation on Indexing and Selecting Data. I highly recommend that you read that part of the documentation along with this tutorial. In fact, the documentation is one of the primary means for mastering pandas. I wrote a step-by-step article, How to Learn Pandas, which gives suggestions on how to use the documentation as you master pandas. The anatomy of a DataFrame and a Series The pandas library has two primary containers of data, the DataFrame and the Series. You will spend nearly all your time working with both of the objects when you use pandas. The DataFrame is used more than the Series, so let’s take a look at an image of it first. Anatomy of a DataFrame This image comes with some added illustrations to highlight its components. At first glance, the DataFrame looks like any other two-dimensional table of data that you have seen. It has rows and it has columns. Technically, there are three main components of the DataFrame. The three components of a DataFrame A DataFrame is composed of three different components, the index, columns, and the data. The data is also known as the values. The index represents the sequence of values on the far left-hand side of the DataFrame. All the values in the index are in bold font. Each individual value of the index is called a label. Sometimes the index is referred to as the row labels. In the example above, the row labels are not very interesting and are just the integers beginning from 0 up to n-1, where n is the number of rows in the table. Pandas defaults DataFrames with this simple index. The columns are the sequence of values at the very top of the DataFrame. They are also in bold font. Each individual value of the columns is called a column, but can also be referred to as column name or column label. 
Everything else not in bold font is the data or values. You will sometimes hear DataFrames referred to as tabular data. This is just another name for a rectangular table data with rows and columns. Axis and axes It is also common terminology to refer to the rows or columns as an axis. Collectively, we call them axes. So, a row is an axis and a column is another axis. The word axis appears as a parameter in many DataFrame methods. Pandas allows you to choose the direction of how the method will work with this parameter. This has nothing to do with subset selection so you can just ignore it for now. Each row has a label and each column has a label The main takeaway from the DataFrame anatomy is that each row has a label and each column has a label. These labels are used to refer to specific rows or columns in the DataFrame. It’s the same as how humans use names to refer to specific people. What is subset selection? Before we start doing subset selection, it might be good to define what it is. Subset selection is simply selecting particular rows and columns of data from a DataFrame (or Series). This could mean selecting all the rows and some of the columns, some of the rows and all of the columns, or some of each of the rows and columns. Example selecting some columns and all rows Let’s see some images of subset selection. We will first look at a sample DataFrame with fake data. Sample DataFrame Let’s say we want to select just the columns color , age , and height but keep all the rows. Our final DataFrame would look like this: Example selecting some rows and all columns We can also make selections that select just some of the rows. Let’s select the rows with labels Aaron and Dean along with all of the columns: Our final DataFrame would like: Example selecting some rows and some columns Let’s combine the selections from above and select the columns color , age , and height for only the rows with labels Aaron and Dean . Our final DataFrame would look like this: Pandas dual references: by label and by integer location We already mentioned that each row and each column have a specific label that can be used to reference them. This is displayed in bold font in the DataFrame. But, what hasn’t been mentioned, is that each row and column may be referenced by an integer as well. I call this integer location. The integer location begins at 0 and ends at n-1 for each row and column. Take a look above at our sample DataFrame one more time. The rows with labels Aaron and Dean can also be referenced by their respective integer locations 2 and 4. Similarly, the columns color , age and height can be referenced by their integer locations 1, 3, and 4. The documentation refers to integer location as position. I don’t particularly like this terminology as its not as explicit as integer location. The key thing term here is INTEGER. What’s the difference between indexing and selecting subsets of data? The documentation uses the term indexing frequently. This term is essentially just a one-word phrase to say ‘subset selection’. I prefer the term subset selection as, again, it is more descriptive of what is actually happening. Indexing is also the term used in the official Python documentation. Focusing only on [] , .loc , and .iloc There are many ways to select subsets of data, but in this article we will only cover the usage of the square brackets ( [] ), .loc and .iloc . Collectively, they are called the indexers. These are by far the most common ways to select data. 
A different part of this Series will discuss a few methods that can be used to make subset selections. If you have a DataFrame, df , your subset selection will look something like the following: df[ ] df.loc[ ] df.iloc[ ] A real subset selection will have something inside of the square brackets. All selections in this article will take place inside of those square brackets. Notice that the square brackets also follow .loc and .iloc . All indexing in Python happens inside of these square brackets. A term for just those square brackets The term indexing operator is used to refer to the square brackets following an object. The .loc and .iloc indexers also use the indexing operator to make selections. I will use the term just the indexing operator to refer to df[] . This will distinguish it from df.loc[] and df.iloc[] . Read in data into a DataFrame with read_csv Let’s begin using pandas to read in a DataFrame, and from there, use the indexing operator by itself to select subsets of data. All the data for these tutorials are in the data directory. We will use the read_csv function to read in data into a DataFrame. We pass the path to the file as the first argument to the function. We will also use the index_col parameter to select the first column of data as the index (more on this later). >>> import pandas as pd >>> import numpy as np >>> df = pd.read_csv('data/sample_data.csv', index_col=0) >>> df Extracting the individual DataFrame components Earlier, we mentioned the three components of the DataFrame. The index, columns and data (values). We can extract each of these components into their own variables. Let’s do that and then inspect them: >>> index = df.index >>> columns = df.columns >>> values = df.values >>> index Index(['Jane', 'Niko', 'Aaron', 'Penelope', 'Dean', 'Christina', 'Cornelia'], dtype='object') >>> columns Index(['state', 'color', 'food', 'age', 'height', 'score'], dtype='object') >>> values array([['NY', 'blue', 'Steak', 30, 165, 4.6], ['TX', 'green', 'Lamb', 2, 70, 8.3], ['FL', 'red', 'Mango', 12, 120, 9.0], ['AL', 'white', 'Apple', 4, 80, 3.3], ['AK', 'gray', 'Cheese', 32, 180, 1.8], ['TX', 'black', 'Melon', 33, 172, 9.5], ['TX', 'red', 'Beans', 69, 150, 2.2]], dtype=object) Data types of the components Let’s output the type of each component to understand exactly what kind of object they are. >>> type(index) pandas.core.indexes.base.Index >>> type(columns) pandas.core.indexes.base.Index >>> type(values) numpy.ndarray Understanding these types Interestingly, both the index and the columns are the same type. They are both a pandas Index object. This object is quite powerful in itself, but for now you can just think of it as a sequence of labels for either the rows or the columns. The values are a NumPy ndarray , which stands for n-dimensional array, and is the primary container of data in the NumPy library. Pandas is built directly on top of NumPy and it's this array that is responsible for the bulk of the workload. Beginning with just the indexing operator on DataFrames We will begin our journey of selecting subsets by using just the indexing operator on a DataFrame. Its main purpose is to select a single column or multiple columns of data. Selecting a single column as a Series To select a single column of data, simply put the name of the column in-between the brackets. 
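A quick aside before we start selecting: if you don’t have data/sample_data.csv on disk, the index, columns and values printed above contain everything needed to rebuild the same DataFrame by hand. Here is a minimal sketch that uses only the values shown in that output; nothing is invented.

# Rebuild the sample DataFrame from the components printed above.
import pandas as pd

index = ['Jane', 'Niko', 'Aaron', 'Penelope', 'Dean', 'Christina', 'Cornelia']
columns = ['state', 'color', 'food', 'age', 'height', 'score']
values = [['NY', 'blue', 'Steak', 30, 165, 4.6],
          ['TX', 'green', 'Lamb', 2, 70, 8.3],
          ['FL', 'red', 'Mango', 12, 120, 9.0],
          ['AL', 'white', 'Apple', 4, 80, 3.3],
          ['AK', 'gray', 'Cheese', 32, 180, 1.8],
          ['TX', 'black', 'Melon', 33, 172, 9.5],
          ['TX', 'red', 'Beans', 69, 150, 2.2]]

df = pd.DataFrame(values, index=index, columns=columns)

With df built either way, the selections that follow behave identically.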
Let’s select the food column: >>> df['food'] Jane Steak Niko Lamb Aaron Mango Penelope Apple Dean Cheese Christina Melon Cornelia Beans Name: food, dtype: object Anatomy of a Series Selecting a single column of data returns the other pandas data container, the Series. A Series is a one-dimensional sequence of labeled data. There are two main components of a Series, the index and the data(or values). There are NO columns in a Series. The visual display of a Series is just plain text, as opposed to the nicely styled table for DataFrames. The sequence of person names on the left is the index. The sequence of food items on the right is the values. You will also notice two extra pieces of data on the bottom of the Series. The name of the Series becomes the old-column name. You will also see the data type or dtype of the Series. You can ignore both these items for now. Selecting multiple columns with just the indexing operator It’s possible to select multiple columns with just the indexing operator by passing it a list of column names. Let’s select color , food , and score : >>> df[['color', 'food', 'score']] Selecting multiple columns returns a DataFrame Selecting multiple columns returns a DataFrame. You can actually select a single column as a DataFrame with a one-item list: df[['food']] Although, this resembles the Series from above, it is technically a DataFrame, a different object. Column order doesn’t matter When selecting multiple columns, you can select them in any order that you choose. It doesn’t have to be the same order as the original DataFrame. For instance, let’s select height and color . df[['height', 'color']] Exceptions There are a couple common exceptions that arise when doing selections with just the indexing operator. If you misspell a word, you will get a KeyError If you forgot to use a list to contain multiple columns you will also get a KeyError >>> df['hight'] KeyError: 'hight' >>> df['color', 'age'] # should be: df[['color', 'age']] KeyError: ('color', 'age') Summary of just the indexing operator Its primary purpose is to select columns by the column names Select a single column as a Series by passing the column name directly to it: df['col_name'] Select multiple columns as a DataFrame by passing a list to it: df[['col_name1', 'col_name2']] to it: You actually can select rows with it, but this will not be shown here as it is confusing and not used often. Getting started with .loc The .loc indexer selects data in a different way than just the indexing operator. It can select subsets of rows or columns. It can also simultaneously select subsets of rows and columns. Most importantly, it only selects data by the LABEL of the rows and columns. Select a single row as a Series with .loc The .loc indexer will return a single row as a Series when given a single row label. Let's select the row for Niko . >>> df.loc['Niko'] state TX color green food Lamb age 2 height 70 score 8.3 Name: Niko, dtype: object We now have a Series, where the old column names are now the index labels. The name of the Series has become the old index label, Niko in this case. Select multiple rows as a DataFrame with .loc To select multiple rows, put all the row labels you want to select in a list and pass that to .loc . Let's select Niko and Penelope . >>> df.loc[['Niko', 'Penelope']] Use slice notation to select a range of rows with .loc It is possible to ‘slice’ the rows of a DataFrame with .loc by using slice notation. Slice notation uses a colon to separate start, stop and step values. 
Getting started with .loc The .loc indexer selects data in a different way than just the indexing operator. It can select subsets of rows or columns. It can also simultaneously select subsets of rows and columns. Most importantly, it only selects data by the LABEL of the rows and columns. Select a single row as a Series with .loc The .loc indexer will return a single row as a Series when given a single row label. Let's select the row for Niko . >>> df.loc['Niko'] state TX color green food Lamb age 2 height 70 score 8.3 Name: Niko, dtype: object We now have a Series, where the old column names are now the index labels. The name of the Series has become the old index label, Niko in this case. Select multiple rows as a DataFrame with .loc To select multiple rows, put all the row labels you want to select in a list and pass that to .loc . Let's select Niko and Penelope . >>> df.loc[['Niko', 'Penelope']] Use slice notation to select a range of rows with .loc It is possible to 'slice' the rows of a DataFrame with .loc by using slice notation. Slice notation uses a colon to separate start, stop and step values. For instance, we can select all the rows from Niko through Dean like this: >>> df.loc['Niko':'Dean'] .loc includes the last value with slice notation Notice that the row labeled with Dean was kept. In other data containers such as Python lists, the last value is excluded. Other slices You can use slice notation similarly to how you use it with lists. Let's slice from the beginning through Aaron : >>> df.loc[:'Aaron'] Slice from Niko to Christina stepping by 2: >>> df.loc['Niko':'Christina':2] Slice from Dean to the end: >>> df.loc['Dean':] Selecting rows and columns simultaneously with .loc Unlike just the indexing operator, it is possible to select rows and columns simultaneously with .loc . You do it by separating your row and column selections by a comma. It will look something like this: >>> df.loc[row_selection, column_selection] Select two rows and three columns For instance, if we wanted to select the rows Dean and Cornelia along with the columns age , state and score we would do this: >>> df.loc[['Dean', 'Cornelia'], ['age', 'state', 'score']] Use any combination of selections for either rows or columns with .loc Row or column selections can be any of the following, as we have already seen: A single label A list of labels A slice with labels We can use any of these three for either row or column selections with .loc . Let's see some examples. Let's select two rows and a single column: >>> df.loc[['Dean', 'Aaron'], 'food'] Dean Cheese Aaron Mango Name: food, dtype: object Select a slice of rows and a list of columns: >>> df.loc['Jane':'Penelope', ['state', 'color']] Select a single row and a single column. This returns a scalar value. >>> df.loc['Jane', 'age'] 30 Select a slice of rows and columns >>> df.loc[:'Dean', 'height':] Selecting all of the rows and some columns It is possible to select all of the rows by using a single colon. You can then select columns as normal: >>> df.loc[:, ['food', 'color']] You can also use this notation to select some rows and all of the columns: >>> df.loc[['Penelope','Cornelia'], :] But it isn't necessary, as we have seen, so you can leave out that last colon: >>> df.loc[['Penelope','Cornelia']] Assign row and column selections to variables It might be easier to assign row and column selections to variables before you use .loc . This is useful if you are selecting many rows or columns: >>> rows = ['Jane', 'Niko', 'Dean', 'Penelope', 'Christina'] >>> cols = ['state', 'age', 'height', 'score'] >>> df.loc[rows, cols] Summary of .loc Only uses labels. Can select rows and columns simultaneously. Selection can be a single label, a list of labels or a slice of labels. Put a comma between row and column selections. If you are enjoying this article, consider purchasing the All Access Pass which includes all my current and future material for one low price. Getting started with .iloc The .iloc indexer is very similar to .loc but only uses integer locations to make its selections. The word .iloc itself stands for integer location, so that should help with remembering what it does.
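If you are ever unsure which integer location belongs to which row, you can enumerate the index. This is a small sketch, not from the original article, but it uses only the df built above:
>>> list(enumerate(df.index))
[(0, 'Jane'), (1, 'Niko'), (2, 'Aaron'), (3, 'Penelope'), (4, 'Dean'), (5, 'Christina'), (6, 'Cornelia')]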
Selecting a single row with .iloc By passing a single integer to .iloc , it will select one row as a Series: >>> df.iloc[3] state AL color white food Apple age 4 height 80 score 3.3 Name: Penelope, dtype: object Selecting multiple rows with .iloc Use a list of integers to select multiple rows: >>> df.iloc[[5, 2, 4]] # remember, don't do df.iloc[5, 2, 4] Use slice notation to select a range of rows with .iloc Slice notation works just like a list in this instance and is exclusive of the last element. >>> df.iloc[3:5] Select from the 3rd position to the end: >>> df.iloc[3:] Select from the 3rd position to the end, stepping by 2: >>> df.iloc[3::2] Master Python, Data Science and Machine Learning Immerse yourself in my comprehensive path for mastering data science and machine learning with Python. Purchase the All Access Pass to get lifetime access to all current and future courses. Some of the courses it contains: Exercise Python — A comprehensive introduction to Python (200+ pages, 100+ exercises) Master Data Analysis with Python — The most comprehensive course available to learn pandas. (800+ pages and 350+ exercises) Master Machine Learning with Python — A deep dive into doing machine learning with scikit-learn, constantly updated to showcase the latest and greatest tools. (300+ pages) Get the All Access Pass now! Selecting rows and columns simultaneously with .iloc Just like with .loc , any combination of a single integer, lists of integers or slices can be used to select rows and columns simultaneously. Just remember to separate the selections with a comma. Select two rows and two columns: >>> df.iloc[[2,3], [0, 4]] Select a slice of the rows and two columns: >>> df.iloc[3:6, [1, 4]] Select slices for both: >>> df.iloc[2:5, 2:5] Select a single row and column: >>> df.iloc[0, 2] 'Steak' Select all the rows and a single column: >>> df.iloc[:, 5] Jane 4.6 Niko 8.3 Aaron 9.0 Penelope 3.3 Dean 1.8 Christina 9.5 Cornelia 2.2 Name: score, dtype: float64
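One more .iloc convenience that mirrors Python lists, and which the examples above do not show: negative integers count from the end. A quick sketch:
>>> df.iloc[-1]     # the last row (Cornelia) as a Series
>>> df.iloc[-2:]    # the last two rows as a DataFrame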
Deprecation of .ix Early in the development of pandas, there existed another indexer, ix . This indexer was capable of selecting both by label and by integer location. While it was versatile, it caused lots of confusion because it was not explicit. Sometimes integers can also be labels for rows or columns. Thus there were instances where it was ambiguous. You can still call .ix in older versions, but it has been deprecated (and removed entirely in later versions of pandas), so please never use it. Selecting subsets of Series We can also, of course, do subset selection with a Series. Earlier I recommended using just the indexing operator for column selection on a DataFrame. Since Series do not have columns, I suggest using only .loc and .iloc . You can use just the indexing operator, but it's ambiguous as it can take both labels and integers. I will come back to this at the end of the tutorial. Typically, you will create a Series by selecting a single column from a DataFrame. Let's select the food column: >>> food = df['food'] >>> food Jane Steak Niko Lamb Aaron Mango Penelope Apple Dean Cheese Christina Melon Cornelia Beans Name: food, dtype: object Series selection with .loc Series selection with .loc is quite simple, since we are only dealing with a single dimension. You can again use a single row label, a list of row labels or a slice of row labels to make your selection. Let's see several examples. Let's select a single value: >>> food.loc['Aaron'] 'Mango' Select three different values. This returns a Series: >>> food.loc[['Dean', 'Niko', 'Cornelia']] Dean Cheese Niko Lamb Cornelia Beans Name: food, dtype: object Slice from Niko to Christina - this is inclusive of the last index: >>> food.loc['Niko':'Christina'] Niko Lamb Aaron Mango Penelope Apple Dean Cheese Christina Melon Name: food, dtype: object Slice from Penelope to the end: >>> food.loc['Penelope':] Penelope Apple Dean Cheese Christina Melon Cornelia Beans Name: food, dtype: object Select a single value in a list, which returns a Series: >>> food.loc[['Aaron']] Aaron Mango Name: food, dtype: object Series selection with .iloc Series subset selection with .iloc happens similarly to .loc except it uses integer location. You can use a single integer, a list of integers or a slice of integers. Let's see some examples. Select a single value: >>> food.iloc[0] 'Steak' Use a list of integers to select multiple values: >>> food.iloc[[4, 1, 3]] Dean Cheese Niko Lamb Penelope Apple Name: food, dtype: object Use a slice — this is exclusive of the last integer: >>> food.iloc[4:6] Dean Cheese Christina Melon Name: food, dtype: object Comparison to Python lists and dictionaries It may be helpful to compare pandas' ability to make selections by label and integer location to that of Python lists and dictionaries. Python lists allow for selection of data only through integer location. You can use a single integer or slice notation to make the selection but NOT a list of integers. Let's see examples of subset selection of lists using integers: >>> some_list = ['a', 'two', 10, 4, 0, 'asdf', 'mgmt', 434, 99] >>> some_list[5] 'asdf' >>> some_list[-1] 99 >>> some_list[:4] ['a', 'two', 10, 4] >>> some_list[3:] [4, 0, 'asdf', 'mgmt', 434, 99] >>> some_list[2:6:3] [10, 'asdf'] Selection by label with Python dictionaries All values in a dictionary are labeled by a key. We use this key to make single selections. Dictionaries only allow selection with a single label. Slices and lists of labels are not allowed. >>> d = {'a':1, 'b':2, 't':20, 'z':26, 'A':27} >>> d['a'] 1 >>> d['A'] 27 Pandas has the power of lists and dictionaries DataFrames and Series are able to make selections with integers like a list and with labels like a dictionary.
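You can see both behaviors side by side on the food Series: the same value is reachable by its label, like a dictionary key, or by its integer location, like a list index. A small check (not in the original article):
>>> food.loc['Dean']    # by label, like a dictionary
'Cheese'
>>> food.iloc[4]        # by integer location, like a list
'Cheese'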
Extra Topics There are a few more items that are important and belong in this tutorial and will be mentioned now. Using just the indexing operator to select rows from a DataFrame — Confusing! Above, I used just the indexing operator to select a column or columns from a DataFrame. But, it can also be used to select rows using a slice. This behavior is very confusing in my opinion. The entire operation changes completely when a slice is passed. Let's use an integer slice as our first example: >>> df[3:6] To add to this confusion, you can slice by labels as well. >>> df['Aaron':'Christina'] I recommend not doing this! This feature is not deprecated, and it is completely up to you whether you wish to use it. But I highly prefer not to select rows in this manner, as it can be ambiguous, especially if you have integers in your index. Using .iloc and .loc is explicit and clearly tells the person reading the code what is going to happen. Let's rewrite the above using .iloc and .loc . >>> df.iloc[3:6] # More explicit than df[3:6] >>> df.loc['Aaron':'Christina'] Cannot simultaneously select rows and columns with [] An exception will be raised if you try and select rows and columns simultaneously with just the indexing operator. You must use .loc or .iloc to do so. >>> df[3:6, 'Aaron':'Christina'] TypeError: unhashable type: 'slice' Using just the indexing operator to select rows from a Series — Confusing! You can also use just the indexing operator with a Series. Again, this is confusing because it can accept integers or labels. Let's see some examples: >>> food Jane Steak Niko Lamb Aaron Mango Penelope Apple Dean Cheese Christina Melon Cornelia Beans Name: food, dtype: object >>> food[2:4] Aaron Mango Penelope Apple Name: food, dtype: object >>> food['Niko':'Dean'] Niko Lamb Aaron Mango Penelope Apple Dean Cheese Name: food, dtype: object Since Series don't have columns, you can use a single label or a list of labels to make selections as well: >>> food['Dean'] 'Cheese' >>> food[['Dean', 'Christina', 'Aaron']] Dean Cheese Christina Melon Aaron Mango Name: food, dtype: object Again, I recommend against doing this and suggest always using .loc or .iloc .
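The ambiguity is easiest to see when a Series has integers in its index. The Series below is hypothetical (it is not part of the tutorial's dataset), but in current versions of pandas the label 0 and the position 0 point at different values, which is exactly why the explicit indexers are safer:
>>> s = pd.Series(['a', 'b', 'c'], index=[2, 1, 0])
>>> s[0]         # just the indexing operator treats 0 as a LABEL
'c'
>>> s.iloc[0]    # integer position 0
'a'
>>> s.loc[0]     # explicit label 0
'c'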
Importing data without choosing an index column We imported data by choosing the first column to be the index with the index_col parameter of the read_csv function. This is not typically how most DataFrames are read into pandas. Usually, all the columns in the csv file become DataFrame columns. Pandas will use the integers 0 to n-1 as the labels. See the example data below with a slightly different dataset: >>> df2 = pd.read_csv('data/sample_data2.csv') >>> df2 The default RangeIndex If you don't specify a column to be the index when first reading in the data, pandas will use the integers 0 to n-1 as the index. This technically creates a RangeIndex object. Let's take a look at it. >>> df2.index RangeIndex(start=0, stop=7, step=1) This object is similar to Python range objects. Let's create one: >>> range(7) range(0, 7) Converting both of these objects to a list produces the exact same thing: >>> list(df2.index) [0, 1, 2, 3, 4, 5, 6] >>> list(range(7)) [0, 1, 2, 3, 4, 5, 6] For now, it's not at all important that you have a RangeIndex . Selections from it happen just the same with .loc and .iloc . Let's look at some examples. >>> df2.loc[[2, 4, 5], ['food', 'color']] >>> df2.iloc[[2, 4, 5], [3, 2]] There is a subtle difference when using a slice: .iloc excludes the last value, while .loc includes it: >>> df2.iloc[:3] >>> df2.loc[:3] Setting an index from a column after reading in data It is common to see pandas code that reads in a DataFrame with a RangeIndex and then sets the index to be one of the columns. This is typically done with the set_index method: >>> df2_idx = df2.set_index('Names') >>> df2_idx The index has a name Notice that this DataFrame does not look exactly like our first one from the very top of this tutorial. Directly above the index is the bold-faced word Names . This is technically the name of the index. Our original DataFrame had no name for its index. You can ignore this small detail for now. Subset selections will happen in the same fashion. DataFrame column selection with dot notation Pandas allows you to select a single column as a Series by using dot notation. This is also referred to as attribute access. You simply place the name of the column, without quotes, after the DataFrame and a dot, like this: >>> df.state Jane NY Niko TX Aaron FL Penelope AL Dean AK Christina TX Cornelia TX Name: state, dtype: object >>> df.age Jane 30 Niko 2 Aaron 12 Penelope 4 Dean 32 Christina 33 Cornelia 69 Name: age, dtype: int64 Pros and cons when selecting columns by attribute access The best benefit of selecting columns like this is that you get help when chaining methods after selection. For instance, if you place another dot after the column name and press tab, a list of all the Series methods will appear in a pop-up menu. This help disappears when you use just the indexing operator. The biggest drawback is that you cannot select columns that have spaces or other characters that are not valid as Python identifiers (variable names). Selecting the same column twice? This is rather peculiar, but you can actually select the same column more than once: df[['age', 'age', 'age']] Summary of Part 1 We covered an incredible amount of ground. Let's summarize all the main points: Before learning pandas, ensure you have the fundamentals of Python. Always refer to the documentation when learning new pandas operations. The DataFrame and the Series are the containers of data. A DataFrame is two-dimensional, tabular data. A Series is a single dimension of data. The three components of a DataFrame are the index, the columns and the data (or values). Each row and column of the DataFrame is referenced by both a label and an integer location. There are three primary ways to select subsets from a DataFrame — [] , .loc and .iloc , and I use the term just the indexing operator to refer to [] immediately following a DataFrame/Series. Just the indexing operator's primary purpose is to select a column or columns from a DataFrame. Passing a single column name to just the indexing operator returns a single column of data as a Series. Passing multiple columns in a list to just the indexing operator returns a DataFrame. A Series has two components, the index and the data (values). It has no columns. .loc makes selections only by label. .loc can simultaneously select rows and columns. .loc can make selections with either a single label, a list of labels, or a slice of labels. .loc makes row selections first followed by column selections: df.loc[row_selection, col_selection] .iloc is analogous to .loc but uses only integer location to refer to rows or columns. .ix is deprecated and should never be used. .loc and .iloc work the same for Series except they only select based on the index as there are no columns. Pandas combines the power of Python lists (selection via integer location) and dictionaries (selection by label). You can use just the indexing operator to select rows from a DataFrame, but I recommend against this and instead sticking with the explicit .loc and .iloc . Normally data is imported without setting an index; use the set_index method to use a column as an index. You can select a single column as a Series from a DataFrame with dot notation. Way more to the story This is only part 1 of the series, so there is much more to cover on how to select subsets of data in pandas. Some of the explanations in this part will be expanded to include other possibilities.
https://medium.com/dunder-data/selecting-subsets-of-data-in-pandas-6fcd0170be9c
['Ted Petrou']
2020-11-25 14:39:06.469000+00:00
['Python', 'Python Pandas', 'Pydata', 'Jupyter', 'Data Science']
Bottomless Media: How Screen-Scrolling Fuels An Anxious World
We know how we’re distracted. But what are we distracted from? There will always be new ways to distract human attention. It is not the new technology itself, says Furedi, but our inability to create meaning from the innovations and changes that unfold around us, of which we are inextricably part. We confuse our inability to make meaning with distractibility. Distractibility — aided by scrolling and its many digital derivatives, like autoplay, compulsive notification checking and multi-tab, algorithmized browsing —is cyclical. Not checking the phone results in anxiety, and anxiety is relieved by checking the phone. The cycle is maintained not only by the dopamine-fueled reward-seeking approach, but also by anxiety avoidance. But what are we anxious about? You could say it’s the state of the world — divisive politics, systemic racism, climate change, terrorism, economic slumps, widening global inequality, surveillance capitalism, the pandemic. It must be all that, right? Because if you’re not concerned (or angered, dismayed, disheartened, etc.) about at least one of those issues, you must be dangerously out of touch and lacking empathy for the human condition, right? Most of us are anxious and uncertain; most of us have, at some point this year, felt bleak about the future. But perhaps the source of our anxiety is simpler, far more shallow than what we want to believe. We want these issues to give us proof of our own moral goodness; being aroused in any way about them is a sign that we care about something bigger than ourselves. But if you, like me, live in a developed, democratic society where your basic needs are taken care of, the only tangible thing you’re anxious about on a moment-to-moment basis is the fact that your phone is unattended. There’s nothing else in your immediate environment that justifies your acute anxiety! It’s difficult to recognize that the source of anxiety might be closer to us than we think and not necessarily attributed to grand-scale issues like climate change. We should be willing to consider that our moment-to-moment dis-ease might be related to our devices’ presence in our environments. That’s it. Photo by Jonatan Pie on Unsplash The Curated Present Modern humans (important caveat: I refer to those living in societies where food, shelter and basic medical care are available and there is no immediate threat to life) live in a world where many actions we take on a given day don’t yield any immediate benefits. We take these actions in the hope of a future reward. For example, you take a walk every evening to maintain heart health and protect yourself against developing osteoporosis when you’re older. We live with constant awareness of the future, knowing that what we do today will have an effect later in life, whether beneficial or deleterious. Psychologists call this the delayed-return environment. Historically, humans existed more-or-less for the moment, each day a cutthroat game of survival to fulfil basic needs and evade immediate dangers. Today, we’re almost guaranteed a longer life. Provided life isn’t abruptly terminated by an accident or unexpected illness, and that we take the best self-preserving actions along the way, most of us can expect to live well into older age. By our very natures we don’t focus entirely on the present. We’re always conscious of the future and the uncertainty it brings. When our minds wander there, we need something to rein them in. So we start scrolling. 
Instantly, the mind anchors itself to a tangible present, a present that takes the form of live, real-time digital stuff, the distraction-fodder of that endless reel that yanks your wayfaring, anxious mind away from the indeterminate future to a numb, charming version of the present filled with memes and echo chambers and happy notifications that pop up cartoonishly whenever you log in. This curated present is far easier to deal with. Photo by Cristian Dina from Pexels There is an invisible battle underway: the battle for directing and re-directing human attention. This animated short, featuring lecture material from former Google employee James Williams, highlights the battle for human attention as the "defining political and moral challenge of our time". You and I are caught up in (and most likely contributing to) the endless spool of stuff that runs across our screens, an ecosystem of content which algorithms curate and on which corporate advertising depends. Whenever you reach for your device to relieve the mounting anxiety, you become distracted from thinking about why you're anxious in the first place. Your awareness shifts away from truly contemplating the disproportionate power this heavily designed, algorithm-driven technology has over your life, which, if you're living in a free society, you might consider relatively autonomous and endowed with all the liberties expected of a democratic society. Distracted by the perpetual digital tombola concealed inside your phone, you don't get a chance to consider that perhaps it is this very libertarian existence, and the delayed-return environment that accompanies it, that's at the heart of your anxiety. Let the Algorithm Take You As a member of the generation who grew up on the cusp of traditional and digital media (as in, I listened to tapes, watched films on VHS and was part of the first cohort of young people to use Facebook), I've noticed a fundamental change in the way we use the Internet today. In the early days of connectivity, we went online with a specific question in mind, searching for an answer. The Internet itself was limited: we had dial-up, which cost a fortune, so I was only allowed thirty minutes a day. I had to make the most of that time. Infinite, leisurely scrolling wasn't possible because we didn't yet have smart devices with responsive interfaces. Photo by Michael Dziedzic on Unsplash Our new behaviours aren't quite as goal-directed. Consider how often you might open Netflix, not really intending to watch anything specific, and let the platform show you what you need. Or you're idly scrolling through your Facebook feed, not really searching for anything in particular, but hoping that something you need, be it informational, entertaining, distracting, salacious, might appear. Through infinite scrolling, we resign ourselves to the algorithm without realizing it. Some of the biggest platforms on the planet use an infinite scroll mechanism, an endlessly loading feed of content that constantly updates itself, tailored to the user by powerful machine learning. Facebook, Twitter and Instagram all rely on the infinite scrolling principle to keep their users scrolling endlessly down what might be equated to hundreds of miles of content. If you were to unfurl it and lay it all side by side, it may well wrap itself several times around the Earth. Scrolling isn't so much navigation as it is aimless wandering, stumbling through a sea of content.
Unlike its namesake — the ancient papyrus scrolls used by Roman officials to record the activities of the Empire — there is no true end to the digital scroll, nor is there intention or goal-directedness. We don't decide what we see next as we move our thumbs obediently up the screen, pushing the infinite, instantly-gratifying Internet up towards the top of our phones. The algorithm does. Media Without Boundaries When I was younger, media had boundaries. You could physically touch its vessels: vinyl LPs had a limited playtime; audio cassettes had sticky, magnetic tape that seemed to go on forever but eventually reached an end; and VHS tapes had a finite amount of recording time that necessitated making that difficult decision about whether or not to tape over something because you'd run out of space. Before internet streaming, going to the cinema meant one ticket to see one movie. Television programming ran on its own schedule, not ours. When my favorite show aired once a week, I spent the preceding days in giddy anticipation of the next instalment. Even when it came to that eagerly awaited slot, we endured commercial breaks every fifteen minutes. Photo by Shelly Still from Pexels As annoying as these interruptions were, they forced us off the couch for short intervals to do something else, like finish a quick chore or speak to a family member. Television viewing simply wasn't on demand like it is today. Now there's no need to find something else to do. Our media consumption is rarely interrupted, nor is there an end in sight. Streaming services give us a bottomless supply of media. A subscription to one is a ticket to everything. Anticipation can't truly exist when the time between desire and wish fulfilment is so brief. Platforms like Netflix have eliminated the need to wait for the next instalment of a story. With entire seasons uploaded at once, there's really no reason not to watch it all in a single sitting. No sooner have you finished watching a series than there's a tantalizing new feature uploaded to the platform, teasing you to click Play. Keen to escape the anxiety for which you can't easily identify a true source, you sit back and allow autoplay to take over your life for the next couple of hours. One of the fundamental differences between media consumption a generation ago and media consumption today is the knowledge of uninterrupted supply. The internet hosts a virtually unlimited amount of content. It's available all the time, anywhere. But the infinite, uninterrupted supply of media we have access to isn't the important part — it's our knowledge of it. We know that behind every click, at the edge of the fold just behind what's visible, after every swipe, behind that 'next episode' button is more content, and more rewards. It's a door behind a door behind a door, and we can't stop opening them all. The guaranteed presence of something novel behind every click is the binge factor that's built into the very fabric of Netflix's user interface.
https://medium.com/digital-diplomacy/bottomless-media-how-screen-scrolling-fuels-an-anxious-world-7b75210abe41
['Aimee Dyamond']
2020-11-20 17:33:02.846000+00:00
['Digital', 'Attention Economy', 'UX', 'Psychology', 'Human Behavior']
Psych Ward Reviews Update 1/11/17: Becoming a more Comprehensive Site
Psych Ward Reviews Update 1/11/17: Becoming a more Comprehensive Site Exciting changes are happening: submission options for news articles on hospitals, Psych Ward Reviews in the news, and more! Because I want to make the site more comprehensive and include as much data as possible - thus giving people more information to make decisions, and furthering the goal of holding hospitals accountable - I am initiating the following changes: Submission Options Now Include Hospital News Articles If you find a news article on any psychiatric hospitals - including but not limited to ones you may have been at - you may now submit it for our collection of articles! Upcoming Collection of News Articles In addition to taking submissions, I'm adding articles I find and also articles collected courtesy of Morgan Shields and their team - authors of the study mentioned in my Establishment article assessing the quality of inpatient services. They have collected a great many articles. Article collection (page link) will be both for hospitals with reviews on the site and those without. Ideas: Possible Upcoming Information on Core Quality Measures Also possibly upcoming is a data table on core quality indicators for a large number of psychiatric hospitals in the United States. Many, if not most, of the reviewed hospitals will be on this data table - and others that are not yet reviewed will be on there if this comes to fruition. Some Site Reformatting I have added the pages for submitting and news article collection, and also have added some pages and links that have all reviews listed, and all news articles listed. I have modified the Categories and Tags page to be Searching the Site. It has an updated explanation of how I use the categories and tags, and an updated explanation of searching the site. It no longer has the list of hospitals - that is now under "View All Reviews." Upcoming Articles Regarding Psych Ward Reviews Psych Ward Reviews will soon be featured in another multimedia outlet, and also possibly featured elsewhere on the web. More info to come!
https://medium.com/psych-ward-experiences/psych-ward-reviews-update-1-11-17-becoming-a-more-comprehensive-site-98076d557688
['Kit Mead']
2017-01-11 22:02:25.857000+00:00
['Mental Health', 'Disability', 'Psychiatry']
My Husband Sucks and This Inspires Me
It looks like I'm that kind of entrepreneur, after all — goes big, or goes home; I have decided, in less than a month on Medium, to start my first publication. My mental process was quite straightforward. When I published my first story about my husband, as a calming mechanism for all the frustration I have collected since our last fight, I never thought there would be that many people — women and men alike — that would be curious to read about it. There's no wonder, then, the look on my face when I realized that this piece, which I wrote in less than an hour, has reached 1.5k in views and 1k in reads within two weeks. Personally, these numbers depict a pseudo-viral piece that should receive some attention. OK, at least from me. Consequently, I did a bit of experimentation with other topics too, and to my surprise, even though they were way more comprehensive and studied, they received less interest. Writing in different publications has also helped me understand that the number of followers a publication has does not guarantee that a story will have a wonderful reach. Therefore, what started to look like a pattern to me, among my published work, was that whenever I was complaining about my husband, it seemed to bring more interested readers than when I was writing about fitness how-tos, writing, or mental health. That was the turning point, where I was able to put my finger on it: people love strong emotions. They can easily relate to them. We all need to vent once in a while — including about things that might look childish from the outside. For example, why are you complaining that your boss doesn't acknowledge your achievements and offer you that promotion (some would say that you should just be happy you have a job these days), or why are you moaning about your kid being all over you (some women would be happy to have a child), not to mention why would you whine about your husband (you can always divorce his ass) — well, it's not that easy. You just don't call it quits whenever you please, because…well, we need to be reasonable and think about the problems in a context, rather than make it exclusively about us. Still, your emotions of frustration, disappointment, and anger are very real. They should be dealt with in a healthy way. And what better way to do it than to shout it out in writing. Give a virtual punch in the face with a knockout story. Add a bit of sarcasm and a bit of self-humor and you're up for a successful recipe. This is your safe space, dear Writer. Here you can vent. Here you can just spill it all out so you don't have to punch the asshole in the face and deal with the consequences. Here you can write in your name, under a pen name, or submit anonymous stories. I bet there are thousands of Readers out there that can easily relate to your raw feelings. And during these tough times, we all need a good support system, because not everyone can afford mental health support. So, without further ado, I invite you to enjoy my publication — The Venting Machine.
https://medium.com/the-venting-machine/my-husband-sucks-and-this-inspires-me-cabfb107ae7
['Eir Thunderbird']
2020-12-14 20:58:31.285000+00:00
['Mental Health', 'Husband Wife Problem', 'Venting Club', 'Husband Wife Dispute', 'Venting Machine']
Hacking Culture > Hacking Growth
In just a few short years, Fab went from a $1 billion valuation to a $15 million sale. Across industries, success is more unpredictable than ever. When it comes to cultural products, things that worked in the past often do not work in the future, the sheer number of Avengers sequels notwithstanding. But despite the inherent unpredictability of our tastes and the complex way they interact, VCs still put a heavy bet on pattern recognition. These patterns — be it a proprietary product, low-cost customer acquisition tactics, or the ability to reach scale fast — are hardly reliable predictors of success. For example, Harry's proprietary product, manufactured in its German factory, was a great initial way to differentiate from Gillette's low-quality but expensive razors. But superior product quality has since become table stakes in the shaving market, with a number of startups all offering the same key features. Five years and 375 million VC dollars later, Harry's has only 5% market share in the traditional retail sales market. It is a distant third in the online manual shave market. Not until Walmart — ironically the retailer that Harry's DTC model set out to disrupt — provided its massive distribution muscle did Harry's business start to shift. To stay competitive in this mass market, Harry's now needs to worry about shelf space and brand marketing — just like Gillette. Dollar Shave Club, with 21% of the online market share, was not profitable when Unilever bought it in 2016. Its VC-beloved debut online video has been viewed more than 25 million times since 2012. Social media quickly made a lot of people aware of Dollar Shave Club, but it also undid its staying power. The main lesson is that wide awareness doesn't mean conversion and that fast user growth doesn't mean profitability. To hack growth, startups have to hack culture first. In addition to the usual signals, VCs should look at whether a company has roots in a subculture or trend. A subculture is made up of people who are more informed and passionate about a topic than anyone else. They are likely to be beta-testers, source material, and advocates for a new product or service. Cycling brand Rapha started from cycling obsessives. Apparel brand Patagonia started from the subculture of social responsibility. Deep subculture entrenchment ensures that a company can maintain and enhance its difference as it scales. Long-term defensibility has more to do with whether a company can believably connect with a community through the shared things they like than with whether it has a proprietary product or acquisition channels. Success also has to do with what the Japanese call kuuki wo yomu, or reading the atmosphere. In the October 2013 article titled "Yes, Real Men Drink Beer and Use Skin Moisturizer," Bloomberg quotes Mintel's data on the 5-year rise in the global sales of personal-care merchandise geared to men. Harry's was founded earlier that year, Dollar Shave Club two years prior. Both of them capitalized on the shift in the culture of modern masculinity, but neither of them invented it. The shift was already happening. As sociologist Duncan Watts notes in his research on social influence, if a society is ready to embrace a trend, almost anyone can start one — and if it isn't, then almost no one can. The success of Harry's or Dollar Shave Club didn't have much to do with a spiffy video or with German factory-produced razors.
It had more to do with how susceptible men already were to the idea of grooming and how easily persuaded they were to invest in it. Social influence is often mistaken for disruption. As the dynamics of how trends spread shift from brands, media, and retailers pushing ideas to the mass market to the Internet's networks of niches and taste communities, both startups and VCs have to consider the social processes that ultimately define the success of their inventions. In addition to engineering products and services, startups then need to engineer social influence in their market. The fastest way is to piggyback on already existing social influence, and amplify it through a go-to-market strategy that emphasizes social activity among a company's initial following. This social activity then serves as an ad for a product or service aimed at the mass audience. Luggage brand Away's initial community of travelers — and their stories — became an ad for its products; rides of Rapha's Cycling Clubs are the ad for Rapha's gear. Social activity in a market accumulates social capital. How a social currency is going to be created and exchanged is an inherent part of the business plan. It's a business' core value unit, and whether a company has the potential to build and trade in social currency should become part of VCs' evaluative criteria. Beauty brand Glossier's currency is the beauty preferences of its fans. Glossier's currency is so strong that this brand is now creating an entire marketplace around it. Social currency builds scale, defensibility, and network effects. To prevent social currency from being devalued due to reverse network effects, companies need to maintain and grow their distinction as they scale. The best way to do this is through product and service diversification. A brand is an umbrella for a portfolio of unique products. Streetwear brand Supreme mastered the art of distinction, with a large part of its audience owning unique brand products and only a limited number of people owning the exact same item. Product diversification increases the number of bets, reduces risk, preserves social currency, and organizes a company around the inherent unpredictability of people's tastes. The ultimate irony of the popular disruption narratives is that they venerate a deeply anti-social attitude. They celebrate an outsider and a renegade who "moves fast and breaks things." But without the social influence that creates the susceptible mood and allows new products, services, and ideas to spread, there is no "disruption." Instead of applauding the world's outliers, we should direct our attention to the society that makes them thrive. There should be a sociologist among engineers.
https://andjelicaaa.medium.com/hacking-culture-hacking-growth-a0cbf22917cf
['Ana Andjelic']
2019-10-28 01:38:17.680000+00:00
['Strategy', 'Growth', 'Culture', 'Startup', 'Hacking']
Submission Guidelines
We DON'T own the article you submit, so you can remove it whenever you want, for whatever reason. But we hope you'll contact us to discuss your decision before taking that drastic step! Just be courteous. It's only good manners if we improved your copy with extensive rewrites, helped with feedback, or gave you valuable suggestions. (We aren't here to give free assistance and support to writers who quickly remove their work and use it elsewhere.) If we created bespoke graphics for your article when it was part of our site, these must NOT remain as part of your article if removed. (If they are, we'll kick up a stink with Medium's administrators.) While your work is hosted by us, you agree to let it be used for marketing purposes, which is a mutually beneficial promotion for the site and yourself. Writers have complete control over whether or not their submission is only accessible behind Medium's paywall (more details on that in the section below). We strongly advise you don't keep it behind this paywall forever, so weigh up the pros and cons of keeping something paywalled for longer than 14 days. If your work is making you money even months after publication, obviously keep it there! We may add bespoke graphics, logos, or links to other content within your article, to encourage readers to recommend and share your work and other relevant content. Work imported into Medium from an external source will automatically include the following footer: Originally published at [link to your site or blog]. Payment & the Medium Partner Program (MPP) We can't pay anyone directly for their submissions, but you can enrol in Medium's Partner Program. This means you can "meter" your article behind a paywall where only certain people can access it: "Medium members". Subscribers who pay $5 per month for paywalled content and other privileges. Anyone sent a special "Friend Link" to access a paywalled article without requiring a Medium subscription. Anyone accessing an article through Twitter. This means authors are paid each calendar month based on an article's engagement (i.e. how long a paid subscriber spent actually reading your submission). 'Claps' no longer affect payments as much; they're just a way to show appreciation, but leaving a response (comment) perhaps helps a little. It's a bit of a grey area. The publication owner is NOT paid a percentage of what an article makes, currently, despite often having a lot of input with editing and formatting. Hopefully, this will change soon, to help us cover the costs of running www.framerated.co.uk, together with compensating us for the added work editing, providing feedback, and promoting online. (What about it Ev Williams? Fair's fair!) Authors are paid by Medium each calendar month, but this requires signing up for a Stripe account. Stripe's the company Medium has partnered with to manage payouts to each user's bank account. You don't have to be a paying Medium member to set this up, but we recommend you consider enrolling in the MPP because members seem to have work curated more often. (Curation is when Medium staff select your work for promotion on a topic page, which can bring it to a wider audience. This makes it more likely your work will be engaged with, thus earning you more money!) Please note that not every country can participate in the MPP and Stripe isn't available everywhere either, so some writers simply can't be paid at this time. A handy list of countries where Stripe operates can be read here.
We're happy to accept work from writers who don't want to enrol in the MPP, of course, but you won't be compensated financially by us right now. Regular Contributions: While payment isn't guaranteed because it mainly relies on readers engaging with your monetised article in large numbers, being a regular contributor does give you some perks when joining Frame Rated:
https://medium.com/framerated/submission-guidelines-bd6098e2930
['Dan Owen']
2020-10-29 10:52:08.941000+00:00
['Features', 'Submission', 'Film', 'Criticism', 'Writing']
Mother Earth has a Rhythm
Mother Earth has a Rhythm Poem Mother Earth has a rhythm that must not be disturbed a distant drumming heartbeat that often goes unheard. We’re chewing and choking on our piece of Heaven and leaving scars on the land. We’re snatching and grabbing more than our share and we all got the blood on our hands. It’s an over-consumption an awful malfunction and we’re digging our own grave. We cannot create a more perfect place than what is ours to save.
https://medium.com/driftwood-chronicle/mother-earth-has-a-rhythm-d251a826c1b7
['Tauna Pierce']
2016-11-18 14:01:55.628000+00:00
['Environmental Issues', 'Environment', 'Making A Difference', 'Poetry And Prose', 'Poetry']
Designing Anticipated User Experiences
Anticipatory Design is possibly the next big leap within the field of Experience Design. "Design that is one step ahead," as Shapiro refers to it. This sounds amazing, but where does it lead us? And how will it affect our relationship with technology? I've dedicated my Master's thesis to this topic to identify both the ethical and the design challenges that come with the development of predictive UX and the application of Anticipatory Design as a design pattern, with the overarching question "How Anticipatory Design might challenge our relationship with technology". A Future Without Choice Anticipatory Design is an upcoming design pattern within the field of predictive user experiences (UX). The premise behind this pattern is to reduce users' cognitive load by making decisions on their behalf. Despite its promise, little research has been done into the possible implications that may come with Anticipatory Design and predictive user experiences. Ethical challenges like data, privacy and experience bubbles could inhibit the development of predictive UX. We're moving towards a future with ambient technology, smart operating systems and anticipated experiences. Google Home, Alexa, Siri and Cortana are all intelligent personal assistants that learn from your behavior, patterns and data and will likely anticipate your needs proactively in the near future. Anticipated user experiences are a promising development that releases us from our decision fatigue. With the approximately 20,000 decisions we make on an average day, most of us are suffering from it. Less Choice, More Automation Anticipatory Design is a design pattern that revolves around learning (Internet of Things), prediction (Machine Learning) and anticipation (UX Design). Anticipatory Design Mix Smart technology within the Internet of Things learns by observing, while our data is interpreted by machine learning algorithms. UX design is crucial for delivering a seamless anticipated experience that takes users away from technology. Anticipatory Design only works when all three actors are well aligned and effectively used. Anticipatory Design as a design principle is already used in quite a few products without us being actively aware of it. Products like Nest, Netflix and Amazon's Echo are good examples of how products learn, adjust and anticipate based on the user's data. 5 Design Considerations Over the past few months I've interviewed several experts in the field of UX and A.I. to investigate what challenges lie ahead and what considerations there are to make. The following 5 design considerations were distilled: 1. Design Against the Experience Bubble We saw what happened with Trump: the filter bubble is real and most of us circle around in our own 'reality'. Eli Pariser described in 'The Filter Bubble' (2011) how the new personalized web is changing what people read and how people think. The same risk applies when devices around us anticipate our needs and act on them: an Experience Bubble in which you get stuck in a loop of returning events, actions and activities. Algorithms are causing these returning events. Algorithms are binary and unable to understand the meaning behind actions. It is worrisome that algorithms are not conversational. There should be a way to teach algorithms what is right, wrong and accidental behavior. 2. Focus on Extended Intelligence Instead of Artificial Intelligence The head of MIT Media Lab, Joi Ito, gave a very interesting perspective that colored my beliefs regarding design principles to follow. Mr.
Ito said that humanity should not pursue robotics and Generalized AI but rather focus on Extended Intelligence. This is because it is in human nature to use technology as an extension of ourselves. It would feel inhuman to replace our daily activities with machines. 3. Responsive Algorithms Make Data Understandable Currently used algorithms are binary and limited to the actions and input of users. Conceptually they pretend to be 'personal' and 'understandable' about our actions, but in real life it is a matter of ones and zeros. Algorithms are not ready for predictive systems and need to be more responsive in order to adapt to people's motives and needs. Revisiting the feedback loop is a way to implement responsiveness. In this way, people can teach algorithms what, but foremost why, they like or dislike things. 4. Personality Makes Interactions More Human-Like The Internet of Things (IoT) is growing as a market and there's a shift from mobile first to A.I. first, meaning that users will get a more personal and unique relationship and experience with their device. When I interviewed respondents and asked them about their view on smart operating systems and Artificial Intelligence, most people referred to the movie Her as a future perspective. This perspective is intriguing. However, looking at recent developments for smart assistants like Siri, Cortana and Google Home, an essential feature is missing: personality. Personality adds huge value to our interactions with devices, because it gives them a human touch. We can relate more to devices if they have a personality. Looking at services like Siri, I believe that personality will be more relevant in the future than the number of gigabytes. 5. Build Trust by Giving Control and Transparency Today, people need to hack their own online behavior to receive the right content. It is so frustrating when you buy a gift for someone else, and get bombarded after the purchase with adverts for the same product (THE SAME PRODUCT, that you just bought…). Algorithms often misinterpret my actions. There's room for improvement. Data interaction has become a crucial element in developing experiences for the future. Respondents that I've interviewed voiced their concerns about the lack of transparency and control that comes with the internet. Much personal data ends up in a 'black box'. No one knows how our data is used and processed by big tech firms. Providing options for automation should build trust and enable growth. UX Design is Evolving The craft of UX Designers is changing. Increasing responsibilities, interactions and forms influence the design approach. User interfaces, for example, increasingly take different forms (e.g. voice-driven interfaces) that require a different way of design thinking. UX designers are getting more exposed to ethical design, since a lot of confidentiality is involved in creating predictive user experiences. With the dawn of fully automated consumer-facing systems, a clear view on design mitigations and guiding principles is desired, since future designers will face much more responsibility concerning topics like privacy and data. Current sets of design principles from Rams, Nielsen (1998), Norman (2013) and Shneiderman (2009) are insufficient for automation because principles regarding transparency, control, loops and privacy are missing. The evolution of Experience Design within a context of automation requires discussions and design practices to mitigate forecasted design challenges.
Let's Continue This Conversation Predictive UX is a rapidly growing field of expertise. The craft of UX design is changing with it. As we are at the start of a new AI-driven era, it is important to share design stories, insights and practices to continue the development of Anticipatory Design as a pattern, and predictive UX as a service. Please join the movement and share your thoughts on Predictive UX & Anticipatory Design at www.anticipatorydesign.com
https://uxdesign.cc/designing-anticipated-user-experiences-c419b574a417
['Joël Van Bodegraven']
2017-02-04 08:34:13.364000+00:00
['Machine Learning', 'Anticipatory Design', 'Design', 'UX', 'Predictive Ux']
Your Smartphone Will Make You Miserable
Do you remember the first day you owned a smartphone? I do. In the fall of 2010, I was about to start college and Apple had just launched the iPhone 4. It was a quantum leap in the evolution of phones, nothing less. We had it sent to a friend in France, because it was a bit cheaper, if that word even applies to a 700 € piece of technology. When my Dad brought it home, I couldn’t wait to take it out of the box and set it up. Afterwards, me and my family examined it in amazement. No buttons, crystal clear colors, great photos. High resolution, fast surfing, tons of apps, and, again, the screen! I remember what it felt like, too. To now be part of this new, shiny, ever-connected world. It felt like I was joining a revolution. Rebellious. Enough with the old, it’s time to disrupt! Little did I know how right I was. It didn’t happen the way I expected it, but life would never be the same. Through The Hedge I grew up in three different places, but each of them was a dead-end street. Ten or so houses, lined up in a perfect little row, ours always being the very last. To this day, my favorite remains the smallest one we lived in. My home until 1999. There was a tiny, square patch of grass attached to our back porch. You could barely call it a garden, but it connected to that of our neighbors, separated only by a tall hedge. Luckily, there was a huge hole in it, so us kids always snuck through to surprise each other and play games. Another thing we did, which is rather unheard of today, is whenever we were looking for company, we walked down the street and rang the doorbell. You know, to see if our friends were home. A decade after Apple made smartphone history, I feel this is the part of the story that’s most dramatically changed. A Strange Place Indeed When sat navs made it into serial production, paper maps disappeared from the glove compartments of our cars. So did our ability to read and interpret those maps. To some extent, smartphones are doing the same to our communication skills. They allow us to get any message across with minimal effort, so after a while, minimal effort is all we’re capable of. Sometimes, when friends visit me, they will text me that they are at the door. That is insane. And if I don’t see it, they might call me before considering to ring the bell. To you, that might be as obviously nuts as it is to me, but to kids growing up today, it’s probably the norm. They don’t know any other way and if we don’t teach them one, they never will. I’m no exception to all this. For example, I was never strong in face-to-face confrontations, and if anything, owning a smartphone has made me weaker. But more than that, people’s reactions to those making an effort have also changed. I miss the days when I could call someone or show up at their doorstep unannounced and it wasn’t weird. Nowadays it’s mostly voicemail and awkward smiles. So on top of lacking fundamental human skills, many of our rewards for becoming better at them have disappeared. It’s funny. Those things are considered creepy and annoying when at the same time, we get excited about strangers adding us on LinkedIn or following us on Instagram every day. It seems the joys of human attention now heavily depend on the medium that attention is received in. Being asked for coffee is terrifying, but getting a random comment on your Instagram story about being hit on is cool. Huh. Okay. 
Smartphones hurting our capacity to talk to one another isn't a particularly new or unobserved issue, but so far, that hasn't stopped it from being a problem. It's also only half the story. Back In The Old Days… Though unverified, Albert Einstein supposedly once said there are only two ways to live your life: "One is as though nothing is a miracle. The other is as though everything is." It's a quote about curiosity. About exploring, adventure, and imagining something new. But since technology has exploded so much in the past three decades, I think we're now suffering from miracle-fatigue. New Yorker cartoonist David Sipress captured it brilliantly with this quip: Barry Schwartz expands on the idea in his acclaimed TED talk: "The reason that everything was better back when everything was worse is that when everything was worse, it was actually possible for people to have experiences that were a pleasant surprise. Nowadays, the world we live in — we affluent, industrialized citizens, with perfection the expectation — the best you can ever hope for is that stuff is as good as you expect it to be. You will never be pleasantly surprised because your expectations, my expectations, have gone through the roof." As much as I love Apple products, I can't shake the feeling that the iPhone, the smartphone in general, really, has been the greatest catalyst in exponentially raising our expectations. It's the perfect weapon against surprise. You can use it not just to eradicate surprise from your interpersonal relationships, but also from those you maintain with yourself. Think about it. Any experience you even remotely suspect you might have, you will prepare for using your phone. We look at purchases through the eyes of thousands of reviewers in advance. We read accounts of other people's vacations, once-in-a-lifetime adventures, restaurant visits and what it's like at work. And no matter what emotional state you're in when you look into the mirror, the world will gladly explain to you what feeling comes next. As a result, we're used to everything but surprise itself. For this same reason, we'd rather send a signal to a tower miles away, that then sends another one to our friend's phone upstairs, than to press the button that makes a piercing noise. We've come to hate the unexpected so much, we go out of our way to not impose it on others. The day we turned on those phones is the day surprise died. And while it's come to haunt us in a great many ways, the following may be the worst. The Secret To Happiness Isn't it ironic? As a result of being equipped with incredible communication technology, we've become hard to talk to and even harder to please. Because our new, default expectation is to always know what to expect. While it's bad that we're not willing to be pleasantly surprised by serendipitous events, rejecting the notion of surprise altogether is what'll really make us miserable. If we do that, we won't just miss out on happy coincidence, we'll also forever overreact to negative developments. How can we deal with life's curveballs if we can't even handle a sure home run pitch? One obvious answer to address many of these problems is to just use your phone less. A lot less. Don't google everything. Delegate and let people sweep you off your feet. Show up unannounced. Bring flowers. Be a nice surprise.
And in all that, mind what Barry Schwartz said next: “The secret to happiness — this is what you all came for — the secret to happiness is low expectations.” Maybe, we’ll even dare go a step further than that. Maybe, the secret to happiness is no expectations. Whatever you do, make room for surprise in your life. Give good things a chance to happen and wait for bad things to actually arrive. I hope that some day, we can all say we were indeed part of a revolution. It just wasn’t the one we thought it would be.
https://medium.com/personal-growth/your-smartphone-will-make-you-miserable-5373eed0ac77
['Niklas Göke']
2018-08-18 23:36:12.255000+00:00
['Technology', 'Happiness', 'Psychology', 'Life', 'Self Improvement']
From Flash to HTML5: How Banner Ads Have Evolved and Stayed Relevant
Introduction Remember when clicking a banner ad was a one-way ticket to Spamville? Long ago, the flashy excitement of epilepsy-inducing digital display ads caught the eye of many unsuspecting internet browsers enamored with the interactivity on offer. These kinds of banners still exist, but nowadays, brands big and small occupy the digital advertising space with thought-out, well-planned and well-executed ad campaigns. So much has changed in the digital landscape in such a short period of time. Does anyone remember 1991? We won’t blame you if you don’t. That was when the World Wide Web was born and changed everyone’s lives. Wired.com (formerly known as HotWired) ran the first web banner ad in 1994, which set in motion a digital advertising boom that, according to IAB and PriceWaterhouseCoopers, was worth $124.6 billion in 2019. In comparison, advertisers spent about $70 billion on television advertising in the same year. These numbers make it clear that internet advertising is by far the largest ad medium for marketers. Whether you’re a fan of ad banners or you’re the type of person to block any ad banner getting in your way, there is no denying that the advertising industry has been forever altered and revolutionized by their impact. Mobile banner ads are the most popular form of mobile digital advertising for a reason. They generate very high impression rates and are the second-highest revenue-generating format globally. They are also very clear about branding and message because of the limited space and time the ad has to deliver its purpose. The ads offer a solution to a potential problem, which prompts the user to click through to find out more information. A poorly designed ad may not give you the results you desire, however, so if you’re looking for high-quality, professional creatives for your next campaign or product, our design experts can help you! From static to animated Ad formats have developed and changed over time as internet speeds and technology have progressed. Static banners came first, followed by animated GIFs. From the start of the new millennium up until just a few years ago, Flash banners dominated the online advertising space. Now HTML5 ads have pulled focus from these other formats, although the older formats remain popular and successful for marketers as they have evolved in their own ways. Here is a quick rundown of the different ad types mentioned: Static Ads Usually in GIF or JPEG format, static banner ads contain no animation or interactivity. JPEG ads are known for a high colour count and can be saved in low-, medium- or high-quality versions. The downside is that JPEG is a lossy image format, so image quality gradually degrades as the file is compressed or downsized to be uploaded. PNG image files typically have better resolution and can also include transparency in layers, allowing for more depth in the composition. They do, however, export larger than a JPEG image, and this can cause problems when using them for web ads. Larger file sizes take longer to load, and depending on someone’s internet speed, this could be problematic as the ad might not even be seen. The files may also be too big to be uploaded onto the web in the first place. Here is a comparison of PNG v JPEG: Animated GIFs GIF banners can be static or animated, as long as they carry the .gif file extension. Basically, GIFs sit in between static images and video. They are made from a number of frames that, when exported, play as an animated sequence.
GIFs are smaller in size than a video file, but still allow for exciting animation and transitions. GIF banners are generally accepted by most ad networks, although the file size needs to be small (max 150 KB). Animations are also limited to 30 seconds or shorter, although they can be looped. They must also be exported at a low frame rate (less than 5 FPS). The ads are also suitable for mobile devices and are easy to make, which makes them a popular marketing choice. Flash Flash was launched in 1999 by a company called Macromedia (later acquired by Adobe). The next year, it became the king of the digital ad hill, overtaking QuickTime and Java as the market leader in the digital advertising space. Its rise can be attributed to the fact that it facilitated video streaming and, at the time, offered a huge jump in quality and standards compared to previous advertising found on the Internet. After the Adobe acquisition, Flash grew in popularity by allowing advertisers to create more interactive banners, gaming content and more. After a long period at the top, the final days of Flash appear imminent, though the demise of this format is not very surprising. Many advertising platforms have banned Flash banners and many internet browsers are no longer compatible with the format. Flash ads led to many security breaches, which after many updates and patches led to incompatibility with browsers and websites. What showed on a banner one day simply did not render the next. Users also had to download third-party software from Adobe to even view the ads, which is a huge negative for marketers. HTML5 HTML5 ads have been around for quite some time, but have only recently become a force to be reckoned with in the digital advertising space. While more complicated to produce, their benefits are far superior to those of Flash ads. File sizes are small, the format is accepted by every major internet browser, and it can be embedded with audio and video content. HTML5 ads use CSS and JavaScript to create animated effects, allowing for more content to be visible on the ad thanks to the ability to switch layers around and manipulate the creative any way you want. Though more difficult to create than other ad formats, these ads don’t require expert coding experience to generate. There are tools and programs available that have a slight learning curve, but they are easy to adjust to if one already has a background in motion graphics, animation and general video editing. This is an example of a hugely successful HTML5 ad from Nike, which features a great video, concise and legible copy and a fun user experience for the consumer: The ad had over 50,000 views and a click-through rate (CTR) of 22.3 percent, which is much higher than the average CTR for search and display ads, which sits at around 2%. So, what does the future of digital advertising look like? Recent trends indicate that digital advertising via the internet will continue to rise. Many advertisers and marketers are moving their budgets online, preferring online ads over television and other ad media. Personalisation has become important to consumers and marketers as they try to create content that reflects obtained user data and interests, cleverly using budgets and time to target specific individuals. You and the colleague sitting next to you may see the same brand advertising in your browsers, but be flooded with completely different ads. This is the norm and is set to continue as marketers pinpoint efficient ways to optimize consumer engagement no matter the budget.
Relevancy is key to consumer targeting, and will also be key to the future of digital advertising. While HTML5 ads are the most popular ads to create at this point in time, even more advanced formats are beginning to trend and may soon become the norm. As virtual and augmented reality technology continues to evolve, and designers learn how to master these tools, what is currently expensive cutting edge technology may soon be available to the many, prompting a different and exciting future for web ads. Looking to increase your CTR and impressions? Boost your mobile marketing game today by working with the creative experts here at Customlytics. Head to our contact page or get in touch with us at [email protected] and let’s get to work. 💡 Knowledge sharing is at the core of what we do. Learn more about the app industry and discover useful resources by signing up for our newsletter or by bookmarking the Customlytics App Marketing blog in English or German. 📚 We love useful stuff. That’s why we co-wrote the Mobile Developer’s Guide to the Galaxy. Get your free paperback copy or download the eBook here providing you with all the mobile knowledge you need.💜 Become part of our community on LinkedIn, Twitter, Xing, Glassdoor or Medium.
https://customlytics.medium.com/from-flash-to-html5-how-banner-ads-have-evolved-and-stayed-relevant-e216d6e086e8
['Customlytics Gmbh']
2020-10-07 12:21:46.490000+00:00
['Banner Ads', 'Design', 'Mobile Ads', 'Mobile Advertising', 'Mobile Marketing']
To the Truck Driver who Saved My Dog —
Gregory Johnston via Dreamstime.com Thank you. Her name is Venus now. I was told you called her Lady. I read in the adoption paperwork that she is seven years old, and that you got her when she was a puppy at a truck stop. As a short-haul trucker you work long days, logging hundreds of thousands of miles on I-75 from Detroit to Dayton and on I-70 from Pittsburgh to Indianapolis. Ohio is in the middle, where the two of you once lived. California, Los Angeles more specifically, a junkyard exactly was Pippin’s place of origin. For nine years, he was my dog. I had named him Leroy until my family objected. Renamed Pippin, this dog matched his Tolkien namesake — full of good intentions with a skillset equally aspirational. Pippin’s ignorance of all things home and family was cute. When routine and repetition did not clue him in, we invented commands like “cha-cha.” Cha-cha means, “move out of the way; I’m trying to walk down this very narrow hallway carrying this huge basket of laundry.” We should have known that Pippin’s propensity, to stick closer to me than a shadow, was a face of anxiety. We should have known that a rescued junkyard dog could have PTSD. We learned that one Sunday afternoon when Pippin launched himself at my teen son, who 30 minutes earlier had been sitting on the floor with Pippin stroking his side before going upstairs to do homework. What changed? My son had pulled his hoody over his head, as it was his study habit. When he came downstairs the hood was still over his head. Pippin saw a guy in a hoody; he did not see my son. (Thank heavens, the jaw I pulled open was clenched onto only fabric). Hoodies were triggers for Pippin. Other triggers were candles flames, cooking on a stovetop, bathrobes if the belt was untied, balls, sticks, anything thrown, sudden noises, praise given in a raised voice, or entering a room where he was dozing. Ears flattened, whale-eyed, body tense from nose to tail meant Pippin’s next move would be all defense. His move usually was running away to cower in my office, but not always. I made an end-of-life appointment with our veterinarian. (I had previously called the rescue agency but they made it clear that rehoming rather than rehabilitation would be the course of action). It is a credit to my children that I drove instead to a university animal behaviorist clinic and that Pippin returned home with me, albeit with a prescription for doggy downers and a muzzle. Neither drugs nor restraints would cure Pippin but they would slow him down enough for me to intercede and redirect. The professionals had assured me that Pippin identified me as the benevolent alpha to his fearful gamma. It took two years of redirection and new associations for Pippin to become a family dog who was 90% predictable, 90% of the time. The remaining 10% kept me within Pippin’s physical sphere many of the hours in a week. In the following years, Pippin learned that balls and sticks were games, meeting new people was fun, and family life was good. no attribution, dreamstime.com #142928568 Venus has been fortunate. She has not been beat, tortured, or used as bait in a fight pit. Venus thinks all people are good. Being petted is divine. Games are fun. Car rides are interesting. Household noises are not threatening. Someone walking into a room is a reason to be happy. These are important things you showed her. On the other hand, Venus regards house rules and obedience as optional. The command “Sit” is fine for a moment. The command “Come” is a maybe. 
Yet for a treat, it is a definite. She thinks my bed is her bed and her dog bed is unknown territory. Our leash is a tug toy. To these things I say, so what. I have the time and Venus has the temperament; we will refine family life together. And I’ll take care of the bald spot on her left thigh, the skin tag the size of a teat that is rubbed by her new harness, the mats behind her ears, the tartar on her teeth. I’ll teach her to walk on leash; not to bolt through the door when I open it; that food on the counter is not hers. Venus has already learned to sit patiently as I prep her food; return reliably when I call her; not to chase squirrels while on lead. You did the most important work of raising a pup — you loved her. Your love shows in the eager gleam of her eyes whenever she brings me her rope toy. Your love shows in the heavy lean she gives people when petted. Your love shows in the way she helicopters her tail when I return home. Her name was Lady, but when I called her by the name you had given her, there was no recognition of it. Her life as Lady had been broken in a way that so many things have been broken during this COVID-19 year. Dear Trucker, your surrender paperwork answered why: eviction. You may be one of many independent truck drivers who are out of work, because big agri-business would rather bury tons of potatoes with backhoes and because social distancing has made online shopping for all things preferred? It must be an expression of grief that the name ‘Lady’ is broken too. Your sorrow is not balanced by my joy. Still, I hope you can take some comfort knowing your dog was adopted and that I am grateful. I love her now too.
https://medium.com/literally-literary/to-the-truck-driver-who-saved-my-dog-5f6dccc89eb9
['Lisa Patrell']
2020-07-20 19:43:30.862000+00:00
['Animal Adoptions', 'Animal Abuse', 'Dogs', 'Nonfiction', 'Covid 19']
Portfolio Analysis Basics: Volatility and Sharpe Ratio
Portfolio Analysis Basics: Volatility and Sharpe Ratio In this story, we are going to discuss why volatility is so important and how to assess its impact on return. Daily Returns of Four Price Time Series When we look at a stock or any type of investment, the first thing we care about is usually how much money we can make from it, in other words, the returns of the investment. However, there is actually quite a bit more to it than just that. Here we’ll demonstrate with an example. First things first, I need some stock prices and their daily returns to do analysis on. So, I am going to generate 4 different price time series and compare them to see which one I want to buy! Price Simulation In order to do that, I am going to use the Geometric Brownian Motion (GBM) process that I mentioned in an earlier story (link):

import pandas as pd
import numpy as np

def daily_returns(prices):
    # cur_price / prev_price - 1.0 = daily_returns
    res = (prices/prices.shift(1) - 1.0)[1:]
    res.columns = ['return']
    return res

def brownian_prices(start, end, mu=0.0001, sigma=0.01, s0=100.0):
    bdates = pd.bdate_range(start, end)
    size = len(bdates)
    np.random.seed(1)
    wt = np.random.standard_normal(size)
    # GBM process
    st = s0 * np.cumprod(np.exp(mu - (sigma * sigma / 2.0) + sigma * wt))
    return pd.DataFrame(data={'date': bdates, 'price': st}).set_index('date')

With this process, here are my 4 stock price time series:

start = '20100101'
end = '20200101'
returns1 = daily_returns(brownian_prices(start, end, mu=0.002, sigma=0.01, s0=100.0))
returns2 = daily_returns(brownian_prices(start, end, mu=0.002, sigma=0.02, s0=100.0))
returns3 = daily_returns(brownian_prices(start, end, mu=0.0022, sigma=0.05, s0=100.0))
returns4 = daily_returns(brownian_prices(start, end, mu=0.0054, sigma=0.1, s0=100.0))

Here I have generated 4 GBM daily time series, each with slightly different parameters:
price 1: mu = 0.002, sigma = 0.01
price 2: mu = 0.002, sigma = 0.02
price 3: mu = 0.0022, sigma = 0.05
price 4: mu = 0.0054, sigma = 0.1
mu and sigma are standard inputs to the GBM process: mu stands for drift (or growth of the time series) and sigma stands for volatility (the random noise that causes the time series to go up and down in a random-walk fashion). Average Daily Returns Which one is better, you say? Suppose I am your best friend, and I tell you: oh yeah, I’ve got this amazing stock, and it returned an average of 1% every day for the past year! You must think this is a great stock, right? Well, let’s take a look at the average daily returns first then:

print(np.nanmean(returns1))
print(np.nanmean(returns2))
print(np.nanmean(returns3))
print(np.nanmean(returns4))

>>>
0.0021505103073854353
0.002198836614378123
0.0019434904977809336
0.0023861582203702834

Well, all of them have an average daily return of ~0.2%, for the past 10 years… It looks like stock 4 is the best, followed by stock 2, then stock 1 and stock 3… But hold on, no rush to judgement, let’s take a closer look. Here are the cumulative returns of each of the price time series:

def cumulative_returns(returns):
    res = (returns + 1.0).cumprod()
    res.columns = ['cumulative return']
    return res

cret1 = cumulative_returns(returns1)
cret2 = cumulative_returns(returns2)
cret3 = cumulative_returns(returns3)
cret4 = cumulative_returns(returns4)

Cumulative Returns of the Four Price Time Series Wow, what happened? I thought stock 4 was supposed to be the best… but it barely made any money! If I had invested in stock 4 for those 10 years, I would have lost my entire investment!
On the other hand, stock 1 did amazingly well! For the past 10 years it more than tripled my initial investment! (Say I put in 100 dollars on 2010-01-01; I would have over 300 dollars by now.) Stock 2 did almost as well, coming close to tripling my investment. However, stock 3 did almost as badly as stock 4, returning nothing for my investment (although I didn’t actually lose my initial investment).
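The gap between average return and cumulative outcome is exactly where volatility and the Sharpe ratio come in. As a rough sketch of where this analysis is heading (this snippet is not from the original story), the annualized volatility and a simple Sharpe ratio for the four return series generated above could be computed as follows; the 252-trading-day annualization factor and the zero risk-free rate are assumptions:

import numpy as np

TRADING_DAYS = 252  # assumed annualization factor

def annualized_volatility(returns):
    # scale the standard deviation of daily returns to a yearly horizon
    return np.nanstd(returns) * np.sqrt(TRADING_DAYS)

def sharpe_ratio(returns, risk_free_rate=0.0):
    # annualized excess return divided by annualized volatility
    excess_return = np.nanmean(returns) * TRADING_DAYS - risk_free_rate
    return excess_return / annualized_volatility(returns)

for name, r in [('stock 1', returns1), ('stock 2', returns2),
                ('stock 3', returns3), ('stock 4', returns4)]:
    print(name, annualized_volatility(r), sharpe_ratio(r))

Because all four series have roughly the same average daily return, the ranking produced by this Sharpe ratio is driven almost entirely by sigma: the low-volatility series come out on top, matching what the cumulative-return comparison shows.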
https://medium.com/the-innovation/portfolio-analysis-basics-volatility-sharpe-ratio-d7521d79ba0
['Shuo Wang']
2020-11-18 05:49:29.642000+00:00
['Investing', 'Python', 'Money', 'Data Science', 'Quantitative Analysis']
What About Gender Bias In The AI Of Self-Driving Cars
Dr. Lance Eliot, AI Insider [Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/] Here’s a topic that entails intense controversy, oftentimes sparking loud arguments and heated responses. Prepare yourself accordingly. Do you think that men are better drivers than women, or do you believe that women are better drivers than men? Seems like most of us have an opinion on the matter, one way or another. Stereotypically, men are often characterized as fierce drivers that have a take-no-prisoners attitude, while women supposedly are more forgiving and civil in their driving actions. Depending on how extreme you want to take these tropes, some would say that women shouldn’t be allowed on our roadways due to their timidity, while the same could be said that men should not be at the wheel due to their crazed pedal-to-the-metal predilection. What do the stats say? According to the latest U.S. Department of Transportation data, based on their FARS or Fatality Analysis Reporting System, the number of males annually killed in car crashes is nearly twice that of the number of females killed in car crashes. Ponder that statistic for a moment. Some would argue that it definitely is evidence that male drivers are worse drivers than female drivers, which seems logically sensible under the assumption that since more males are being killed in car crashes than women, men must be getting into a lot more car crashes, ergo they must be worse drivers. Presumably, it would seem that women are better able to avoid getting into death-producing car crashes, thus they are more adept at driving and are altogether safer drivers. Whoa, exclaim some that don’t interpret the data in that way. Maybe women are somehow able to survive deadly car crashes better than men, and therefore it isn’t fair to compare the count of how many perished. Or, here’s one to get your blood boiling, perhaps women trigger car crashes by disrupting traffic flow and are not being agile enough at the driving controls, and somehow men pay a dear price by getting into deadly accidents while contending with that kind of driving obfuscation. There seems to be little evidentiary support for those contentions. A more straightforward counterargument is that men tend to drive more miles than women. By the very fact that men are on the roadways more so than women, they are obviously going to be vulnerable to a heightened risk of getting into bad car crashes. In a sense, it’s a situation of rolling the dice more times than women do. Insurance companies opt for that interpretation, including too that the stats show that men are more likely to drive while intoxicated, they are more likely to be speeding, and more likely to not use seatbelts. There could be additional hidden factors involved in these outcomes. For example, some studies suggest that the gender differences begin to dissipate with aging, namely that at older ages, the chances of getting killed in a car crash becomes about equal for both male and female drivers. Of course, even that measure has controversy, which for some it is a sign that men lose their driving edge and spirit as they get older, become more akin to the skittishness of women. Yikes, it’s all a can of worms and a topic that can readily lend itself to fisticuffs. Suppose there were some means to do away with all human driving, and we had only AI-based driving that took place. 
One would assume that the AI would not fall into any gender-based camp. In other words, since we all think of AI as a kind of machine, it wouldn’t seem to make much sense to say that an AI system is male or that an AI system is female. As an aside, there have been numerous expressed concerns that the AI-fostered Natural Language Processing (NLP) systems that are increasingly permeating our lives are perhaps falling into a gender trap, as it were. When you hear an Alexa or Siri voice that speaks to you if it has a male intonation do you perceive the system in a manner differently than if it has a female intonation? Some believe that if every time you want to learn something new that you invoke an NLP that happens to have said a female sounding voice, it will tend to cause children especially to start to believe that women are the sole arbiters of the world’s facts. This could also work in other ways such as if the female sounding NLP was telling you to do your homework, would that cause kids to be leery of women as though they are always being bossy? The same can be said about using a male voice for today’s NLP systems. If a male-sounding voice is always used, perhaps the context of what the NLP system is telling you might be twisted into being associated with males versus females. As a result, some argue that the NLP systems ought to have gender-neutral sounding voices. The aim is to get away from the potential of having people try to stereotype human males and human females by stripping out the gender element from our verbally interactive AI systems. There’s another perhaps equally compelling reason for wanting to excise any male or female intonation from an NLP system, namely that we might tend to anthropomorphize the AI system, unduly so. Here’s what that means. AI systems are not yet even close to being intelligent, and yet the more that AI systems have the appearance of human-like qualities, we are bound to assume that the AI is as intelligent as humans. Thus, when you interact with Alexa or Siri, and it uses either a male or female intonation, the argument is that the male or female verbalization acts as a subtle and misleading signal that the underlying system is human-like and ergo intelligent. You fall readily for the notion that Alexa or Siri must be smart, simply by extension of the aspect that it has a male or female sounding embodiment. In short, there is ongoing controversy about whether the expanding use of NLP systems in our society ought to not “cheat” by using a male or female sounding basis and instead should be completely neutralized in terms of the spoken word and not lean toward using either gender. Getting back to the topic of AI driving systems, there’s a chance that the advent of true self-driving cars might encompass gender traits, akin to how there’s concern about Alexa and Siri doing so. Say what? You might naturally be puzzled as to why AI driving systems would include any kind of gender specificity. Here’s the question for today’s analysis: Will AI-based true self-driving cars be male, female, gender fluid, or gender-neutral when it comes to the act of driving? Let’s unpack the matter and see. Self-Driving Cars And Gender Biases For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving. At first glance, it seems on the surface that the AI is going to drive like a machine does, doing so without any type of gender influence or bias. 
How could gender get somehow shoehorned into the topic of AI driving systems? There are several ways that the nuances of gender could seep into the matter. We’ll start with the acclaimed use of Machine Learning (ML) or Deep Learning (DL). As you’ve likely heard or read, part of the basis for today’s rapidly expanding use of AI is partially due to the advances made in ML/DL. You might have also heard or read that one of the key underpinnings of ML/DL is the need for data, lots, and lots of data. In essence, ML/DL is a computational pattern matching approach. You feed lots of data into the algorithms being used, and patterns are sought to be discovered. Based on those patterns, the ML/DL can then henceforth potentially detect in new data those same patterns and report as such that those patterns were found. If I feed tons and tons of pictures that have a rabbit somewhere in each photo into an ML/DL system, the ML/DL can potentially statistically ascertain that a certain shape and color and size of a blob in those photos is a thing that we would refer to as a rabbit. Please note that the ML/DL is not likely to use any human-like common-sense reasoning, which is something not often pointed out about these AI-based systems. For example, the ML/DL won’t “know” that a rabbit is a cute furry animal and that we like to play with them and around Easter, they are especially revered. Instead, the ML/DL simply based on mathematical computations has calculated that a blob in a picture can be delineated, and possibly readily detected whenever you feed a new picture into the system, attempting to probabilistically state whether there’s such a blob present or not. There’s no higher-level reasoning per se, and we are a long ways away from the day when human-like reasoning of that nature is going to be embodied into AI systems (which, some argue, maybe we won’t ever achieve, while others keep saying that the day of the grand singularity is nearly upon us. In any case, suppose that we fed pictures of only white-furry rabbits into the ML/DL when we were training it to find the rabbit blobs in the images. One aspect that might arise would be that the ML/DL would associate the rabbit blob as always and only being white in color. When we later on fed in new pictures, the ML/DL might fail to detect a rabbit if it was one that had black fur, because the lack of white fur diminished the calculated chances that the blob was a rabbit (as based on the training set that was used). In a prior piece, I emphasized that one of the dangers about using ML/DL is the possibility of getting stuck on various biases, such as the aspect that true self-driving cars could end up with a form of racial bias, due to the data that the AI driving system was trained on. Lo and behold, it is also possible that an AI driving system could incur a gender-related bias. Here’s how. If you believe that men drive differently than women, and likewise that women drive differently than men, suppose that we collected a bunch of driving-related data that was based on human driving and thus within the data there was a hidden element, specifically that some of the driving was done by men and some of the driving was done by women. Letting loose an ML/DL system on this dataset, the ML/DL is aiming to try and find driving tactics and strategies as embodied in the data. Excuse me for a moment as I leverage the stereotypical gender-differences to make my point. 
It could be that the ML/DL discovers “aggressive” driving tactics that are within the male-oriented driving data and will incorporate such a driving approach into what the true self-driving car will do while on the roadways. This could mean that when the driverless car roams on our streets, it is going to employ a male-focused driving style and presumably try to cut off other drivers in traffic, and otherwise be quite pushy. Or, it could be that the ML/DL discovers the “timid” driving tactics that are within the female-oriented driving data and will incorporate a driving approach accordingly, such that when a self-driving car gets in traffic, the AI is going to act in a more docile manner. I realize that the aforementioned seems objectionable due to the stereotypical characterizations, but the overall point is that if there is a difference between how males tend to drive and how females tend to drive, it could potentially be reflected in the data. And, if the data has such differences within it, there’s a chance that the ML/DL might either explicitly or implicitly pick-up on those differences. Imagine too that if we had a dataset that perchance was based only on male drivers, this landing on a male-oriented bias driving approach would seem even more heightened (similarly, if the dataset was based only on female drivers, a female-oriented bias would be presumably heightened). Here’s the rub. Since male drivers today have twice the number of deadly car crashes than women, if an AI true self-driving car was perchance trained to drive via predominantly male-oriented driving tactics, would the resulting driverless car be more prone to car accidents than otherwise? That’s an intriguing point and worth pondering. Assuming that no other factors come to play in the nature of the AI driving system, we might certainly reasonably assume that the driverless car so trained might indeed falter in a similar way to the underlying “learned” driving behaviors. Admittedly, there are a lot of other factors involved in the crafting of an AI driving system, and thus it is hard to say that training datasets themselves could lead to such a consequence. That being said, it is also instructive to realize that there are other ways that gender-based elements could get infused into the AI driving system. For example, suppose that rather than only using ML/DL, there was also programming or coding involved in the AI driving system, which indeed is most often the case. It could be that the AI developers themselves would allow their own biases to be encompassed into the coding, and since by-and-large stats indicate that AI software developers tend to be males rather than females (though, thankfully, lots of STEM efforts are helping to change this dynamic), perhaps their male-oriented perspective would get included into the AI system coding. In The Field Biases Too Yet another example involves the AI dealing with other drivers on the roadways. For many years to come, we will have both self-driving cars on our highways and byways and simultaneously have human-driven cars. There won’t be a magical overnight switch of suddenly having no human-driven cars and only AI driverless cars. Presumably, self-driving cars are supposed to be crafted to learn from the driving experiences encountered while on the roadways. 
Generally, this involves the self-driving car collecting its sensory data during driving journeys, and then uploading the data via OTA (Over-The-Air) electronic communications into the cloud of the automaker or self-driving tech firm. Then, the automaker or self-driving tech firm uses various tools to analyze the voluminous data, including likely ML/DL and pushes out to the fleet of driverless cars some updates based on what was gleaned from the roadway data collected. How does this pertain to gender? Assuming again that male drivers and female drivers do drive differently, the roadway experiences of the driverless cars will involve the driving aspects of the human-driven cars around them. It is quite possible that the ML/DL doing analysis of the fleet collected data would discover the male-oriented or the female-oriented driving tactics, though it and the AI developers might not realize that the deeply buried patterns were somehow tied to gender. Indeed, one of the qualms about today’s ML/DL is that it oftentimes is not amenable to explanation. The complexity of the underlying computations does not necessarily lend itself to readily being interpreted or explained in everyday ways (for how the need for XAI or Explainable AI is becoming increasingly important). Conclusion Some people affectionately refer to their car as a “he” or a “she,” as though the car itself was of a particular gender. When an AI system is at the wheel of a self-driving car, it could be that the “he” or “she” labeling might be applicable, at least in the aspect that the AI driving system could be gender-biased toward male-oriented driving or female-oriented driving (if you believe such a difference exists). Some believe that the AI driving system will be gender fluid, meaning that based on all how the AI system “learns” to drive, it will blend together the driving tactics that might be ascribed as male-oriented and those that might be ascribed as female-oriented. If you don’t buy into the notion that there are any male versus female driving differences, presumably the AI will be gender-neutral in its driving practices. No matter what your gender driving beliefs might be, one thing is clear that the whole topic can drive one crazy. For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website The podcasts are also available on Spotify, iTunes, iHeartRadio, etc. More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/ For his AI Trends blog, see: www.aitrends.com/ai-insider/ For his Medium blog, see: https://medium.com/@lance.eliot For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot Copyright © 2020 Dr. Lance B. Eliot
https://lance-eliot.medium.com/what-about-gender-bias-in-the-ai-of-self-driving-cars-2dd4430e41b1
['Lance Eliot']
2020-10-09 05:16:42.552000+00:00
['Self Driving Cars', 'Artificial Intelligence', 'Driverless Cars', 'Autonomous Cars', 'Autonomous Vehicles']
Towards a demystification of Quantum Mechanics
The first step is to re-examine classical mechanics in view of an introduction to quantum mechanics. This episode is devoted to clarifying the concept of “state”. In the following episodes we will deal with other, often neglected, concepts in classical physics. Figure taken from Wikimedia made by Geek3 The concept of state In classical physics we sometimes use the concept of “state”. In particular, in most textbooks you can find that: 1. the state of a pointlike particle is often said to be given when its position and velocity are known; 2. the state of a gas is given by the ideal gas law (also called the gas’ equation of state) stating that pV﹦nRT. What is intended for “state”? Looking for its definition in a physics textbook is a waste of time: as a matter of fact, I could not find a single textbook in which it is defined. Let’s then try to find its definition on a vocabulary. The Merriam-Webster dictionary defines a state as a “mode or condition of being” (interestingly enough, one of the items refer to the state of an atomic system: see below). Indeed, this is the actual meaning to be given to this word in physics, too. When we want to describe a physical system (being it a pointlike particle, a gas, a circuit or a solid), we need to characterise it providing a set of (mutually independent) quantities obtained measuring them. The “state” of a system is then given when a complete list of measurable physical quantities are given for it. In other words it comprises all and only those quantities needed to fully characterise the system at time t such that we can predict its state at another time t’. For an ideal gas, for example, four physical quantities determine its state: its pressure, its volume, its temperature and its quantity. They are related to each other by means of the “equation of state”, such that only three are needed to specify its state. In principle, there could be other measurable quantities of interest: its color, for example, could be one of the state variable. However, as long as we are not dealing with the changes in the color of a gas, its inclusion in the state is useless, similarly to what happens for the mass in mechanics, where the latter is nearly always considered constant. The state, then, only comprises “interesting” quantities, i.e. quantities that are going to be measured and do not depend on each other. In summary, borrowing quantum mechanical notation, we can specify the state of an ideal gas as |p, V, T〉as well as |p, V, n〉or any other combination of p, V, n and T. In kinematics, the state of a particle is often said to be given when position and velocity are known. In fact, in this case, the state depends on the choice of the reference frame in which coordinates are expressed (note that when talking about gases we implicitly use a reference frame at rest with respect to the gas container). Physics cannot depend on our choices, then there must be a way to express the state of a system independently of it. One can choose a “privileged” reference frame consisting in the frame in which the particle is at rest when t=0. In this case the state can be fully characterised giving the velocity v. If the speed of the particle is constant, in the given reference frame it remains in the |x=0, v=0〉state, otherwise its state changes into a different one. Of course we could make similar arguments for various systems. For example, the state of a capacitor can be characterised when the charge Q and its voltage ΔV are known. 
In this case, the definition of capacitance C=Q/ΔV plays the role of the equation of state in the physics of gases. You can do the exercise of identifying the state on every physics system you know. So, the state depends on the system we are interested in, not only because it comprises only quantities relevant to the specific problem: depending on the system, the list of potentially interesting variables changes. Manifestly, we cannot characterise the state of a gas using position and velocities: these quantities simply do not make sense for a gas (they make sense for its constituents, not for the gas as a system — note, also, that kinetic theory of gases is a relatively recent discovery). A gas can be contained in a volume: we can identify the position (of a point) of the container, not of the gas. It has a certain temperature: it does not move, then it has no velocity (its constituents have it, but they are not a gas; they are pointlike particles). It must be noted that here we are talking about an ideal gas in equilibrium. When we do the physics of a gas flowing in a tube, its velocity makes sense (but it has a somewhat different definition) and in fact is part of the state of the fluid, being possible to predict its value using the Bernoulli’s equation and knowing its state at t=0. It turns out that the state of an electron in an atom cannot be represented by the same variables used for a pointlike particle. Simply because in this case the position and the velocity of an electron are meaningless: they cannot be measured, due to the Heisenberg principle, so they cannot be part of the state. Of an electron in an atom (a quantum mechanical system) we can measure its energy (e.g. using photoelectric effect) and its angular momentum (from the analysis of absorption and emission spectra). So, energy and angular momentum are meaningful quantities for such a system and can be the quantities to be included in its state. Figure taken from Wikimedia made by Ranjithsiji For a free subatomic particle like a muon (again a quantum mechanical system), we can measure its position and its velocity in certain conditions, so it makes sense to include them in the state. A better choice (the system is often relativistic) is to specify the kinematical state of the muon providing its energy and momentum. Muons are unstable and they decay into an electron and two neutrinos. The state of the system is then characterised by the number and the flavour of the particles in it (that can initially be given by the mass of the muon). After a certain time the state evolved such that, at sufficiently long times, the state comprises three particles: an electron and two neutrinos, each of which has its own energy and momentum. In summary, the concept of state traditionally arises in quantum mechanics as a central concept that most people have difficulties to grasp. Indeed, it is a rather simple concept, existing in classical physics, too, and the difficulties arise only because its role is not enough emphasised when we are exposed to classical physics. If we were used to write (classical) physics laws in terms of the evolution of a state, we were not be surprised when we switch to a complete different, quantum mechanical, description of an electron in an atom. Simply stated, the Bohr’s model of the atom is wrong. Continuing teaching it is like to continue teaching the “caloric theory”. It is instructive to mention it, but none of us believe it is useful to discuss it in details. 
Classical physics is to quantum mechanics as kinetic theory is to thermodynamics. It is just a change in the way we describe the state, in turn due to the fact that physical quantities used in classical physics lose their meaning in certain conditions (and, note, this is not peculiar to quantum mechanics: even in the physics of gases, position and velocity lose their meaning).
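As a compact recap of the examples discussed above, written in the ket notation the article adopts (the electron labels are kept generic, energy and angular momentum, as in the text):

\text{ideal gas:}\quad |p, V, T\rangle \quad\text{with the equation of state}\quad pV = nRT
\text{capacitor:}\quad |Q, \Delta V\rangle \quad\text{with}\quad C = \frac{Q}{\Delta V}
\text{electron in an atom:}\quad |E, L\rangle \quad\text{(the measurable energy and angular momentum)}

In each case the state lists only the measurable, mutually independent quantities and, where one exists, the accompanying relation plays the role of the equation of state.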
https://medium.com/carre4/towards-a-demystification-of-quantum-mechanics-de9053357cb8
['Giovanni Organtini']
2020-11-23 11:40:48.215000+00:00
['Physics', 'Science', 'Quantum Mechanics', 'State', 'Teaching']
What Do You Know Review
Firstly, mashAllah! Secondly, Alhamdulillah! Thirdly, wow! Leyla Habebti travelled to Norwich on Saturday with intentions to deliver a talk on awareness of Mental Health, and her personal experiences. Not only did she do that, but she also brought about a sense of connection. Leyla told the small community of ladies in Norwich about her own story and battles, and she emphasised “everyone has a story to tell” — we all knew this, and it suddenly dawned on us, that yes, we all do have a story. Whether we say it out loud or not, we all have a story, and this brought about a stronger connection than just sisters in Islam, we became sisters with stories. Everyone came to a mutual understanding that mental health is a sensitive topic, but it doesn’t mean we shouldn’t speak about it — or totally push it out of the way because we haven’t got the exact answers or solutions, but be brought to the forefront and taken with awareness. When good and positive company was mentioned, I couldn’t help but think of the hadith that mentions a person is on the religion of their friends — and this probably had the most impact on the young teenage girls who I later overheard say “I’m glad you would understand if I was to ever come to you with a problem”, so we can definitely say that Leyla exceeded her intentions. After making it clear how common mental health problems are, we all realised that we are not alone. We no longer felt like strangers to one another, but sensed some sort of deeper understanding of one another. I noticed that a few sisters exchanged meaningful glances as if they were saying with their eyes “we have some talking to do”, or smiling with their slightly teary eyes as if to say “I know, and I am here”. The talk made many sisters emotional, and lots of tissues were handed round, some shared their stories out loud, whilst others spoke to Leyla at the end and reached out for help. Many sisters were eager to ask questions on how to deal with friends with mental health but were unaware of their situation, or help friends who keep putting themselves down, to how to speak out about it and bring themselves back up — even after the talk ended, and food was being served, Leyla had a queue of sisters waiting to ask her something. MashAllah the talk was successful in educating, caring, and connecting, Alhamdulillah the talk brought us all closer together and more aware, and most importantly, we’ve just made another community active on empowering and putting mental health first.
https://medium.com/inspirited-minds/what-do-you-know-review-a278def989c5
['Inspirited Minds']
2015-12-06 00:46:14.183000+00:00
['Islam', 'Mental Health', 'Depression']
Kissing With Mask NOT the Same as Sex With Condom
Kissing With Mask NOT the Same as Sex With Condom Experts say… Wall painting by street artist Pobel in Bryne, Norway. Kissing with a mask on I’m not sure what to expect From such an oxymoron Would it be the same effect As having sex with a condom? Yes, they say, so sinless And, yes, they are adorable But their science is as senseless As my question was rhetorical For while both suck, indeed And that’s correct to say At least condoms succeed At keeping something at bay A mask on the other hand Is a gimmick for a germ That can easily cross that line So I hope they understand That if corona was a sperm They’d be pregnant in no time
https://medium.com/no-crime-in-rhymin/kissing-with-mask-not-the-same-as-sex-with-condom-2da68e66fd0b
['Daniele Ihns']
2020-11-01 08:32:20.375000+00:00
['Humor', 'Pandemic', 'Sex', 'Poetry', 'Coronavirus']
The Wrong Way to Deal with Anxiety
Photo by Min An from Pexels Storms always start off slowly with one cloud drifting into clear skies. A thought tinged with emotion, but just slight. “I haven’t talked to Olivia in a while, what if she doesn’t like me anymore.” Add another cloud. “What if I did something wrong to make her not like me.” I make a mental note of the word what if clearly starting each sentence. “ It’s just anxiety. That can’t possibly be true, can it?” I seek for reassurance inside my mind, “No, she’s probably just busy with Crossfit and Running Club. She’s been way more busy lately. It’s not me.” At this point the skies are still sunny. There’s just two pesky clouds in another wise beautiful blue sky. These thoughts are easy to rationalize. Easy to dismiss. Easy to forget. But the forecast, suggests heavy rains later in the week and I am not at all prepared for the weather. Partly Cloudy If my mind was the weather the feeling of stability and truth I feel when I am not anxious would be the sun. It constantly shines and is perpetually present even when I cannot see it. Most of the time I cannot see it. When the clouds inevitably roll in, normally, I can manage them. I notice, I ponder and I dismiss the clouds as shaped like elephants even though I am worried they might be giraffes. I can reason my way out of the uncertainty. That is until, there’s a storm. Chance of Rain Storms always start as white and puffy clouds. Before long they hang dark and heavy threatening rain and lightning. In my brain, when there is too much uncertainty, the clouds start to accumulate. I cannot reason them away. I cannot remember the sunlight. The thoughts are perceived as too pressing, too real, too tangible, to be cast away as just another neural firing. This normally occurs when the thoughts in question relate strongly to two areas of my life: fulfillment and relationships. You see, after years of observing the same pattern, I have come to realize most of my anxiety centers around the loss of my true potential and the loss of those I love. Worse still is when those two fears meld together, but I will get to that. Fear is uncertain. The bigger the fear, the bigger the desire to qualm the fear or rather qualm the notion that your fear will not come true in the future. It does not exist here. It does not exits now, yet your thoughts can cast your reality into shadows. Your fears, magnified by your thoughts, seem to exist in the present. Suddenly, uncertainty is palpable and if it is palpable it can be made certain. At least that’s what I tell myself, as I begin the series of mental gymnastics to try to figure the uncertainty out. “Did I miss out on some pivotal point of my own personal growth by not going to teach full time internationally?” Maybe if I mentally compare how I feel now, to how I felt while teaching English for the summer in Peru, I will know if I got enough out of the experience. Thoughts that used to be small and easily tossed aside begin to carry more weight. “Have my friend Olivia and I outgrown our friendship? What if I did something to push her away? She has always been busy, there must be something else wrong.” I will analyze. I will obsess. I will try and try again to figure out the answer to unanswerable questions until I come to a conclusion. But as is the case with anything built on shaky foundation, my tower of reason eventually falls and I have to start back over. It’s futile really. I can’t change the weather. Why do I think I can out wit the storm? 
Thunder and Lightning The more I try to prevent the storm, the more catastrophic it begins to feel. These thoughts are no longer small clouds, easily ignored and brushed aside. They are no long just dark and heavy threatening rain. They are raw and ragged, opening up in a torrential down pour complete with thunder and lightning. This is the point when my uncertainty begins to metastasize and I am faced with my biggest fear of all. “What if to live the life of my dreams I need to leave my boyfriend.” Now I have done a lot of work with this fear. In moments of sunshine I can objectively look at this thought and see how clearly it defines my past. My parents were unhappily married my entire life. Neither of them really ever became, what I consider, fully formed versions of themselves. There was a lot of lost potential, that only now are they somewhat getting back after divorce even though they both have latched on to new significant others in an attempt to feel complete. I internalized my parent’s unsaid message, that you partner is supposed to be the driving force behind your fulfillment. Disney and the romanticized version of happily ever after, has in no way helped to deter these beliefs. Furthermore, my household was riddled with constant conflict that often resulted in threats of divorce or abandonment. Multiple times, I was asked to choose which parent I was going to stay with before my mom left the house threatening to never come back. I can see why I need to find certainty in doubt that hits so close to my childhood. I can also see how the idea that living the life of my dreams and being with my boyfriend are so heavily tied together in my mind, even if I don’t believe the sentiments of my parents. This understanding, while immensely helpful in my own personal development, does nothing to put at bay my fear. Fear is fear. It has no hard edges from which to hold on to. It only becomes stronger with threats of reason or dismissal. With every effort to try to contain the uncertainty fear creates, there is more of it. More intangibility. More clouds. More rain. There is no use trying to stop the storm, but there is courage in waiting it out. Sunshine In a mental rainstorm, there is no knowing for certain whether or not you will be okay. Sure, you can logically look at your life and see to which end you are leaning. You can take action in the direction you want to go. You can even reevaluate down the road and change direction, but fear isn’t about logic. Your thoughts bearing thunder and lightning are not academic. They are pain. Raw emotion. Fear is the crippling uncertainty as to whether or not we will feel that pain again. So what can be done during a storm? Nothing. And that is a courageous act. In doing nothing, you sit out the rain. You feel the rain. You pain becomes known, without trying to change it or lesson its impact. You cannot reason away you what if’s even when they are small puffy clouds and it is easy to do so. Applying reason to your emotions, only seeks to create the conditions for those dark thoughts to accumulate. Instead, you need to sit with your suffering no matter how illogical it may be. You need to acknowledge your pain as part of yourself. Only then do you gain clarity and in gaining clarity acceptance and understanding. “I have pushed so many friends away in my life I do not want to push away Olivia.” I strongly value our friendship and want to continue to work for it. 
“I could never make my parents happy, and let down my dad by not teaching abroad in Spain like I said I would.” I am not responsible for my parents happiness, I am responsible for mine. “I am a failure for not staying true to my word and teaching abroad right out of college and instead choosing love.” I mourn the loss of something I never had, but I cannot be any more grateful and joyful for the way life has unfolded. No matter how many rain clouds obstruct its view, remember the sun is always still shining. With time you will be able to let yourself feel it.
https://medium.com/invisible-illness/the-wrong-way-to-deal-with-anxiety-a13f053725bf
['Kim Buchwald']
2019-08-15 20:25:53.765000+00:00
['Mental Health', 'Anxiety']
Supporting The Anxiously Attached Partner
However, unlike healthy relationships, there is an insecure attachment style that should be mentioned here: Anxious attachment — marked by one (or both) partners seeking constant approval and reassurance from the other, an overarching fear that their partner is either cheating, planning to cheat, or going to leave them, and bending over backwards to try and please their partner often by neglecting their own needs. These types of relationships are often filled with a lot of passion and toxic peaks and valleys where solutions aren’t found, often because partners may not be able to pinpoint the issues at hand. Or, they’re used to having dramatic and unstable relationships as “normal” — and even comfortable — so a solution may not want to be found. Not for nothing, these relationships should have a time-out until each partner understands themselves, their part in the cycle and where and how it started (*hint: childhood). Yet, the reasons that call for a relationship breather, are the same reasons keeping the push-pull dynamic in full effect. Adding another layer to these unstable relationship dynamics, is that Anxiously-attached partners and Avoidantly-attached partners are usually drawn to each other like two moths to the flame. If we think about it, this makes perfect sense. Each type of insecure attachment style seeks out (chases), and simultaneously pushes away (runs) from the qualities the other has. This works two ways: they want to approach the other for belonging and at the same time they reproach the other because that person is seen as a threat to their sense of autonomy. …two sides to the same coin. For example, those who are Anxiously-attached may want to be “fixed” or “saved” in the relationship where their partner’s needs come before their own in an effort to make them feel whole or complete, while many Avoidantly-attached partners like doing the “fixing” or “saving” to shy away from their own growth. Partners who are Anxiously-attached often have a fear of being left behind or abandoned that can be triggered by an Avoidantly-attached partner who often has a fear of engulfment, or a feeling of losing themselves in the relationship. The Anxiously-attached partner may ‘chase’ the Avoidantly-attached partner to be reassured they’re loved, or wanted, which in turn can cause the Avoidantly-attached partner to ‘run’ from emotional overwhelm. Those with an Anxious attachment can misconstrue their partner’s autonomy or need for personal space as being abandoned by their partner, whereas those with an Avoidant attachment can misconstrue their partner’s belongingness, emotional availability and ease of intimacy as something to fear and avoid at all costs. Anxiously-attached partners commonly report feeling fear, anxiety or impending doom whereas an Avoidantly-attached partner is often seen as shallow, aloof and independent. On the flip-side, Avoidantly-attached partners often report feeling numb, bored or indifferent where the excitement, spontaneity and passion of an Anxiously-attached partner makes them feel alive or gives them a sense of purpose.
https://thebehaviordoc.medium.com/supporting-the-anxiously-attached-partner-41a402a7708a
['Annie Tanasugarn']
2020-12-29 23:44:37.537000+00:00
['Self', 'Love', 'Psychology', 'Life', 'Self Improvement']
Among Us v. Fall Guys: What Makes a Product Sticky?
Among Us v. Fall Guys: What Makes a Product Sticky? The 200 IQ strategies that made Among Us win quarantine Yeah heartbreak sucks, but have you ever been murdered by your bro outside of cams? (Looking at you, Ariful). If you don’t know what the hell I’m talking about, you probably haven’t heard of Among Us. Among Us is an online multiplayer party game like Werewolf or Mafia. It’s been downloaded 100 million times and has the population of Italy in terms of daily active players (60 million). Fall Guys is another online multiplayer party game that saw an explosion in popularity recently but seems to have lost some of its momentum — unlike Among Us, which seems to be here to stay. Why is Among Us stickier than Fall Guys? What makes one product more marketable than another? Here are my thoughts.
https://medium.com/better-marketing/among-us-v-fall-guys-what-makes-a-product-sticky-bb11b996bda3
['Murto Hilali']
2020-10-21 15:26:45.684000+00:00
['Gaming', 'Marketing', 'UX', 'Product', 'Product Design']
Decorators in Python: Fundamentals for Data Scientists
Decorators in Python: Fundamentals for Data Scientists

Understand the basics with a concrete example!

Photo by Goran Ivos on Unsplash

Decorators in Python are used to extend the functionality of a callable object without modifying its structure. Basically, decorator functions wrap another function to enhance or modify its behaviour. This post will introduce you to the basics of decorators in Python. Let's write some Python 3 code that contains examples of decorator implementations:

Decorator Definition

def decorator_func_logger(target_func):
    def wrapper_func():
        print("Before calling", target_func.__name__)
        target_func()
        print("After calling", target_func.__name__)
    return wrapper_func

def target():
    print('Python is in the decorated target function')

dec_func = decorator_func_logger(target)
dec_func()

Output:

air-MacBook-Air:$ python3 DecoratorsExample.py
Before calling target
Python is in the decorated target function
After calling target

The decorator structure above helps us display some notes on the console before and after a target function is called. Here are the simple steps for defining a decorator:

First, we should define a callable object such as a decorator function, which also contains a wrapper function inside.
The decorator function should take a target function as a parameter.
It should return a wrapper function which extends the target function passed as an argument.
The wrapper function should contain a call to the target function together with the code extending the behaviour of the target function.

Photo by Doruk Yemenici on Unsplash

def decorator_func_logger(target_func):
    def wrapper_func():
        print("Before calling", target_func.__name__)
        target_func()
        print("After calling", target_func.__name__)
    return wrapper_func

@decorator_func_logger
def target():
    print('Python is in the decorated target function')

target()

Output:

air-MacBook-Air:$ python3 DecoratorsExample.py
Before calling target
Python is in the decorated target function
After calling target

With the help of the syntactic sugar provided by Python, we can simplify the decorator definition as shown above. Note that @decorator_func_logger is added just before the target function we would like to decorate. We can then directly call the target function. There is no need to explicitly assign the decorator, as we did in the first example.
Photo by Doruk Yemenici on Unsplash

Defining Multiple Decorators and Decorating Functions with Arguments

import time

def decorator_func_logger(target_func):
    def wrapper_func(*args, **kwargs):
        print("Before calling", target_func.__name__)
        target_func(*args, **kwargs)
        print("After calling", target_func.__name__)
    return wrapper_func

def decorator_func_timeit(target_func):
    def wrapper_func(*args, **kwargs):
        ts = time.time()
        target_func(*args, **kwargs)
        te = time.time()
        print(target_func.__name__, (te - ts) * 1000)
    return wrapper_func

@decorator_func_logger
@decorator_func_timeit
def target(loop):
    count = 0
    print('Python is in the decorated target function')
    for number in range(loop):
        count += number

target(100)
target(3000)

Output:

air-MacBook-Air:$ python3 DecoratorsExample.py
Before calling wrapper_func
Python is in the decorated target function
target 0.015974044799804688
After calling wrapper_func
Before calling wrapper_func
Python is in the decorated target function
target 0.47397613525390625
After calling wrapper_func

You can easily decorate a target function with multiple decorators by adding several decorators before the target function using the '@' syntax. The order in which the decorators are executed is the same as the order in which they are listed before the target function. Note that we have a parameter, loop, in our target function. This is no issue as long as the same parameters are accepted by the wrapper function. To make sure that the decorator is flexible enough to take an arbitrary number of parameters, (*args, **kwargs) parameters are used for the wrapper function.

Key Takeaways

Decorators define reusable code blocks that you can apply to a callable object (functions, methods, classes, objects) to modify or extend its behaviour without modifying the object itself. Consider that you have many functions in your script performing many different tasks and you need to add specific behaviour to all of your functions. In such a case, it is not a good solution to copy the same code block into each of your functions to get the required functionality. You can simply decorate your functions instead.

Conclusion

In this post, I explained the basics of decorators in Python. The code in this post is available in my GitHub repository. I hope you found this post useful. Thank you for reading!
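One detail worth flagging in the multi-decorator output above: the logger reports the name wrapper_func rather than target, because each wrapper replaces the function it decorates and hides its metadata. A common remedy, not covered in the article itself, is functools.wraps from the standard library; the snippet below is a minimal sketch of that idea.

import functools
import time

def decorator_func_timeit(target_func):
    @functools.wraps(target_func)  # copy __name__, __doc__, etc. onto the wrapper
    def wrapper_func(*args, **kwargs):
        ts = time.time()
        result = target_func(*args, **kwargs)
        te = time.time()
        print(target_func.__name__, (te - ts) * 1000)
        return result
    return wrapper_func

@decorator_func_timeit
def target(loop):
    return sum(range(loop))

print(target.__name__)  # prints 'target' instead of 'wrapper_func'
print(target(100))

With @functools.wraps applied inside every decorator, stacked decorators such as the logger and timer above would report the original function name in their messages.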
https://towardsdatascience.com/decorators-in-python-fundamentals-for-data-scientists-eada7f4eba85
['Erdem Isbilen']
2020-06-24 07:25:59.162000+00:00
['Python', 'Data Science', 'Decorators', 'Programming', 'Object Oriented']
How to Take Responsibility for Women Who Never Do Anything Wrong
Red Flags Red flags are everywhere. There are signs all around that tell you where you should and should not be. There is a gut feeling, an intuition that screams at you when it’s time to leave a situation, to find the nearest exit, and create a shift. And then, you talk yourself out of it. You settle. You decide to love yourself a little less every day and live with a circumstance that doesn’t make you happy. The moment you make this decision, whatever happens next is your responsibility. Consequences When you ignore red flags, when you go against what you know in your heart is right, you put yourself in danger of experiencing some pretty painful consequences. But, such is life. We are not always going to make the right choices, and no matter what, we must lie in the beds we have made. The key to living with your consequences is accepting your role and responsibility in what has happened. Responsibility It’s time to stop blaming everyone else for what “they’ve done” to you. Instead, question how you put yourself in such a position in the first place! Reverse engineer the problem, go back in time and pinpoint the exact moment you saw or felt the first red flag. That’s the moment you should have walked away. That’s the moment you decided to take on the consequence. Take responsibility, quit playing the victim, and stop giving the power to the person or the circumstance that hurt you. None of this would have happened without your consent. Take the responsibility and begin feeling the shift. Always remember, although you are free to make your own decisions, you are never free from the consequences of those decisions.
https://medium.com/the-gorgeous-girls-guide/how-to-take-responsibility-for-women-who-never-do-anything-wrong-7096bdeb37a3
['Elisabeth Ovesen']
2020-09-19 20:41:33.850000+00:00
['Mental Health', 'Inspiration', 'Advice', 'Life Lessons', 'Self Improvement']
An App With SCA: Flow Testing
Last week we discussed how to export the dependencies of our Snake app to gain full control over our codebase. We also finalized the test of the state we were not able to finish a couple of weeks ago. In the process of factoring out the dependencies, we worked on the timer, moving it from the Reducer to the Environment so that we could control it. However, we did not use it in our tests. Today, we complete the test of our app, showing an interesting property of the Composable Architecture Test Support: thanks to its ergonomic API, it is possible to fully test our app in a very simple way.

Our codebase so far

In order to properly understand our tests, let me recall here the pieces of the app we need. The first element we need to recall is the Environment. This is the part of the app that gathers all the dependencies used to interact with the external world. The Environment is a container for a set of APIs: we use one version of those APIs for the production environment, but we use some mocked implementations in the test environment, to properly control them. Another piece of code we need to remember is the Reducer. The reducer is a pure function that takes the current State, the Action performed, and the Environment, and it updates the State. This is the single point where we implement the whole logic of our app. This is the function that we are going to test. Lastly, let's recall the State and the Action. They are pretty simple: the State contains the structure that describes the Snake, the current location of the Mouse, and whether we should present an alert or not. The Action is an enum with the cases handled by the Reducer. For the sake of completeness, this is what they look like:

Preparing the test

Now, let's set up the testing environment. As already stated, we would like to fully control our test environment. That means, for example, that we need to know exactly what is returned by the dependencies and when the publishers publish a new value. And the value they publish, of course. Luckily, this is pretty simple to achieve. We defined our Environment so that we can plug in the implementation we want. Thus, we can write a mock version of the environment as follows: In this mocked environment, we are going to use an RNG that always returns 0. This can be used to compute a new location for the Mouse. We also pass a parameter that is an Effect, generic in the types produced by the timer. This step is important because it allows us to pass a Publisher that we fully control as a dependency for the reducer. Our test can be deterministic and fully predictable in this way. From the theory, we know that a good test should follow the AAA rule: Arrange, Act, Assert. We are not done with the Arrange part yet. We created the dependency we need, but we still need to assemble it into the test. So, let's move to the actual test to complete the first step: I omitted the Act and Assert to focus on the Arrange part. In the testGameFlow() function, we are creating a PassthroughSubject. This is an object of the Combine framework that can act both as a Publisher and as a Subscriber. We are going to use it mainly as a publisher in our test. Then we create the Snake for the initial state. Finally, we create the TestStore using another zeroGenerator that will place the initial mouse in the top-left corner of the game field. For the Environment, we use the mock we prepared, passing the PassthroughSubject as an effect.
At this point, our test has all the ingredients to be finally written, so… let’s cook it! Writing the Test If you remember from the State Testing article, the ComposableArchitectureTestSupport allows us to Act and Assert in a single step, thanks to several utility functions and types the good guys of pointfree.co prepared for us. In short, the TestStore has an assert method that lets us specify a set of Step s to simulate the evolution of our app. There are different types of steps: one to simulate an action entering in the reducer, another to simulate an action received from an effect, a generic step to perform some operation, and a step to update the environment. By chaining wisely these steps, we can simulate a whole execution flow for the app. We can for, example, startGame , move the snake, changeCurrent(direction:) , move the snake again, and so on. After every step, we have the possibility to describe how the resulting state should be. For example, after a move action, we describe the new positions for the snake’s head and for its body. After a changeCurrent(direction:) , we describe where the snake is facing, and so on. If any of these descriptions does not match the actual execution outcome, the test fails. Now, let’s see what the final aspect of this test is: The first 8 lines of this snippet are the same lines of the Arrange step of the previous one. The interesting part comes after line 10. In the TestStore.assert function we pass a set of steps that describes the evolution of a game. After the game start, for example, we update the current direction to up . At this point, we want to make the snake move by one step. In the actual snake game, this action is triggered by the timer. In the tests, we have to simulate that and we can do it by leveraging our PassthroughSubject publisher. The do step lets us access the subject and use it to send a new DispatchTime . This simulates the firing of the timer. If we look at the reducer, we know that every time the timer fires, a move action is injected in the reducer. Therefore, we can handle it by using the receive step and describing the new expected state after the snake has moved. We then perform the same set of changeCurrent(direction:) , send a DispatchTime through a do step, and receiving a move action a couple more time to make the snake move left and down . This last movement, however, has a slightly different outcome: before the last move , the snake was facing its own body. By moving toward it, the game ends. This condition is captured at line 48, by describing the expected state as having an AlertState with the "Game Over" title, a message stating the exact length of the snake and an Ok button. Remember, the Store is still subscribed to the timer Effect . We need to clean up our resources otherwise the ComposableArchitecture would warn us with the following message: The last do step at line 54 does exactly that: by signaling the completion of the publisher, the Store is able to cancel the subscription to it and the cleanup is performed. Conclusion In this article, we harnessed the power of a completely controlled Environment . We have been able to fully test the execution of the app: from the startGame action to the final move that lead to the Game Over alert to be presented.
https://medium.com/swlh/an-app-with-sca-flow-testing-ada82518d313
['Riccardo Cipolleschi']
2020-10-22 13:38:42.966000+00:00
['Swift Programming', 'App Development', 'iOS', 'Testing', 'Apple']
Own Well Being
Recent scatterings of thoughts and feelings Striving for elevation above chaotic turmoil Meditatively focused on my own well being Events and happenings which I can control Faithfully nurturing spiritual inner being. Multiple things I wish were different But not allowing emotional attachment Knowing little I can do to change them, Maintaining peace in heart and soul Demeanor and feelings, I can control. Watching fiery sun set in the west Profusion of painted reds and orange Faint moon begins appearing in east Enjoying surreal beauty in day’s end With fading day evolving into night. Concerned about where country is right now Marching and demonstrating on occasions, But most important to maintain inner peace By assiduously nurturing my own well being.
https://medium.com/flicker-and-flight/own-well-being-58218ae651b5
['Randy Shingler']
2020-10-30 17:15:30.184000+00:00
['Self-awareness', 'Mindfulness', 'Wellbeing', 'Emotions', 'Poetry']
K-Nearest Neighbors Algorithm Using Python
With the business world entirely revolving around Data Science, it has become one of the most sought-after fields. In this article on the KNN algorithm, you will understand how the KNN algorithm works and how it can be implemented using Python.

What is the KNN Algorithm?

"K nearest neighbors or KNN Algorithm is a simple algorithm that uses the entire dataset in its training phase. Whenever a prediction is required for an unseen data instance, it searches through the entire training dataset for the k most similar instances and the data with the most similar instance is finally returned as the prediction."

KNN is often used in search applications where you are looking for similar items, like finding items similar to this one. The algorithm suggests that if you're similar to your neighbors, then you are one of them. For example, if an apple looks more similar to a peach, pear, and cherry (fruits) than to a monkey, cat, or rat (animals), then most likely the apple is a fruit.

How does the KNN Algorithm work?

The k-nearest neighbor algorithm uses a very simple approach to perform classification. When tested with a new example, it looks through the training data and finds the k training examples that are closest to the new example. It then assigns the most common class label (among those k training examples) to the test example.

What does 'k' in the KNN Algorithm represent?

k in the KNN algorithm represents the number of nearest neighbor points that vote for the new test data's class. If k=1, then test examples are given the same label as the closest example in the training set. If k=3, the labels of the three closest examples are checked and the most common (i.e., occurring at least twice) label is assigned, and so on for larger ks.

KNN Algorithm Manual Implementation

Let's consider an example. Suppose we have the height, weight, and corresponding T-shirt size of several customers. Your task is to predict the T-shirt size of Anna, whose height is 161 cm and whose weight is 61 kg.

Step 1: Calculate the Euclidean distance between the new point and the existing points. For example, the Euclidean distance between point P1(1,1) and P2(5,4) is sqrt((5-1)^2 + (4-1)^2) = sqrt(25) = 5.

Step 2: Choose the value of K and select the K neighbors closest to the new point. In this case, select the top 5 neighbors having the least Euclidean distance.

Step 3: Count the votes of all the K neighbors / Predicting Values. Since, for K = 5, we have 4 T-shirts of size M, according to the kNN algorithm, Anna (height 161 cm, weight 61 kg) will fit into a T-shirt of size M.

Implementation of kNN Algorithm using Python

Handling the data
Calculate the distance
Find k nearest points
Predict the class
Check the accuracy

"Don't just read it, practice it!"

Step 1: Handling the data

The very first step will be handling the data. We will use the iris dataset: open the file with the open function and read the data lines with the reader function available under the csv module.

import csv

with open(r'C:\Users\Atul Harsha\Documents\iris.data.txt') as csvfile:
    lines = csv.reader(csvfile)
    for row in lines:
        print(', '.join(row))

Now you need to split the data into a training dataset (for making the prediction) and a testing dataset (for evaluating the accuracy of the model). Before you continue, convert the flower measures loaded as strings to numbers. Next, randomly split the dataset into the train and test datasets.
Generally, a standard ratio of 67/33 is used for the train/test split.

Putting it all together, let's define a function handleDataset which will load the CSV when provided with the exact filename and split it randomly into train and test datasets using the provided split ratio.

import csv
import random

def handleDataset(filename, split, trainingSet=[], testSet=[]):
    with open(filename, 'r') as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
        for x in range(len(dataset)-1):
            for y in range(4):
                dataset[x][y] = float(dataset[x][y])
            if random.random() < split:
                trainingSet.append(dataset[x])
            else:
                testSet.append(dataset[x])

Let's check the above function and see if it is working fine. Testing the handleDataset function:

trainingSet = []
testSet = []
handleDataset(r'iris.data', 0.66, trainingSet, testSet)
print('Train: ' + repr(len(trainingSet)))
print('Test: ' + repr(len(testSet)))

Step 2: Calculate the distance

In order to make any predictions, you have to calculate the distance between the new point and the existing points, as you will be needing the k closest points. In this case, for calculating the distance, we will use the Euclidean distance. This is defined as the square root of the sum of the squared differences between the two arrays of numbers. Specifically, we need only the first 4 attributes (features) for the distance calculation, as the last attribute is a class label. So one approach is to limit the Euclidean distance to a fixed number of dimensions, thereby ignoring the final dimension. Summing it up, let's define the euclideanDistance function as follows:

import math

def euclideanDistance(instance1, instance2, length):
    distance = 0
    for x in range(length):
        distance += pow((instance1[x] - instance2[x]), 2)
    return math.sqrt(distance)

Testing the euclideanDistance function:

data1 = [2, 2, 2, 'a']
data2 = [4, 4, 4, 'b']
distance = euclideanDistance(data1, data2, 3)
print('Distance: ' + repr(distance))

Step 3: Find k nearest points

Now that you have calculated the distance from each point, we can use it to collect the k most similar points/instances for the given test data/instance. This is a straightforward process: calculate the distance with respect to all the instances and select the subset having the smallest Euclidean distances. Let's create a getKNeighbors function that returns the k most similar neighbors from the training set for a given test instance.

import operator

def getKNeighbors(trainingSet, testInstance, k):
    distances = []
    length = len(testInstance)-1
    for x in range(len(trainingSet)):
        dist = euclideanDistance(testInstance, trainingSet[x], length)
        distances.append((trainingSet[x], dist))
    distances.sort(key=operator.itemgetter(1))
    neighbors = []
    for x in range(k):
        neighbors.append(distances[x][0])
    return neighbors

Testing the getKNeighbors function:

trainSet = [[2, 2, 2, 'a'], [4, 4, 4, 'b']]
testInstance = [5, 5, 5]
k = 1
neighbors = getKNeighbors(trainSet, testInstance, k)
print(neighbors)

Step 4: Predict the class

Now that you have the k nearest points/neighbors for the given test instance, the next task is to predict the response based on those neighbors. You can do this by allowing each neighbor to vote for its class attribute, and taking the majority vote as the prediction. Let's create a getResponse function for getting the majority-voted response from a number of neighbors.
import operator

def getResponse(neighbors):
    classVotes = {}
    for x in range(len(neighbors)):
        response = neighbors[x][-1]
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedVotes[0][0]

Testing the getResponse function:

neighbors = [[1,1,1,'a'], [2,2,2,'a'], [3,3,3,'b']]
print(getResponse(neighbors))

Step 5: Check the accuracy

Now that we have all of the pieces of the kNN algorithm in place, let's check how accurate our prediction is! An easy way to evaluate the accuracy of the model is to calculate the ratio of the total correct predictions out of all predictions made. Let's create a getAccuracy function that sums the total correct predictions and returns the accuracy as a percentage of correct classifications.

def getAccuracy(testSet, predictions):
    correct = 0
    for x in range(len(testSet)):
        if testSet[x][-1] == predictions[x]:
            correct += 1
    return (correct / float(len(testSet))) * 100.0

Testing the getAccuracy function:

testSet = [[1,1,1,'a'], [2,2,2,'a'], [3,3,3,'b']]
predictions = ['a', 'a', 'a']
accuracy = getAccuracy(testSet, predictions)
print(accuracy)

Since we have created all the pieces of the KNN algorithm, let's tie them together using a main function.

# Example of kNN implemented from scratch in Python
import csv
import random
import math
import operator

def handleDataset(filename, split, trainingSet=[], testSet=[]):
    with open(filename, 'r') as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
        for x in range(len(dataset)-1):
            for y in range(4):
                dataset[x][y] = float(dataset[x][y])
            if random.random() < split:
                trainingSet.append(dataset[x])
            else:
                testSet.append(dataset[x])

def euclideanDistance(instance1, instance2, length):
    distance = 0
    for x in range(length):
        distance += pow((instance1[x] - instance2[x]), 2)
    return math.sqrt(distance)

def getKNeighbors(trainingSet, testInstance, k):
    distances = []
    length = len(testInstance)-1
    for x in range(len(trainingSet)):
        dist = euclideanDistance(testInstance, trainingSet[x], length)
        distances.append((trainingSet[x], dist))
    distances.sort(key=operator.itemgetter(1))
    neighbors = []
    for x in range(k):
        neighbors.append(distances[x][0])
    return neighbors

def getResponse(neighbors):
    classVotes = {}
    for x in range(len(neighbors)):
        response = neighbors[x][-1]
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedVotes[0][0]

def getAccuracy(testSet, predictions):
    correct = 0
    for x in range(len(testSet)):
        if testSet[x][-1] == predictions[x]:
            correct += 1
    return (correct / float(len(testSet))) * 100.0

def main():
    # prepare data
    trainingSet = []
    testSet = []
    split = 0.67
    handleDataset('iris.data', split, trainingSet, testSet)
    print('Train set: ' + repr(len(trainingSet)))
    print('Test set: ' + repr(len(testSet)))
    # generate predictions
    predictions = []
    k = 3
    for x in range(len(testSet)):
        neighbors = getKNeighbors(trainingSet, testSet[x], k)
        result = getResponse(neighbors)
        predictions.append(result)
        print('> predicted=' + repr(result) + ', actual=' + repr(testSet[x][-1]))
    accuracy = getAccuracy(testSet, predictions)
    print('Accuracy: ' + repr(accuracy) + '%')

main()

This was all about the kNN algorithm using Python. With this, we come to the end of this article. If you have any queries regarding this topic, please leave a comment below and we'll get back to you.
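For comparison, the same workflow can be reproduced in a few lines with scikit-learn. This is a sketch only, not part of the original tutorial; it assumes scikit-learn is installed and uses its bundled copy of the Iris dataset instead of the local iris.data file.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load the bundled Iris dataset (features and class labels)
X, y = load_iris(return_X_y=True)

# Roughly the same 67/33 split used above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# k = 3 neighbors, Euclidean distance by default
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

predictions = knn.predict(X_test)
print('Accuracy:', accuracy_score(y_test, predictions) * 100, '%')

The library version handles distance computation, neighbor search, and voting internally, which makes it a useful cross-check for the from-scratch implementation above.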
If you wish to check out more articles on the market’s most trending technologies like Python, DevOps, Ethical Hacking, then you can refer to Edureka’s official site. Do look out for other articles in this series which will explain the various other aspects of Data Science.
https://medium.com/edureka/k-nearest-neighbors-algorithm-b87ee824b860
['Sahiti Kappagantula']
2020-09-28 13:27:41.499000+00:00
['Machine Learning', 'Python', 'Data Science', 'Knn Algorithm', 'Programming']
The Best New App for Entrepreneurs to Record Podcasts
Author: Jason Parks / Source: Entrepreneur It was a goal of mine to get a digital marketing podcast published on the iTunes and Android store. The issue was that I didn’t want to have to go through the hassle of creating an XML feed to host the podcast on my site. Like many entrepreneurs and business owners, the XML feed deterred me from my ambition of creating my own podcast. I know I could have figured this out or asked one of the developers at our marketing agency to help. At the end of the day though, the XML feed was a hurdle and because of this, I didn’t pursue my podcasting ambition. I know that there are a lot of entrepreneurs like me who have yet to record a podcast due to the hoops that you’d have to jump through with this XML feed. That’s about to change! You are my Anchor. I was recently introduced to Anchor, an app that allows you to broadcast your voice, music and conversations, all for free. Gary Vaynerchuk, one of the most popular marketers in the world right now, had Anchor’s CEO on one of his podcasts. When I heard that Anchor allows you to get your podcasts pushed to Apple and Google, I immediately downloaded the app. According to The Verge, the app can now publish recordings to podcast platforms, sending them to both Apple and Google’s collection of shows for even more people to find. While podcasting has always been open to anyone, Anchor is trying to remove the last few hoops that people had to jump through, like setup, hosting and distributing an RSS feed so that listeners can subscribe to the podcast. “Unfamiliar with RSS?” Anchor wrote in a blog post. “Cool, let’s keep it that way.” BOOM. I finally had a place to record my podcasts and easily distribute them to Apple and Google. And 99.6 percent of new smartphones run Android or iOS. If you want to easily get your podcast distributed, Anchor is your go-to… Click here to read more
https://medium.com/oneqube/the-best-new-app-for-entrepreneurs-to-record-podcasts-73d39136e55b
[]
2018-03-01 17:05:05.428000+00:00
['Podcast', 'Digital Marketing', 'Entrepreneurship']
It’s Time for Self-Help Gurus to Sit Down
It’s Time for Self-Help Gurus to Sit Down Positivity at the expense of reality is destructive. Photo by Dollar Gill on Unsplash Suffice it to say I’ve had quite an eventful couple of years. Since 2018, I suffered a series of losses, each challenging on their own, but all arduous as a whole. I’ve been dealing with trauma and the chronic physical pain that often accompanies it. I’ve tried meditating, exercising, stretching, talking, crying, and yelling the pain out of me. I’ve seen a psychotherapist specializing in grief, a psychotherapist specializing in somatic therapy, my medical doctor on multiple occasions, a chiropractor, a physiotherapist specializing in vertigo, a physiotherapist trained in dry needling, and a massage therapist. And while a lot of the aforementioned therapies and approaches have helped, none of them singlehandedly “cured” me of my pain. And they didn’t erase my grief, either. They simply provided me with better tools for coping with it. Now that some time has passed, and after a lot of processing on my end, I’m a bit better equipped at navigating grief, as well as all the other unfortunate events that have ensued. But this experience has allowed me to see beyond the veil, to recognize a lot of my own privilege, and to contend with the fact that so many people are suffering every day for things they have no control over. I’d just never really noticed, that is – not until it had happened to me. With this new knowledge brought the recognition of just how many people in the personal growth community are grossly ill-equipped at dealing with trauma and suffering. I’ve discovered that behind the idea of “manifesting” is an industry that profits off white privilege and the systemic inequalities that perpetuate it. I’ve witnessed self-proclaimed “gurus,” “lightworkers,” “spiritual coaches” and the likes, selling the notion of transcending one’s emotions and traumas, while directly perpetuating the use of spiritual bypassing. I’ve had my own emotions dismissed, downplayed, and disregarded by so-called “experts” who charge fees for their services and believe themselves more evolved than the rest of us. And I’m here to tell you why this form of toxic spirituality is harmful, exploitative, and long past its expiration date. Manifesting is a facet of privilege. In order for you to believe you have complete control over your environment, to the extent that you can think something into existence by persistently wishing for it, you must live a life in which you haven’t yet been proven wrong. Which means you’re privileged. As my university stats professor drilled into my head a decade ago, correlation does not equal causation. Just because things go right for you, doesn’t mean you caused them to. Being privileged doesn’t mean you’ve never had something unfavorable happen to you, it just means the things you long for were probably always at your fingertips, you just hadn’t realized it. More than likely, the odds were already skewed in your favor. According to The Law of Attraction’s website, manifesting “is where your thoughts and your energy can create your reality,” and so if you think and act positively, then you’ll attract favorable circumstances. The flip side of this is that when things go wrong, it’s also because of you. Which is glorified victim-blaming, and discounts so many factors outside of our control that prohibit or hinder people’s successes. There are significant boundaries to manifesting; for instance, racial inequality. 
So manifesting implies that you can attract wealth, fame, and career success by changing your thoughts. But how does this notion translate to systemic racial inequalities? Let’s first look at a bit of history for why we have a racial wealth divide today. According to a 2019 article in the Center for American Progress, the US federal government has directly contributed to racist housing policies. After The Great Depression, the Home Owners’ Loan Corporation and the Federal Housing Administration (FHA) promoted residential segregation by keeping middle-class neighborhoods white and making it difficult for black people to qualify for mortgages. This, in turn, led white people to earn more equity, allowed them access better to education (due to tax-funded schooling), and enabled them to afford certain opportunities for their children, such as extracurricular activities and college tuition, that were less accessible in more impoverished communities. Of course, the opposite happens for black people. “African Americans face systematic challenges in narrowing the wealth gap with whites,” reports the Center for American Progress. “The wealth gap persists regardless of households’ education, marital status, age, or income.” With less access to wealth and equity comes a lack of opportunity for higher education, which inevitably impacts employment prospects. Further to that, according to the Stanford Centre on Poverty & Inequality, a huge barrier preventing people from achieving financial and employment success can be reduced to the spelling of their name. People with “white-sounding” names are more likely to get callbacks for interviews, making them more likely to get the job, and leaving them under the false impression that it’s their skills that earned it. Equally-qualified, educated, and skilled black people — those beating the odds of systemic inequality raised against them — can still be turned down from jobs they apply to, simply because of the fact that their names sound “black.” And there are many other barriers to success outside of racism that manifesting doesn't account for. According to the Centre for Disease Control and Prevention, the CDC-Kaiser Permanente Adverse Childhood Experiences (ACE) Study revealed that childhood abuse or neglect increases a person’s risk of developing negative outcomes, such as depression, anxiety, substance abuse, cancer, diabetes, and even suicide, as an adult. Barriers like the gender pay gap, suffering from mental health conditions, growing up with neglect and abuse, being raised in poverty: these are all factors that are outside of a person’s control and can greatly impact their ability to “think their way” into success. The personal growth community believes you are your only roadblock. “The truth is that financial success starts in the mind and the number one thing holding many people back is their belief system concerning wealth and money,” says Jack Canfield, motivational speaker, and corporate trainer. On his website, Canfield boasts a subscriber’s list of 2.5 million people and has allegedly sold more than 500 million books worldwide, many of which earned him the title of New York Times’ Bestseller. “I am successful because I have never once believed my dreams were someone else’s to manage,” writes motivational speaker and author Rachel Hollis in her book Girl Wash Your Face. Founder of The Hollis Company alongside her soon-to-be ex-husband Dave, Rachel’s company focuses on personal growth and motivational seminars. 
According to the company’s LinkedIn page, they are based on six core values, one of which is centered on the belief that “our only competition is who we were yesterday.” And how can I even discuss toxic spirituality without discussing the eponymous face for the personal growth movement, motivational speaker, and author Tony Robbins? On his website, Robbins boasts the ability to help you master every area of your life, and he sells anything from training programs to supplements and retreats. Despite the fact that he’s been accused of berating abuse victims, subjecting his followers to dangerous techniques, and sexual harassment, his personal and professional development program is said to be the #1 of all time, with more than four million people in attendance to date. “The only thing that’s keeping you from getting what you want is the story you keep telling yourself,” says Robbins. Now I beg to differ. What do these three personal growth “gurus” have in common? They’re all white, they’re all rich, and they’ve all ignored systemic inequalities as a possible barrier to achieving success. And if they were to admit that there are other reasons someone may not land their dream job or make six figures, it would dismantle their entire platform and raison d’etre. Consequently, by ignoring these barriers, they’re also directly profiting off them. Spiritual bypassing is a self-righteous form of avoidance coping. “Once you attend the motivational workshop I went to last weekend, you won’t sink to the level of getting angry over this,” a friend of mine said to me recently, efficiently undermining my emotions, suggesting that anger wasn’t a healthy, nor appropriate, response. I didn’t say anything more to her, mostly because the entire purpose of her spiritual bypassing was to circumvent my experience with self-righteous “holier than thou” spiritual rhetoric, and in doing so, creating a more comfortable reality for her. At that moment, though, I vowed to always be someone who was comfortable feeling anger. Spiritual bypassing, by its very definition, is harmful. Author and psychotherapist John Welwood, his 2000 book “Toward a Psychology of Awakening,” described spiritual bypassing as “the use of spirituality, spiritual beliefs, spiritual practices, and spiritual life to avoid experiencing the emotional pain of working through psychological issues.” Spiritual bypassing is the act of avoiding feelings deemed to be “negative,” blaming unfortunate circumstances as “vibrating at a lower frequency,” and claiming that transcending one’s emotional reactions is a goal to strive for. It is the belief that everything has a “higher purpose,” and that difficult circumstances are no more than lessons in disguise. And it’s an efficient form of avoidance coping, which a 2011 article in the Journal of Personality defines as “attempting to evade a problem and deal with it indirectly.” Changing its name but not its practice is just as futile. Autor Michael Beckwith decided that it might sound better to call it “spiritual shapeshifting” and talks about how he “shapeshifted” the energy from his healthy knee to the energy of his injured knee, holding a “higher, purer vibration” and somehow magically ridding himself of pain and inflammation. Now I’m not exactly sure which frequency I’m vibrating at, but I can tell you that promoting the notion that your thoughts can cure your pain is a dangerous narrative to spin, particularly for those suffering from a debilitating disease or chronic pain. 
Claiming that an injury, whether physical, emotional or otherwise, being reduced to no more than “low frequency” implies that it’s entirely under your control. Does this line of thinking translate to cancer sufferers? Can someone with multiple sclerosis think themselves out of symptoms? Is it our fault if we’re diagnosed with a life-threatening illness? Can you see how this rhetoric is inherently damaging? Anyone who’s ever experienced acute suffering and trauma can tell you that while positive thinking as a concept can be helpful at times, positivity at the expense of reality is utterly offensive and harmful. It may make other people feel good to respond to your pain with “love and light,” but it entirely disregards the very real suffering you’re going through. And it efficiently undermines your experience. “Spiritual laws offer an elegant solution to the problem of unfairness,” writes author Kate Bowler. “They create a Newtonian universe in which the chaos of the world seems reducible to simple cause and effect. The stories of people’s lives can be plotted by whether or not they follow the rules. In this world there is no such thing as undeserved pain.” But this is not the world most of us are living in. And if you still live in this world, count yourself lucky that you haven’t been kicked out yet. Those who spiritually-bypass live in their own world, and are uncomfortable with yours. When you suffer in a world not governed by spiritual laws, you are not required to find a silver lining. The notion that “everything happens for a reason” is not true, and it’s very harmful. The universe isn’t a sentient being out there to teach you a lesson by causing the death of a loved one or unleashing a pandemic. Your personal growth is not the focus of the entire universe. It may help some people feel better about themselves to spiritually bypass your experience because it gives them a false sense of control, and prevents them from having to empathize with (and thus, acknowledge) your tangible fear. By avoiding the reality of your painful experience, spiritual bypassing enables people to separate themselves from believing it could also happen to them. The truth is, every single day, horrible things happen to people who don’t deserve them, by absolutely no fault of their own. I’ve seen it, I’ve witnessed it, and I’ve lived it. Failing to acknowledge this fact is a gross disservice to ourselves and to others. It’s not about transcending your emotions and always remaining positive, it’s about processing your life and adapting to its ups and downs. People who are suffering don’t need to be victim-shamed or feel at fault for their circumstances. Talk about adding insult to injury. Those entitled enough to preach toxic spiritual rhetoric to vulnerable people need to take inventory of their own lives and process their unresolved traumas. Anyone feeling comfortable profiting off others with their MLM essential oils or motivational seminars should do us all a favor and sit down. And we should all take stock of our privilege; while it may be invisible to us, it’s certainly obvious to others. Life is hard enough as it is. Let’s not make it any harder.
https://medium.com/swlh/its-time-for-self-help-gurus-to-sit-down-c1f2693d0239
['Shannon Leigh']
2020-10-23 22:02:47.850000+00:00
['Personal Growth', 'Self', 'Psychology', 'Trauma', 'Self Improvement']
UI Design Notes (Tools): Flow — an amazing tool! Build the animation and the code is done!
Design, code and collaborate. Framer is the best way to create interactive designs from start to finish. Design from scratch and then easily turn…
https://medium.com/as-a-product-designer/ui%E8%A8%AD%E8%A8%88%E7%AD%86%E8%A8%98%E5%B7%A5%E5%85%B7%E7%AF%87-flow-%E7%A5%9E%E5%99%A8-%E5%8B%95%E6%85%8B%E5%81%9A%E5%A5%BDcode-%E5%B0%B1%E5%AE%8C%E6%88%90-5d4e456a8b8f
['Laura Lin']
2018-08-22 02:42:43.466000+00:00
['Animation', 'Design', 'Flow', 'Sketch', 'iOS']
The Top 5 Deep Learning Libraries And Frameworks
In this article, I'm going to cover the top 5 Deep Learning Libraries & Frameworks. Here you go —

Keras

Developed by François Chollet, a researcher at Google, Keras is a Python framework for deep learning. Keras has been used at organizations like Google, CERN, Yelp, Square, Netflix, and Uber. The advantage of Keras is that it uses the same Python code to run on CPU or GPU. Keras models accept three types of inputs:

NumPy arrays, just like Scikit-Learn and many other Python-based libraries. This is a good option if your data fits in memory.
TensorFlow Dataset objects. This is a high-performance option that is more suitable for datasets that do not fit in memory and that are streamed from disk or from a distributed filesystem.
Python generators that yield batches of data (custom subclasses of the keras.utils.Sequence class).

Keras features a range of utilities to help you turn raw data on disk into a Dataset:

tf.keras.preprocessing.image_dataset_from_directory — turns image files sorted into class-specific folders into a labeled dataset of image tensors.
tf.keras.preprocessing.text_dataset_from_directory — turns text files sorted into class-specific folders into a labeled dataset of text tensors.

In Keras, layers are simple input-output transformations. Its preprocessing layers include:

Vectorizing raw strings of text via the TextVectorization layer
Feature normalization via the Normalization layer
Image rescaling, cropping, or image data augmentation

Example —
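The original example that followed is not included in this excerpt, so below is a minimal sketch of the pieces described above — a Normalization preprocessing layer feeding a small Keras model trained on NumPy arrays. The data here is random and purely illustrative, and the snippet assumes TensorFlow 2.6 or later, where Normalization is available under tf.keras.layers.

import numpy as np
import tensorflow as tf

# Toy data held in memory as NumPy arrays (the first input type listed above)
x = np.random.rand(256, 4).astype("float32")
y = np.random.randint(0, 2, size=(256,))

# Preprocessing layer: learns per-feature mean and variance from the data
norm = tf.keras.layers.Normalization()
norm.adapt(x)

model = tf.keras.Sequential([
    norm,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32)

The same model definition would also accept a tf.data.Dataset or a keras.utils.Sequence generator in place of the NumPy arrays, which is what makes the input-pipeline options listed above largely interchangeable.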
https://medium.com/ai-in-plain-english/the-top-5-deep-learning-libraries-and-frameworks-dfc7f3a492ec
['Priyesh Sinha']
2020-09-25 13:59:19.253000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Programming', 'Deep Learning']
Stupid Punctuation Rules
Stupid Punctuation Rules This is only a sampling of stupidity, as we all know. Photo by Dan Meyers on Unsplash Yes, it’s true: Even a grammar fan like me can recognize that some of the rules demanded by style guides and grammar authoritarians are just plain DUMB. As the old saying goes, you have to know the rules before you can break them. So, here are some stupid or arcane rules of punctuation that should probably be disposed of (though, of course, if your publisher demands that you conform to a particular style guides’ dictates, then you’ll probably want to do so in order to get paid.) Stupid Rule 1. Periods and commas go inside of parentheses and quotation marks, but other punctuation goes outside. What? This rule makes no earthly sense. Why make an exception for two marks to a rule that applies to every other punctuation mark? There’s nothing in this dictum that makes comprehension clearer. Look at these examples: Abyssinians, or “Abbys,” are a very athletic breed of cat. Abyssinians, or “Abbys”, are a very athletic breed of cat. People don’t “own” cats; people are part of a cat’s “staff.” People don’t “own” cats; people are part of a cat’s “staff”. In neither of these two examples does moving the comma or the period to the outside of the quotation marks change the meaning of the sentence. The Brits don’t have this rule, just American English. Time for it to go. Stupid Rule 2. The names of plays are encased in quotation marks — unless they become movies or books, at which point those same names are changed to italics. For pity’s sake, why? Does this make any difference: “Hamilton: An American Musical” by Lin-Manuel Miranda (musical) Hamilton: The Revolution, by Jeremy McCarter and Lin-Manuel Miranda (book) By convention, we understand that italics indicate a title — though many don’t realize the specifics. Since they don’t, why keep doing it? A rule is really only useful if people know about it and use it. Stupid Rule 3. The first word of a sentence that follows a question mark is capitalized — except when it isn’t. Okay, I wrote that a bit snarky, but it’s true: A writer is allowed to change this rule as they see fit for stylistic purposes. So, look at this example: Will Wendy fly away with Peter? Will John? Will Michael? Will Wendy fly away with Peter? will John? will Michael? Both of these examples are acceptable. The second set of sentences (without the capital letters after the question mark) is a style choice some writers like. If it’s a choice, then it’s not a “rule,” right? Why make it confusing? Either capitalize or don’t — pick a side! Stupid Rule 4. When deciding on the possessive of a common noun that ends in ‘s,’ you can decide either to add an apostrophe plus another ‘s,’ or to simply add an apostrophe to the ‘s’ at the end of the word. Huh? Here’s another of those “you get to choose” rules. I can write either the cyclops’ eye OR the cyclops’s eye — and both are okay. Why make a rule then?! Stupid Rule 5. If the word is a proper noun, then add an apostrophe plus ‘s’ to make the possessive…if you want to. I swear I’m going to scream with these “if-ya-wanna” rules! Now, we’re being told that if the cyclops happens to be named “Jones,” then you could make the plural by adding an apostrophe and another ‘s’ when talking about his eye: the cyclops Jones’s eye. Why would anyone want to make things even more confusing? Stop it already! There are other stupid rules that I’m sure you can think of…if-ya-wanna. 
Here’s your choice of my free guides: writing, personal improvement, or the environment. DRM is an award-winning author and the publisher of What to do About…Everything, and Boomer: Unfiltered. She writes in science, writing, mental health, and the environment on A Writer’s Mind.
https://medium.com/what-to-do-about-everything/stupid-punctuation-rules-5e1b095f60d1
[]
2020-09-06 06:13:51.875000+00:00
['Work', 'Humor', 'Writing Tips', 'Grammar', 'Writing']
Rick and Morty writer’s room
I wanted to figure out what the process of writing a Rick and Morty episode looks like, and here's the information I could assemble. They have 3 boards in the writer's room: 1. Brainstorming board They start by brainstorming episode ideas and writing them down on the board. These are general high-concept story ideas that look like this: 2. Story Circle board Then they work on a rough story outline. They use Dan Harmon's story circle, which is a simplified version of Campbell's Hero's Journey, and looks like this: You can read more about it in the awesome series of articles he wrote about story structure; here you can read about the process of breaking a story in more detail, and here Dan explains this process using a Community episode as an example. Sometimes, there are several circles for different characters or storylines. If you look closely, you can see that they also have a printout of Campbell's monomyth on their board. Here's what the breakdown of the "Close Rick-Counters" episode looks like:
https://medium.com/fictionhub/rick-and-morty-writers-room-c2b79d6fe43c
['Ray Alez']
2016-07-17 01:52:07.539000+00:00
['Comedy', 'Television', 'Rick And Morty', 'Cartoon', 'Writing']
Develop and Deploy a Scalable RESTful API using Node.js & Mongo.
Develop and Deploy a Scalable RESTful API using Node.js & Mongo. Create a full and robust API with Get, Post, Put, and Delete capabilities.

Table of Contents:
1. Introduction to APIs
2. Getting Software Installed
3. Creating a Remote Mongo Database (AWS)
4. Creating the Node Server
5. Deploying your API (Heroku)

1. Introduction to APIs:

What is an API? An API, or Application Programming Interface, can be simply defined as an interaction between various software components. When a user clicks a button to see a list of their Facebook friends, likes on Instagram, or emails within an inbox, data is generally being exchanged in one way or another through a web service API. There are a number of different types of web service APIs, such as:

SOAP — Simple Object Access Protocol
XML-RPC — XML Format for Data Transfer
JSON-RPC — JSON Format for Data Transfer
REST — Representational State Transfer

For the purposes of this article, we will be focusing specifically on the utilization of REST as an architectural principle.

How do APIs work? APIs function as methods allowing for the transfer of data from the client side to an API server, which interacts with a database of some sort. Data is generally retrieved based on a set of conditions, and then returned back to the client. Image by Author. One of the most commonly used examples for an API involves the stock market: companies and daily prices. While keeping the diagram above in mind, a user (client) may request the price history data for a particular company. The client side would make an API call specifying a specific company, and the API server would retrieve that information from the database and return it to the user. Note that the user never interacts with the database, only the API server.

What is an API Call? An API call is an action in which an endpoint of a server is specified and requested, and the server responds with the requested content. Take, for example, the process of logging into a social media website. A user would fill out their login information and then click "login". Upon clicking this button, a number of API calls are made to authenticate the user and then retrieve the user's data. This in and of itself is an example of an API call. Let us take a look at a simple example of a URL in which stock price data is being requested for a specific company. Image by Author. In the above example, you see the URL request broken down by section. We begin with the HTTPS protocol, followed by the name of a particular domain of interest. Following the best practices of a RESTful API, we then specify the path of the API as well as the version. We then specify the exact endpoint of interest. Notice that the endpoint follows a particular "funnel down" approach in which we narrow down the scope of the data as we go from left to right. We first start with all of the companies, then specifically select Apple as the company of interest, and then specifically request the prices from that one company. This is a classic RESTful architectural pattern. When interacting with an API server, there are four main methods one can use: GET, POST, PUT, and DELETE. For a more detailed explanation, please visit my last article, in which I discuss this in more detail in a Python setting. Image by Author

2. Getting Software Installed:

Installing Node and Mongo: In the following steps, we will be developing a Node.js API using Mongo as our database. Let us go ahead and get started. You can download Node.js directly from their website.
Select the LTS option, which is recommended for most users, and install it locally on your machine. You can confirm the installation by running "node" on your command line.

(base) C:\Users\alkhalifas> node

…and you should see a welcome message from Node.js. If you do not, please re-run the installation of Node. The MongoDB Community Edition, on the other hand, can be downloaded using this link. Select the version appropriate for your OS, and click download and install. You can confirm the installation of Mongo by running either "mongo" to access the client, or "mongod" to start the database server.

(base) C:\Users\alkhalifas> mongo
(base) C:\Users\alkhalifas> mongod

3. Creating a Remote Mongo Database (AWS):

There are a number of remote database options, ranging from scalable relational databases to object databases, depending on the specific use case. For the purposes of this tutorial, we will be deploying our API to an object database known as Mongo. A Mongo database can be used via the CLI we installed in the previous section, or via a GUI. To get started, you will need to navigate to https://www.mongodb.com/. Create a new account by selecting the "Start Free" option on the main page. Once you have signed up for a new account, navigate to the clusters section of the website and select "Create a New Cluster". Image from https://www.mongodb.com/ For the Cloud Provider & Region, select AWS and choose the region closest to you. When selecting a region, be sure to select one with the "Free Tier" option. Image from https://www.mongodb.com/ Keep all other settings such as Cluster Tier & Additional Settings at their defaults. Feel free to change the cluster name. Finally, click "Create Cluster". You will need to wait a few minutes while the cluster gets provisioned. Follow the tutorial that will auto-start at the end of the provisioning. This will walk you through creating the cluster's credentials. Be sure to note the username, password, and collection. We will need those in a few minutes. Back on the main cluster page, find your new cluster and click the "Connect" button: Image from https://www.mongodb.com/ A new menu will appear with three options. Select the option called "Connect to MongoDB Compass". Image from https://www.mongodb.com/ Clicking this option will bring up another menu consisting of two steps. Step 1 is to download Compass, which is a GUI allowing you to interact with the database. The alternative to this is using the CLI we installed in the previous section; however, it is highly recommended that you use the GUI. Step 2 is to copy the connection string. Be sure to substitute <password> with the actual password you created in the tutorial. Save this connection string; we will need it in a few moments. With Compass installed, you can now open the GUI, connect to your database using the connection string, and create new data for your database.

4. Creating the Node Server:

Getting Started with Node.js: We can get started by creating a new directory and then initializing npm (which was installed with Node). You will be asked to fill out a few items for the project such as the name, author, etc. These are completely optional, but recommended as best practice. Keep most items at their default values.

(base) C:\Users\alkhalifas> mkdir node-api-books
(base) C:\Users\alkhalifas> cd node-api-books
(base) C:\Users\alkhalifas\node-api-books> npm init

At this point, the project is created and ready for development.
I would recommend opening the directory with your favorite IDE, such as IntelliJ, WebStorm, or VS Code — all will function the same. We will be creating a few files within the project, each of which serves a very specific function. Theoretically, we could group the contents of all of these files into server.js; however, decoupling the contents based on function is considered best practice when developing a Node API, which is what we will do here today. We begin with server.js, which is the main file any incoming request will interact with. It then hands the request to the controller, which determines the specific function to carry out based on the endpoint and calls the corresponding service. The service then communicates with the DAO (Data Access Object), which carries out a specific function in conjunction with the model and schema of our database — both of which are governed using the mongoose library. Image by Author To get started, let us go ahead and install the two libraries we will need: (1) Express to handle the backend web application framework of the API, and (2) mongoose to help with the database interaction with Mongo. npm install express --save npm install mongoose --save server.js The server.js file acts as the main ‘gatekeeper’ for all API calls. There are three main functions we will set up within this file: manage access for incoming calls, guide traffic to the controller, and connect to the Mongo database. Managing access is commonly done through what is known as a CORS policy. This can be configured using the CORS library, which allows you to specify the domains that you do and do not wish to grant access to. For the purposes of this tutorial, we will grant access to all domains by using “*” in the code below. app.use(function (req, res, next) { res.header('Access-Control-Allow-Origin', '*'); res.header('Access-Control-Allow-Headers', 'Content-Type, X-Requested-With, Origin'); res.header('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS'); next(); }); Once access is determined and granted, the file then passes the API request to the controller, which determines the type of request from the endpoint. In order to access the controller, we must ‘require’ it. require("./controllers/book-controller")(app); Finally, we will connect to the remote Mongo database using the mongoose library. Paste the connection string from the previous section that was copied from the MongoDB website. mongoose.connect('<connection string>', {useNewUrlParser: true, useUnifiedTopology: true}); book-controller.js The controller is responsible for outlining the endpoints of your API that users can navigate to. For example, one particular endpoint we can create is www.website.com/api/v1/books, which should return a list of all books. app.get('/api/v1/books/', (req, res) => bookService.findAllBooks() .then(allBooks => res.json(allBooks))); Perhaps we could even narrow the scope and return a single book by specifying the book’s ID. For that, we could use an endpoint such as www.website.com/api/v1/books/123, in which 123 is the book’s ID or primary key. We could also set up an endpoint to query by author. app.get('/api/v1/books/:bid', (req, res) => bookService.findBooksById(req.params['bid']) .then(book => res.json(book))); app.get('/api/v1/author/:aname', (req, res) => bookService.findBookByAuthor(req.params['aname']) .then(books => res.json(books))); In addition to retrieving values, we can also specify other CRUD operations here, such as deleting, posting, and updating our database entries.
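Putting those three responsibilities together, a minimal server.js could look like the sketch below. The port (3000) matches the localhost URL used later in the article; the file layout and the placeholder connection string follow the description above, and anything beyond that is an assumption.

```javascript
// server.js — a minimal sketch assembling the pieces described above.
const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.json()); // parse JSON request bodies for POST/PUT

// 1. Manage access for incoming calls (permissive CORS policy).
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', 'Content-Type, X-Requested-With, Origin');
  res.header('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS');
  next();
});

// 2. Guide traffic to the controller, which registers the /api/v1/... routes.
require('./controllers/book-controller')(app);

// 3. Connect to the remote Mongo database (placeholder connection string).
mongoose.connect('<connection string>', { useNewUrlParser: true, useUnifiedTopology: true });

// Start listening on port 3000.
app.listen(3000, () => console.log('API listening on port 3000'));
```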
app.delete('/api/v1/books/:bid', (req, res) => bookService.deleteBookById(req.params['bid']) .then(status => res.send(status))); app.post('/api/v1/books/', (req, res) => bookService.addNewBook(req.body) .then(newBook => res.json(newBook))); app.put('/api/v1/books/:bid', (req, res) => bookService.updateBook(req.params['bid'], req.body) .then(status => res.send(status))); book-service.js The book-service file is responsible for connecting the book-controller to the book-dao (Data Access Object). Within this file we simply create a connection by creating a new constant and wiring it to the corresponding DAO function. In the case of finding all books: const findAllBooks = () => bookDao.findAllBooks() book-dao.js The book-dao file is responsible for connecting the data access object to the database object’s model. It is here that we can use Mongoose operations such as find() and findById(id). For example, we can find all book records using: const findAllBooks = () => bookModel.find() book-schema.js The book-model file is responsible for handling the model and its association with the predefined schema in the file book-schema. One of the benefits of defining a schema and connecting it to the model is ensuring that all entries are identical in their format. Within this file we define the attributes of our Book object: a book has a title and an author of type String, a publication date of type Date, an edition in the form of an integer, and finally a type which must be one of three available String options. const bookSchema = mongoose.Schema({ title: String, author: String, pubDate: Date, edition: Number, type: { type: String, enum: ['HARD_COVER', 'SOFT_COVER', 'E_BOOK']}, }, {collection: 'books'}); Running the Node Server: Depending on your IDE of choice, there may be built-in capabilities to run your server directly from the menu bar. If not, you can always run it via the command line. Navigate to the folder containing the server.js file. (base) C:\Users\alkhalifas\node-api-books> node server.js In your web browser, navigate to http://localhost:3000/api/v1/books, which should now display a list of books in JSON format. Image belongs to Author. With that, the server is now ready for deployment on Heroku. Go ahead and add and commit all your code thus far to GitHub. 5. Deploying your API to Heroku There are many platforms where one can deploy an API; one of them is Heroku. If you have not done so already, navigate to www.heroku.com and create a new free account. Within the apps menu, click on “New” and then “Create new app”. Image from https://www.heroku.com/ Assign the application a name and region, and click “create”. You will then be guided to the application’s menu. Image from https://www.heroku.com/ Within the application, navigate to the “Deploy” section. Image from https://www.heroku.com/ Next, select your deployment method of choice. You are free to deploy your API via the Heroku CLI or via the GitHub integration. Image from https://www.heroku.com/ Once the application has been deployed, click on the “Open App” option at the top of the menu and you will be directed to the API, where you can now navigate to the endpoints we prepared. Conclusion Within this article we covered the basic concepts of RESTful APIs, including their characteristics, structure, and architecture. We then provisioned and deployed a Mongo database using MongoDB Atlas and AWS.
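The article shows the schema itself but not the model file that binds it to Mongoose. Below is a minimal sketch of that missing piece; the file names follow the article's description, while the model name and export style are assumptions.

```javascript
// book-model.js — a sketch of the model definition implied by the article.
// It assumes book-schema.js ends with `module.exports = bookSchema;`.
const mongoose = require('mongoose');
const bookSchema = require('./book-schema');

// Bind the schema to a model; documents land in the 'books' collection
// because the schema was declared with {collection: 'books'}.
const bookModel = mongoose.model('BookModel', bookSchema);

module.exports = bookModel;
```

book-dao.js would then simply require('./book-model') and call Mongoose operations on it, such as the bookModel.find() one-liner shown above.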
Next, we prepared a best-practice Node server with full CRUD operations to manage our database of books. Finally, we deployed the server on Heroku, allowing us to access our API from anywhere. The code for this project can be cloned from my GitHub account.
https://towardsdatascience.com/develop-and-deploy-a-scalable-restful-api-using-nodejs-mongo-232ad79e0f6c
['Saleh Alkhalifa']
2020-12-27 15:47:56.058000+00:00
['Nodejs', 'Heroku', 'Mongodb', 'AWS', 'API']
Yes, There’s Still Money in the Writing Industry
Making Progress In the meeting I mentioned earlier, my elders attempted to convince me to focus my efforts on attaining a creative writing degree. Teachers really do love university. I considered it for some time, but on reflection, it would’ve been a bad idea. Now, I’m not going to sit here and try to convince you that writing degrees are pointless. I don’t have one, and therefore I don’t know. But what I do have is a long list of credentials and publications that have proven to be far more valuable than a degree would’ve been. The truth is, in many cases, employers of creative industries don’t care about your degree. At Mind Cafe, we had forty-five applicants for an internship opportunity we opened last summer. Plenty of people attempted to sell themselves with their marketing degree, but I wasn’t particularly interested in that. I wanted to know where they’d worked, what they’d achieved, and how they would improve my company’s operations. And most employers want to know the same thing. It’s the same in the world of freelance writing. A degree doesn’t demonstrate excellent writing abilities. It demonstrates other qualities, like organization, a strong work ethic, and the capacity to process/understand information. All valuable skills that have little to do with the written word. How, then, should a beginner writer go about proving themselves to those they seek to receive payments from? Well, the proof is in the pudding. Or rather, the publication. A publication is a collection of articles that are displayed to a particular audience. Different publications have different sized audiences. Mind Cafe, my own publication, receives a couple of million views each month. Others, like HuffPost and The Atlantic, have audiences several times larger than ours. Each publication has its own set of rules. There’s a minimum bar of entry. If your writing meets a particular standard, that publication will be happy to feature your work to its vast expanse of readers. If a writer has been featured in The Guardian, VICE, and the New York Times, that says a lot about their skills as an author. It communicates value. That makes a potential employer’s decision-making process a lot easier, since they can instantly gauge how good that individual’s writing is likely to be and therefore how much it's worth to them. A degree doesn’t always serve this purpose. For that reason, my advice to any aspiring writer would be to forget about standardized qualifications, at least for a little while, and to focus on having their work published by reputable sources. Doing so isn’t easy, but it’ll work wonders when it comes to finding work. The day I was published in Thrive Global was the day my rate leaped from $0.10 per word to $0.12 per word. The day Mind Cafe hit a million monthly readers and started publishing people like Nir Eyal, Brianna Wiest, and Jeff Goins was the day that rate increased to $1 per word. What constitutes a credential will differ from industry to industry. In some spheres, a qualification is a credential. In others, gross annual turnover is a credential. In the writing industry, it’s all about where you’ve been published.
https://medium.com/the-post-grad-survival-guide/yes-theres-still-money-in-the-writing-industry-a66e1da16844
['Adrian Drew']
2020-12-01 19:22:40.864000+00:00
['Work', 'Freelancing', 'Advice', 'Writing Tips', 'Writing']
What is Top 3 and How to Write For Us?
Top 3 is a new publication where Medium writers support other Medium writers by promoting each other’s work. Medium members are encouraged to post three stories from other writers that they enjoyed reading. Take a look at some of our writers’ stories to get a better idea of how it works: Write for us! If you are interested in writing for us, the process is very easy. Simply ask to join by commenting on this article. Submission Guidelines We aim to keep everything as simple as possible with this publication, but there are a few basic rules we ask you to follow.
https://medium.com/top-3/what-is-top-3-and-how-to-write-for-us-9031305b502e
['Daryl Bruce']
2019-08-26 21:23:25.160000+00:00
['Medium', 'Publication', 'Community', 'Writer', 'Writing']
Do Your Vue Components Communicate Correctly?
Do Your Vue Components Communicate Correctly? Seven patterns, pick the right one Photo by Hanny Naibaho on Unsplash As your application matures, there will be increased communication between your components, so it’s crucial to pick the right communication pattern. Picking the right pattern not only simplifies debugging but also aids rapid feature development, as patterns are repeated and the codebase becomes predictable. In this article, we cover seven communication patterns to add to your arsenal when passing information between your components.
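For a concrete point of reference, the most common of these patterns is props down, events up between a parent and a child. A minimal Vue 2-style sketch follows; the component and prop names are illustrative, not taken from the article.

```javascript
// Child component: receives data via a prop, reports changes via an emitted event.
const CounterButton = {
  props: { count: { type: Number, required: true } },
  template: '<button @click="$emit(\'increment\')">Clicked {{ count }} times</button>',
};

// Parent: passes the prop down and listens for the event coming back up.
new Vue({
  el: '#app',
  components: { CounterButton },
  data: () => ({ total: 0 }),
  template: '<counter-button :count="total" @increment="total++"></counter-button>',
});
```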
https://medium.com/better-programming/do-your-vue-components-communicate-correctly-9239c30cc495
['Seifeldin Mahjoub']
2020-03-13 16:29:06.758000+00:00
['Programming', 'JavaScript', 'Vuejs', 'Front End Development', 'Nodejs']
How Did I Get Started With Machine Learning?
How Did I Get Started With Machine Learning? Moeedlodhi · Sep 12 · 5 min read My experience of how I got started and gradually learned the basics. Photo by Kevin Ku on Unsplash “The best way to learn Machine Learning is by DOING IT”. If you understand what I mean by this statement, then there is probably no need to go through the rest of the article. When I was starting, I didn’t know where to begin. Too much information, too many courses, and just too many varying opinions on what to do and what NOT to do. I will be honest: when I googled “How to start learning machine learning” (or something like that) for the first time back in 2019, I was bombarded with a plethora of varying opinions and different ways to get started. Some said to learn R, some said to start with Python, others recommended getting a Master’s degree, while a few downright told me to take a different career path. Yea, I know, varying opinions. All of this led to information overload and left me more confused than ever about where to begin. BUT I knew I had to start somewhere, and I did. I made a ton of mistakes and still make them to this day, but mistakes are good if you learn from them and make yourself better. It’s an iterative process which makes you better along the way. So without further delay, here is my step-by-step guide on how to get started with Machine Learning: Start with Statistics Photo by Stephen Dawson on Unsplash Learn statistics. Clear up your concepts around statistics, especially those pertaining to machine learning algorithms. When I was starting out, I thought my job would be confined to “programming” only and that there was no need to dive deep into the mathematics, as I had libraries to take care of that. Clean the data, fit a Linear Regression model, and job well done. Well, I was WRONG. A clear understanding of statistics is a MUST, and I would like to give an example of why. When I was practicing with Linear Regression, I read somewhere that outliers can pose a problem for it, but I did not know how to detect and deal with them. That’s where I found out that the z-score and the Inter-Quartile Range, two very important statistical concepts, are used exactly for that: to detect outliers. In another example, I learned how p-values and the null hypothesis are extremely important for detecting insignificant variables present in our dataset. And this is just scratching the surface. So in short, get good with statistics; the book I used to learn the basic concepts is An Introduction to Statistical Learning. Get comfortable with Python Photo by Hitesh Choudhary on Unsplash Learn Python. Simple as that. There are a ton of resources out there that provide extensive content when it comes to Python. A lot of people recommend R as well, but the thing is, you can’t learn everything, and there is no need to learn different tools which perform the same function. What R can do, Python can do just as well if not better. So instead of dedicating significant time and effort to learning two languages, focus on mastering one, and down the road, when you do get the time, learn R too if you feel like it. A great place to start learning Python is this tutorial. Start practicing on Datasets Photo by Markus Spiske on Unsplash Once you have extensively worked on the first two tips, move on to Kaggle and start implementing what you have learned. I cannot emphasize this point enough. There is no point in learning a concept if you’re not going to implement it.
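For reference, here are the two outlier-detection rules mentioned in the statistics section above, written out explicitly. The |z| > 3 cut-off and the 1.5 multiplier are common conventions rather than fixed requirements.

$$ z_i = \frac{x_i - \bar{x}}{s}, \qquad \text{flag } x_i \text{ as an outlier if } |z_i| > 3 $$

$$ \mathrm{IQR} = Q_3 - Q_1, \qquad \text{flag } x_i \text{ if } x_i < Q_1 - 1.5\,\mathrm{IQR} \ \text{ or } \ x_i > Q_3 + 1.5\,\mathrm{IQR} $$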
Use your newly learned coding skills to clean the data, create wonderful visualizations, and fit machine learning models. Understand the data and the behavior of the model. For example: you know outliers are bad. Well, why are they bad? What effect do outliers have on our model? Be an investigator and be curious, very curious. Recommended datasets to get started with: Classification: Titanic dataset, Iris dataset. Regression: Boston House Pricing, Auto dataset. Get real-life experience Photo by Ian Schneider on Unsplash So you now know the basics of statistics, you can code, and you have a few Kaggle projects in your portfolio as well. Congratulations, you’ve made it. Good job, well done. BUT I hate to break it to you: you are just getting started. I already went over this point extensively in one of my previous articles, where I mentioned how an internship was an eye-opener for me — where, for the first time, I got the chance to provide real value, in business terms, to real people. Real-life job experience will humble you, pressure you, break you, BUT it will teach you A LOT if you are the type who is hungry to learn. Conclusion I am no different from anyone reading this article: a student of the field of machine learning who is hungry to get better every single day. As of now, I am working as a full-time intern, which is quite humbling. I am still learning, and my journey has just started. NOTE: If you like my writing and the content I post, feel free to share it with your friends and family, as that helps me a lot. Thank you :)
https://medium.com/swlh/how-did-i-learn-machine-learning-e72eb151afd3
[]
2020-09-21 01:08:49.044000+00:00
['Machine Learning', 'Data Science', 'Technology', 'Careers', 'Artificial Intelligence']
This Anti-Aging Injection Might Actually Work
This Anti-Aging Injection Might Actually Work Stem cells are overhyped as a cure for everything. But they‘re finally showing promise in making elderly people stronger. Illustration by Thomas Hedger Phillip George golfs regularly, works out on a treadmill, and lifts a few weights — “not body building, just toning,” he says. Now in his 70s, the retired plastic surgeon used to get achy if he overdid it at the gym or the golf course, and often popped a few Advil or Tylenol. But George cut way back on the painkillers two years ago, after becoming one of the first patients to try a new tactic to slow aging. As part of a clinical trial led by University of Miami cardiologist Joshua Hare, George got an infusion of millions of someone else’s stem cells, the kind of multi-purpose cells that can form other types of cells. “The pain diminished dramatically,” George says. There are a lot of doctors who promise cures from stem cells. Hare isn’t one of them. At least not yet. Nevertheless, after spending his career investigating what stem cells can do for the body, Hare thinks he’s hit upon a way to reduce frailty, boost the immune system and heart, and maybe even fight off Alzheimer’s: by giving patients large doses of a certain kind of stem cell. His approach hasn’t been proven to work yet, but a review of clinical trials and discussions with scientists suggest that Hare is closer than anyone else to using stem cells to address problems caused by aging. Although stem cells are considered an incredibly promising area of research, the procedures that have been definitively shown to work are ones that have existed for decades, such as bone marrow transplants and similar interventions for cancer and some blood diseases. No new treatments using the cells have been proven to be medically effective. It’s not for a lack of trying: the National Library of Medicine lists about 1,700 ongoing studies involving stem cell treatments. But companies that sell treatments for everything from knee pain to autism and heart disease to Parkinson’s disease do not yet have scientific proof in humans. On top of that, stem cell therapy can be risky when sloppily delivered. Three older women were blinded by a treatment intended to reverse their vision loss, according to a report last year in the New England Journal of Medicine. Many others, including professional athletes and celebrities, have spent tens of thousands of dollars for treatments that have not been shown to be any more effective than placebo. In stepping up its enforcement against unscrupulous clinics last year, the U.S. Food and Drug Administration warned consumers to avoid stem cell therapies that are not FDA-approved, unless they are part of a registered research trial. “You can really consider them to be these miniature drug-delivery factories.” That was the case when Phillip George got his infusion two years ago. He was in the first clinical trial to test these cells as a treatment for aging-related frailty. He says the infusion wasn’t painful, and he doesn’t think he took any kind of a risk, even though the procedure had been tried on only about 100 people before him. George also had known Hare for 11 years, since he was on the hiring team that brought the younger doctor to Miami. So far, so good, George says: “Two years into the process, I don’t see or recognize any negatives from it.” Regeneration Your supply of stem cells naturally falls with age. This makes it harder for the body to repair damage, and can lead to inflammation. 
Inflammation, in turn, undergirds many of the problems we associate with old age, including frailty, heart disease, immune weakness, and Alzheimer’s, says Anthony Oliva, senior scientist at Longeveron, the privately held Miami company that Hare started to bring his ideas to market. Joshua Hare Longeveron uses mesenchymal stem cells, which come from bone marrow. They are known to be involved in regulating and sometimes reducing inflammation, as well as in helping to boost repair mechanisms for blood vessels. They may also prompt the body’s own stem cells to get more active in beneficial ways. There are currently 817 studies worldwide investigating the use of mesenchymal stem cells in people, for everything from knee injuries to ulcerative colitis, according to the federal government’s ClinicalTrials.gov. Mesenchymal stem cells are being tested in cats to see if they can reduce feline inflammation. “You can really consider them to be these miniature drug-delivery factories,” Oliva says, describing mesenchymal stem cells as almost miraculous. “They can last for many months inside the recipient. They’re homing to the site of inflammation and damage. They are decreasing inflammation. They’re promoting improvement of the vasculature. They’re stimulating intrinsic stem cells to repair and regenerate over months.” Hare’s research has cataloged how an objective measure of inflammation, the cell-signaling protein TNF-alpha, falls after an infusion of mesenchymal stem cells. In both animals and people, Hare says, the protein level remains lower for six to 12 months. The cells also appear to be safe. “Working in medicine 30 years, I’ve never seen anything this well tolerated,” Hare says. The immune system doesn’t react to these donor stem cells because it essentially doesn’t see them, Oliva says. Most cells in the body express a protein called MHC class II on their surface. The protein acts like a flag to alert the body to something foreign. In an organ transplant, the patient’s MHC class II has to match the donor’s to reduce chances of rejection. But mesenchymal stem cells don’t have these flags. Longeveron uses cells from young donors, who are paid a small stipend and have been screened three times for diseases like HIV, hepatitis, and Zika, Oliva says. Their mesenchymal stem cells are extracted in a far less painful way than you might imagine, with just local anesthesia, a single needle stick, and a Band-Aid to cover the spot. Longeveron then cultures the cells—putting them in a fluid to make them proliferate—before infusing them into a patient. For now, one donor can supply tens of patients, but Hare hopes to improve the culturing process so that one donor can provide enough cells for several hundred doses. If the company’s trials persuade regulators to approve the therapy, “we’re going to have a very large market and a very significant need,” he says. A mysterious result For its frailty research, Longeveron is now running a phase 2b trial — roughly the midpoint of the clinical trial process. It has tested its cells in about 20 out of 120 patients so far, says Suzanne Page, the chief operating officer. Longeveron also has an early trial comparing the donor cells versus a placebo in a small number of patients with Alzheimer’s disease; another trial looking at frail patients’ resistance to the flu virus after receiving the donor cells compared to placebo; and a fourth looking at metabolic syndrome. 
Hare is also conducting a study checking back with patients who received stem cell treatments years ago for heart conditions. To treat organ damage, like a heart attack that leaves a scar in the heart muscle, stem cells probably have to be injected directly into the organ, Hare says. But for a systemic, aging-related condition like frailty, the cells can be infused into the bloodstream because they tend to “traffic to inflamed areas,” he says. If further trials confirm Longeveron’s approach, “we’re going to have a very large market,” Hare says. Longeveron’s first published study on frailty showed that patients such as George, who received 100 million mesenchymal stem cells, showed “remarkable improvements in physical performance measures and inflammatory biomarkers.” The average age of the patients was 75.5. Oddly, the patients who received a double dose of 200 million donor cells did not see any benefit at all. And that result fuels the concerns that molecular biologist Andrew Mendelsohn has about the research. Mesenchymal stem cells have been the subject of hundreds of clinical trials, notes Mendelsohn, who wrote a commentary about Longeveron’s experiment in the journal Rejuvenation Research. Some studies show improvements, some don’t. When the studies are repeated, their results are inconsistent. “If you do the experiment in mice, you see all kinds of great results. When you get to people you get all kinds of mixed results,” says Mendelsohn, director of molecular biology for the Panorama Research Institute, a privately owned biomedical research and development holding company in Sunnyvale, California. Mendelsohn says he thinks Hare’s research has promise. But he won’t be convinced, he says, until he sees clear evidence in repeated clinical trials. “I have a strong belief that eventually it can be engineered to work out, even if it’s not nearly as effective as we might like it to be,” Mendelsohn says. Paul Knoepfler, a biologist and stem cell scientist at the UC Davis School of Medicine, says he too is troubled by the inconsistent results of stem cell studies, and by the therapy’s relatively modest benefits. Other research suggests to him that donor mesenchymal stem cells don’t stay in the body for very long, contrary to what Oliva says about how they last for months. Knoepfler is not sure how patients could have a long-term benefit if the body clears out the cells within a week or so. Furthermore, frailty and other age-related processes are complicated. They’re not caused by one trigger and they probably can’t be fixed by one type of treatment, Knoepfler says. Even so, he considers Hare’s work worthwhile and says it’s being conducted responsibly. “I don’t see any red flags,” Knoepfler says. “It’s just sort of early days.” Phillip George acknowledges that the benefits he saw might have been caused by a placebo effect. But he’s still signed up to get a second infusion of the cells, part of a test of whether it can be given multiple times without harm. “I’m not a Pollyanna. I’m not jumping at the first of everything,” George says. But if he has the opportunity to benefit from a potentially exciting new treatment, if there’s no apparent downside, and if he can benefit science in the process, then why not try?
https://medium.com/neodotlife/longeveron-stem-cells-for-frailty-aging-d94d8c9f2de6
['Karen Weintraub']
2020-10-22 23:06:16.688000+00:00
['Technology', 'Longevity', 'Aging', 'Stem Cell Research', 'Biotechnology']
Is a Traditional Publisher For You?
image: Freddie Marriage for Unsplash Even in an age of self-publishing and Amazon publishing gone mad, my editing clients still have the same question: What are my chances of having my book published by a traditional publisher? I have to be honest with you here. It looks hard, and it’s harder than it looks. Everyone who has written a book feels as though they’ve done their job and now it’s time to sit back and wait for the bidding wars as the book is auctioned off to a major publisher. Reality is very different. There are three ways of getting published in the conventional sense of the word (although stay tuned: more are emerging): traditional publishing, self-publishing, and using a subsidy publisher; and each uses a different method. How does the traditional model work? There are a couple of options: Submitting your work directly to a publisher. This is known as “over the transom,” since manuscripts used to be tossed into an editor’s office in precisely that manner. There are resources available to help you, notably Penguin/Random House’s Writers Market and Information Today’s Literary Market Place as well as Poets & Writers. These sites will tell you exactly what each publisher is looking for, and what each publisher wants in the way of contact (query letter, book proposal, entire manuscript, etc.). Sending your work to a publisher through a referral. While an agent or a publisher might be willing to accept a recommendation from someone they know and respect (author, MFA professor, the editor of a literary journal, etc.), it is not proper etiquette for you to contact someone who doesn’t already know you in order to ask for a referral unless such a person has already indicated interest in your work and willingness to help. Having your book accepted by a literary agent who will then submit it to publishers on your behalf and for a percentage of the book’s sales. Some publishers will only work with agents. Why? Because it makes their job easier! The agent can match projects with specific editors, decide if something is publishable as is or if it needs more work, and provide some feedback to the author. Publishers, on the other hand, will rarely offer feedback. Frustrating as this is, it’s simply not practicable to tell hundreds of authors a day why their manuscripts are being rejected. In general, what you will receive is a form letter (these days, an email) telling you your manuscript does not meet the publisher’s current needs, and wishing you the best of luck elsewhere. Most of the time, you won’t know if it was rejected because it wasn’t “good enough” — whatever that might mean — or merely because the editor was having a bad day.
The end result, sadly, is the same. Perseverance pays off. So does working and reworking your manuscript. Sometimes putting it aside for a year (as it makes the rounds of publishers and gets rejected every few months) can be useful: if you look at it again with fresh eyes, there’s a good chance you’ll find ways of improving it. Many, many well-known authors have known rejection. (If you don’t believe this to be true, take a look at André Bernard’s wonderful Rotten Rejections, filled with letters editors and publishers wish they’d never sent.) And the odds are stacked against today’s author even more than in the past: no longer can an acquisitions editor make the decision to purchase a manuscript alone. These days, a whole team — including representatives of the publisher’s marketing department — decides on the project’s financial viability. A rejection may therefore have nothing to do with the literary value of any work. I wish the news were better. I wish all of my clients could get published easily and painlessly. I just want everyone to be prepared for a long journey and a possible negative outcome. It has been said that only about 400 people in the United States make their livings entirely on the proceeds from their novels. In other words — no matter how good you are, don’t quit your day job quite yet!
https://jeannettedebeauvoir.medium.com/is-a-traditional-publisher-for-you-325d02c494e7
[]
2020-02-17 15:17:16.292000+00:00
['Publishing', 'Traditional Publishing', 'Literary Agents', 'Authors', 'Writing']
A look at the cons of Icons
Needless to say, icons continue to be in use to easily communicate the meaning and to serve the aesthetics of a design. And while it’s true that icons make good targets their use is almost invariably not done right. Here’s a quick test for you- Take a look at these icons from Gmail’s home page and see if you can correctly identify their meaning. Scroll down for the answers (Hey, No cheating!). Now even if you identified all of them correctly you’d agree that not most people can. That’s because our icons have some cons associated with them. Icons are rarely universal Sure, they save space and they are visually pleasing but the biggest problem with icons is that they can convey a thousand different meanings. There are no standards in icons except a few as the likes of home, camera, print, shopping cart and magnifying glass for search. Universal Icons for Home and Search used in Netflix android app Outside of these few instances, you’ll see that Icons are mostly ambiguous I mean just take a look at these icons from Microsoft Outlook toolbar and try to guess what they are supposed to mean (Scroll down for the answer). Ridiculous, right? They all look similar, especially the first two. Icons in a design continue to be ambiguous in their meaning. For abstract functions icons rarely work well. Also sometimes they can have different meanings on different interfaces which is why Icons can be conflicting Notice how Google Chrome and WhatsApp use star icon for bookmarking while Medium uses a flag for the same. Thus we see that there’s no standard icon for some actions over all the interfaces. Google Chrome and WhatsApp both use star icon for bookmarking Medium uses a flag icon for bookmarking But hold on this gets even more confusing where you’ll see that WordPress has both the star and the flag icons! And yes, you guessed it right, they both represent different actions. WordPress uses both the star and flag icons Here flag icon is used for bookmarking while the star icon is for liking the posts of fellow bloggers. And as we know, when it comes to Instagram the same liking action is represented using a heart icon, not a star… there’s no star. Some may say that no matter what we can always adapt to these differences across various platforms. And sure you may but we still aren’t short of cons regarding icons. Here comes another- Icons are hard to memorize Many studies show that we remember things spatially i.e. we rely on our ‘Positional Memory’. This is the reason we seldom customize the position of apps on our screens. Who can remember what each icon means? Not me. — Don Norman To learn how people remember icons UIE conducted two experiments wherein first, they changed the imagery of the icons in a toolbar but kept them in the same place. Users easily adapted to this change. Next, they kept the original imagery for icons but shuffled their positions on the toolbar. This time though users really struggled with locating the icons and couldn’t complete common tasks… whoever said icons enhance usability. We might not remember the graphical representation of a function but we do remember its location. This article explains this issue with a great example. So then how do we deal with these blemishes of our dear icons? Well, there are two ways as common sense would dictate. First, add a hover text that will reveal the name or function of the icon when the cursor is positioned over it. And while it’s a neat solution which won’t take up space for labels and does the work, it still isn’t perfect. 
Not only does it fail to translate well on touch devices but also the user still has to do the task to figure out the meaning which costs us something called the interaction cost. So here goes the second way, add labels along with the icons. This works because there’s zero interaction cost and the icons are no longer ambiguous. So coming back to our guess games from above, this is what the ‘Icon + Label’ design looks like: Painless, isn’t it? But one may also wonder that if icons are so problematic why don’t we just get rid of them? Or as some like to say — The best icon is a text label. But here’s a catch too. Us humans! We are both emotional and rational beings. A ‘Label only’ design functions very well as it provides rational benefit but it feels cold and less humane and that doesn’t bode well with us. We like the fun, the simplicity and the beauty — the emotional benefit that the icons provide.
https://medium.com/design-bootcamp/a-look-at-the-cons-of-icons-3b6e0909047e
['Darshan Kanade']
2020-12-22 03:07:37.976000+00:00
['UI Design', 'Design', 'UX Design', 'UX', 'Icons']
Use This One Simple Technique to Kill Procrastination
How to Kill Procrastination If your goal is to climb a mountain and you know the why and how of it; and what will be the path,” you will be confident. If you know someone has done it before, you will be even more hopeful, which will drive you further. This level of clarity is useful, but not enough to keep you moving in your day-to-day actions. So ask yourself these two questions: 1. How can not doing something bring you more “pain” than doing it? In my case, I wasn’t taking any step to grow myself, and my pain got severe over time, which forced me to take action. However, if there is no pain and you are in a comfort zone, you can induce it. For instance: How ugly will I appear in the next two years if I don’t exercise daily? How badly will competitors swipe the market share if I don’t adapt to a new business model? How will I pay the EMI for a newly bought land-rover defender if I don’t upgrade my skills or change my job? This technique will work if the pain of no action is more than the pain of action. You can make it effective with the power of visualization. Like when you “vividly see yourself as fatso” or see something scary. Excessive pain may also trigger fear and that might kill your desire, so use it prudently. 2. How can doing something bring you more “pleasure” than not doing it? After the painful kickoff, I aimed to work on a few projects, and one of them was to explore non-fictional writing. Initially, I was checking the responses to my published articles every hour. How engaging was my article? How many readers liked it? It wasn’t much different from people addicted to social media. But I wanted to quit this endless cycle to check. I discovered an activity, (in one of my journal sessions), that made me happier. I created a visual report for my published articles, and looked forward to reading those who performed better. By analyzing article reading data and comparing between them, I gained more pleasure than watching several hits. This cycle of feedback gave me joy, visible progress, and hope to reach my goal. A few other examples to understand it better. What if I use my Netflix watching time to lose my fat, train my muscles, and look leaner? What if I use a new business model and have a monopoly on the market? What if I do a part-time MBA and get promoted to be an area sales manager in my company? Find out what you can relate to with pleasure and visualize it. “Look at the lean and fit version of you, sweating on the beats with the routine at the gym.” This will pull you like a magnet and help you get rid of less pleasurable tasks.
https://medium.com/illumination/use-this-one-simple-technique-to-kill-procrastination-b6338bc95b87
['Kapil Goel']
2020-12-26 07:04:25.374000+00:00
['Goals', 'Self-awareness', 'Self', 'Procrastination', 'Advice']
How to Create a Youtube downloader with Javascript
YouTube is the most famous video streaming website these days, used by a huge share of the world’s internet users, and Android phones even come with YouTube pre-installed by Google. It is arguably Google’s most popular product. Let’s move on to today’s topic: a YouTube downloader in JavaScript. We are going to walk through how to develop a YouTube downloader that downloads any YouTube video or playlist to your computer. We could use any programming language for this — Python, C++, even Java — but I love JavaScript, so I will go with it. Without wasting time, let’s jump into the things we need before we start coding. We need Node.js to run the JavaScript code. If you already have Node.js installed, great — you can move on to the library installation part; if not, just download the latest version of Node.js and install it on Windows, Linux, or whatever operating system you have. Next, we need a JavaScript module named youtube-dl. Copy the command below and paste it into the command prompt. npm i youtube-dl Coding Part Create a youtube.js file and open it in your code editor. Let’s start by importing the youtube-dl library into our script. On the first line we load the fs module into the const variable fs; that module is used for reading and writing video files. On the next line, we load our main module into a const variable for youtube-dl. After loading the required modules, we call the youtubedl() function, passing in the YouTube video URL and the format. Note that we set the format number to 18, which is the 360p MP4 format; if you want an HD format, or any other format you like for YouTube videos, check the list of YouTube format numbers. Let’s move to the next part of the program, downloading the video — check the code below. If you noticed, on line 4 we save the youtubedl() result in the const video variable, and on line 10 we use that video variable to listen for the info event. The next line uses the JavaScript console.log function to print the YouTube video information to the command prompt. Last, we use the fs module to write our file in MP4 format: we use the video.pipe function and fs.createWriteStream together to download the video. video.pipe streams the encoded video data, and the fs write stream saves it as an MP4 file on your PC. Downloading Playlist The first part is really easy to develop; the next challenge our downloader faces is downloading a playlist with youtube-dl. Let’s start developing the JavaScript code that downloads a whole playlist — again we will use the same modules; check the code below. As you can see in the code, we create a function named playlist that takes the parameter “URL”, which will be the playlist URL we pass in. Next, we call the youtubedl() function and store the result in the video variable, and on the next line we define an error handler using the video.on() function (which comes from the youtubedl stream): if our program hits any error during runtime, it calls console.log, reporting that an error occurred and what type of error it was. From lines 14 to 19, we use the video.on() function to grab info about the current video in the playlist and store it in the size variable, and then path.join(__dirname + '/', size + '.mp4') builds the output path, so the file is named after that value. The following lines use the fs stream module to write the playlist videos into the directory in which our JavaScript file is placed.
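Since the article’s embedded code is not reproduced in this text, here is a minimal sketch of the single-video download it describes, based on the youtube-dl npm package’s documented usage; the video URL and output filename are placeholders.

```javascript
// youtube.js — a sketch of the basic download described above.
const fs = require('fs');
const youtubedl = require('youtube-dl');

// Request format 18 (360p MP4); any valid YouTube format number can be used.
const video = youtubedl(
  'https://www.youtube.com/watch?v=VIDEO_ID', // placeholder URL
  ['--format=18'],
  { cwd: __dirname }
);

// Print basic information about the video once it is available.
video.on('info', (info) => {
  console.log('Download started');
  console.log('filename:', info._filename);
  console.log('size:', info.size);
});

// Stream the encoded video data into an MP4 file on disk.
video.pipe(fs.createWriteStream('myvideo.mp4'));
```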
From lines 21 to 31 we are actually printing something on the command line — that’s right, we are making a moving progress bar like the one you see during some software installations, where the bar moves to the right to tell you what percentage of the installation is done; in this case, we use it to show how much of the video data has been downloaded. We use process.stdout, which is built into Node.js, to draw that bar. Getting Video Information We are going to explore the youtube-dl library a little more and write JavaScript code that prints a video’s information to the command prompt. First things first: require("youtube-dl") loads the library in our JS file. On the next line, we use optional arguments (you can ignore them if you want); these arguments can hold your YouTube login details. On the next line we use a new youtube-dl function, getInfo(). That function takes a URL, an options argument, and a callback that indicates an error: if (err) throw err is the JavaScript condition in which we check whether an error occurred — if so, we simply throw it and stop the program; otherwise we continue to the body, which uses the console.log function to print out the video information. Using Proxies: Sometimes a YouTube video is restricted to specific countries, so to bypass that we use the optional proxy argument of the youtubedl() function: just pass the proxy host and port number as an argument along with the YouTube video URL. You will learn more if you check the code below. Downloading Subtitles Subtitles are very useful when you are not a native speaker of a video’s language. YouTube videos often have multiple subtitle tracks added by the uploader, and it would be a shame to download just the video without its subtitles. The youtube-dl module has a function for downloading video subtitles. Let’s break down the code and understand what is happening in it. As usual, the first step is to load the module in the JS file. Next we create a URL const variable holding a YouTube video URL, and then we make an options variable that holds the optional arguments for the call (notice I added a comment after every argument). The format is ttml, but you can set it to vtt; both are subtitle file formats. Next we define the dirname (you can add a directory path here), and in the end we call youtube-dl’s getSubs() function, passing in all the arguments. That’s it — we just created an awesome JavaScript YouTube downloader. You can use this code in a web app, any JavaScript-based app, or on your website. I hope this article helps you someday. Feel free to share your opinion.
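Along the same lines, here is a sketch of the video-information and subtitle examples described above. It assumes the youtube-dl package’s getInfo() and getSubs() helpers; the URL is a placeholder, and the exact option names should be checked against the package’s documentation.

```javascript
const youtubedl = require('youtube-dl');

const url = 'https://www.youtube.com/watch?v=VIDEO_ID'; // placeholder URL

// Print basic information about the video.
youtubedl.getInfo(url, [], (err, info) => {
  if (err) throw err;
  console.log('title:', info.title);
  console.log('id:', info.id);
});

// Download subtitles; per the article, the format can be 'ttml' or 'vtt'.
const options = {
  auto: false,     // skip auto-generated subtitles
  all: false,      // only the requested language
  lang: 'en',      // subtitle language
  format: 'ttml',  // subtitle file format (ttml or vtt), as described in the article
  cwd: __dirname,  // save next to this script
};

youtubedl.getSubs(url, options, (err, files) => {
  if (err) throw err;
  console.log('subtitle files downloaded:', files);
});
```

For the country-restriction case, the same youtubedl() download call can be given an extra argument such as ['--proxy', 'http://HOST:PORT'] alongside the format flag, mirroring the standard youtube-dl proxy option.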
https://medium.com/dev-genius/creating-youtube-downloader-in-javascript-5b7e20215271
['Haider Imtiaz']
2020-11-10 16:25:29.784000+00:00
['JavaScript', 'Development', 'YouTube', 'Nodejs', 'Programming']
Dreaming Of Bugs Right Now? You’re Not Alone
By Elizabeth Gulino PHOTO: VISHAL BHATNAGAR/NURPHOTO/GETTY IMAGES. Vivid, nightmare-ish dreams tend to happen when one is feeling stressed and anxious — two very common emotions as we navigate life in a pandemic. And the most common theme of the vivid dreams we’ve been having? Bug attacks. Deirdre Barrett, PhD, a Harvard psychologist and dream researcher who launched an international survey about pandemic-related dreams, told WebMD that the most common dreams reported just so happen to include “swarms of wasps, flies, and gnats to armies of roaches and wiggly worms.” Apparently bug attacks are “by far the most common metaphor” out of 8,000 responses. “I think part of it traces to the slang use of the word; we say we have a bug to mean we have a virus,” Barrett continued. “Dreams can be kind of pun-like in using visual images for words.” Vivid dreams happen during the rapid eye movement phase of our sleep cycles. If your ZZZs end up being disturbed because of stress and anxiety, or even if you find yourself sleeping more because of any quarantine-related schedule changes, you may be spending more time in REM sleep, leading to those intense dreams. “One of the hard parts about dreams is that they don’t speak our language,” Lisa Harrison, MA, psychotherapist in private practice and a candidate training as a Jungian analyst, previously told Refinery29. Being chased by something in a dream — even by bugs — can mean you’re avoiding something. “The degree to which you are being chased gives a clear indication of the degree to which you are avoiding an issue that needs to be addressed,” psychotherapist Matthew Bowes previously told Refinery29. “Perhaps you’ve been risk-avoidant, or you’ve held back on confronting something which is uncomfortable or frightening.” When asked specifically about bug dreams, Bowes revealed that being overwhelmed by tiny monsters like spiders and worms “may indicate that little irritations or worries are creeping up on you.” To stop these creepy crawlies from appearing in your dreams each night, it might be good for you to confront whatever is in your life that’s annoying you right now — no matter how small. After all, waking up in a cold sweat after being surrounded by swarms of bugs might just be worse.
https://medium.com/refinery29/dreaming-of-bugs-right-now-youre-not-alone-21d2adf7aa54
[]
2020-07-19 15:36:00.840000+00:00
['Sleep', 'Metaphor', 'Psychology', 'Dreams', 'Dream Meanings']
Running Cassandra in Kubernetes: challenges and solutions
We regularly deal with the Apache Cassandra and the need to operate it as a part of a Kubernetes-based infrastructure (for example, for Jaeger installations). In this article, we will share our vision of the necessary steps, criteria, and existing solutions for migrating Cassandra to K8s. “The one who can control a woman can cope with the country as well” What is Cassandra? Cassandra is a distributed NoSQL DBMS designed to handle large amounts of data. It provides high availability with no single point of failure. The project hardly needs a detailed introduction, so I will only remind you of those main features of Cassandra that are important in the context of this article. Cassandra is written in Java. The topology of Cassandra includes several layers: Node — the single deployed instance of Cassandra; Rack — the collection of Cassandra’s instances grouped by some attribute and located in the same data center; Datacenter — the combination of all Racks located in the same data center; Cluster — the collection of all Datacenters. Cassandra uses an IP address to identify a node. Cassandra stores part of the data in RAM to speed up reading and writing. And now, let’s get to the actual migration to Kubernetes. Migration check-list What we’re not going to discuss here is the reason for migrating Cassandra to Kubernetes. For us as a company maintaining a lot of K8s clusters, it’s all about managing it in a more convenient way. So, we’d rather focus on what we need to make this transition possible and what tools can help us with this. 1. Data storage As we mentioned above, Cassandra stores some of its data in RAM, in the form of MemTable. There is another part of the data that is saved to disk as SSTable. Cassandra also has a unique instance called Commit Log that keeps records of all transactions and is stored on disk. Scheme of write transactions in Cassandra In Kubernetes, there is a PersistentVolume for storing data. You can use this mechanism effortlessly since it is already well developed. We will provide a personal PersistentVolume to every pod that contains Cassandra We wish to emphasize that Cassandra itself has built-in mechanisms for data replication. Therefore, there is no need to use distributed data storage systems like Ceph or GlusterFS in the case of a Cassandra cluster comprising a large number of nodes. In this situation, the most appropriate solution is to store data on the node’s disk via local persistent volumes or by mounting hostPath . On the other hand, what if you want to create a separate environment for developers for each feature branch? In this case, the correct approach is to create a single Cassandra node and store data in the distributed storage (i.e., Ceph and GlusterFS become an option). This way, you can ensure the safety of the test data even if one of the nodes of the Kubernetes cluster fails. 2. Monitoring We consider Prometheus the best tool available for event monitoring in Kubernetes. Does Cassandra support exporters for Prometheus metrics? And (that is even more important) does it support matching dashboards for Grafana? An example of Grafana charts for Cassandra There are only two exporters: jmx_exporter and cassandra_exporter. We prefer the first one, because: JMX Exporter is growing and developing steadily, whereas Cassandra Exporter was unable to gain the support of the community (Cassandra Exporter still does not support most of Cassandra’s versions). 
You can run JMX Exporter as a javaagent by adding the following flag: -javaagent:<plugin-dir-name>/cassandra-exporter.jar=--listen=:9180 . JMX Exporter has a neat dashboard, incompatible with Cassandra Exporter. 3. Choosing Kubernetes primitives Let’s try to convert the structure of the Cassandra cluster (outlined above) to Kubernetes resources: Cassandra Node → Pod Cassandra Rack → StatefulSet Cassandra Datacenter → pool of StatefulSets Cassandra Cluster →??? It looks like we need some additional Kubernetes resource that corresponds to the Cassandra Cluster. Thanks to the Kubernetes mechanism for defining custom resources (CRDs), we can easily create the missing resource. Declaring custom resources for logs and alerts But the Custom Resource isn’t enough — you need a corresponding controller. You even may have to use a Kubernetes operator. 4. Identifying pods Previously we decided that each Cassandra node corresponds to one pod in Kubernetes. As you know, Cassandra identifies pods by their IP addresses. However, the IP address of the pod will be different each time. It looks like that after each pod deletion, you will have to add a new node to the Cassandra cluster. Still, there is a way out… more like several of them: 1. We can track pods by host identifiers (UUIDs that explicitly identify Cassandra instances) or by IP addresses and save all that info into some structure/table. However, this approach has two main disadvantages: Risk of racing conditions in the case of simultaneous failure of two or more pods. After the redeployment, Cassandra nodes will attempt to get an IP address from the table at the same time, competing for the same resource. It will not be possible to identify the Cassandra node that has lost its data. 2. The second solution looks like a small, innocent hack: we can create a Service with the ClusterIP for each Cassandra node. Such an approach has the following disadvantages: If the Cassandra cluster has a considerable amount of nodes, we will have to create a large number of Services. The ClusterIP access mode is based on iptables. This could be a problem if the Cassandra cluster has a large number of nodes (1000 or even 100). The IPVS-based in-cluster load balancing can solve it, though. 3. The third solution is to use a node network for Cassandra nodes in place of a dedicated pod network by setting hostNetwork: true . This approach has certain limitations: Replacing nodes. The new node has to have the same IP address as the previous one (this behavior is almost impossible to implement in the clouds such as AWS, GCP); By using the network of cluster nodes, we start to compete for network resources. Therefore, it will be rather difficult to deploy more than one pod containing Cassandra to the same cluster node. 5. Backups What if we want to backup all data from some Cassandra node according to the schedule? Kubernetes provides an excellent tool for such tasks, CronJob. However, Cassandra’s specific nature prevents us from doing this. Let me remind you that Cassandra stores parts of the data in memory. To do a full backup, you have to flush in-memory data (Memtables) to the disk (SSTables). In Cassandra’s terminology, that process is called a “node drain”: it makes a Cassandra node stop receiving connections and become unreachable — an unwanted behaviour in most cases. Then the node backs up the data (by saving a snapshot) and saves the scheme (keyspace). However, there is one problem: the backup itself isn’t sufficient. 
We also have to preserve data identifiers (in the form of dedicated tokens) for which the Cassandra node is responsible. Tokens help us to find out what data belongs to which Cassandra node Here you may find an example of a Google-made script for backing up files from Cassandra cluster in Kubernetes. This script is good except for it doesn’t flush data to the node before making a snapshot. That is, the backup is performed not for a current but a little earlier state. However, this approach preserves the node’s availability, which seems quite logical and beneficial. An example of a bash script for backing up files on a single Cassandra node: set -eu if [[ -z "$1" ]]; then info "Please provide a keyspace" exit 1 fi KEYSPACE="$1" result=$(nodetool snapshot "${KEYSPACE}") if [[ $? -ne 0 ]]; then echo "Error while making snapshot" exit 1 fi timestamp=$(echo "$result" | awk '/Snapshot directory: / { print $3 }') mkdir -p /tmp/backup for path in $(find "/var/lib/cassandra/data/${KEYSPACE}" -name $timestamp); do table=$(echo "${path}" | awk -F "[/-]" '{print $7}') mkdir /tmp/backup/$table mv $path /tmp/backup/$table done tar -zcf /tmp/backup.tar.gz -C /tmp/backup . nodetool clearsnapshot "${KEYSPACE}" Ready-made Cassandra solutions for Kubernetes What tools help to deploy Cassandra to the Kubernetes cluster? Which of those are the most suitable for the given requirements? 1. StatefulSet or Helm-chart-based solutions Using standard StatefulSets functionality for deploying Cassandra cluster is a great idea. By using Helm charts and Go templates, you can provide the user with a flexible interface for deploying Cassandra. Normally, this method works just fine — until something unexpected happens (e.g., failure of a node). The standard Kubernetes tools cannot handle all the above nuances. Also, this approach isn’t scalable enough for more complicated uses: replacing nodes, backing up data, restoring, monitoring, etc. Examples: Both charts are equally good but suffer from the issues described above. 2. Solutions based on Kubernetes Operator These tools are preferable since they provide extensive cluster management capabilities. Here is the general pattern for designing an operator for Cassandra (as well as for any other DBMS): Sidecar <-> Controller <-> CRD. Scheme of managing nodes in the correctly designed Cassandra operator Let’s review the available operators: 2.1. Cassandra-operator by instaclustr GitHub Maturity: Alpha License: Apache 2.0 Written in: Java A surprisingly promising and rapidly developing tool made by the company that offers Cassandra managed deployments. It uses a sidecar container that gets commands via HTTP. Cassandra-operator is written in Java and sometimes lacks advanced functionality of the client-go library. The operator doesn’t support different Racks for the same Datacenter. However, Cassandra-operator has several pros such as support for monitoring, high-level cluster management via CRD, or even detailed instructions on making a backup. 2.2. Navigator by Jetstack GitHub Maturity: Alpha License: Apache 2.0 Written in: Go Navigator is a Kubernetes extension for implementing DB-as-a-Service. Currently, it supports Elasticsearch and Cassandra databases. The operator possesses several intriguing concepts, such as access control to the database via RBAC (for this, a separate navigator-apiserver is started). Overall, this project is worth a closer look. Unfortunately, the latest commit was over 18 months ago, and that fact severely reduces its potential. 2.3. 
Cassandra-operator by vgkowski GitHub Maturity: Alpha License: Apache 2.0 Written in: Go We haven't seriously considered this operator because the latest commit was over a year ago. The operator is abandoned: the latest supported version of Kubernetes is 1.9. 2.4. Cassandra-operator by Rook GitHub Maturity: Alpha License: Apache 2.0 Written in: Go This operator isn't developing as rapidly as we would like. It has a well-thought-out CRD structure for managing the cluster and solves the problem of node identification by implementing a Service with a ClusterIP (the hack we've mentioned above), but that's all for now. It doesn't support monitoring or backups out of the box (though we are currently working on the monitoring part). An interesting point is that Cassandra-operator also works with ScyllaDB. NB: We have used this operator (with a little tweaking) in one of our projects. There were no problems during the entire period of operation (~4 months). 2.5. CassKop by Orange GitHub Maturity: Alpha License: Apache 2.0 Written in: Go CassKop is one of the youngest operators on our list. Its main difference from the other operators is its support for the CassKop plugin, which is written in Python and used to communicate between Cassandra nodes. The very first commit was on May 23, 2019. However, CassKop has already implemented a large number of features from our wish list (you can learn more about them in the project's repository). This operator is based on the popular operator-sdk framework and supports monitoring right out of the box. 2.6. Cass-operator by DataStax (ADDED in June '20) GitHub Maturity: Beta License: Apache 2.0 Written in: Go This operator, introduced to the Open Source community quite recently (in May 2020), was developed by DataStax. Its main objective is automating the process of deploying and managing Apache Cassandra. Cass Operator naturally integrates many popular DataStax tools, such as metric-collector for aggregating Cassandra metrics (bundled with Grafana dashboards) and cass-config-builder for generating Cassandra configs. The operator interacts with Cassandra using the management-api sidecar, so you will have to use specialized Docker images to deploy Cassandra. The emergence of this operator is an excellent opportunity to familiarize yourself with DataStax recipes for "cooking" Cassandra. Takeaways The large number of approaches and the variety of migration options suggest that there is real demand for moving Cassandra to Kubernetes. Currently, you can try any of the above methods at your own risk: none of the developers guarantees that their brainchild will run smoothly in a production environment. Yet many projects already look promising and are ripe for testing. Maybe Cassandra isn't so cursed in Kubernetes after all… This article was originally written by our engineer Maksim Nabokikh. Follow our blog to get more excellent content from Flant!
https://medium.com/flant-com/running-cassandra-in-kubernetes-challenges-and-solutions-9082045a7d93
['Flant Staff']
2020-06-12 09:16:28.907000+00:00
['Database Administration', 'Kubernetes Operator', 'DevOps', 'Kubernetes', 'Cassandra']
Neural Networks Demystified
Neural Networks Demystified Computers are now simulating how the brain works- how? Introduction Imagine you were in an exam for a computer science class. You’re soon to apply for universities,, and therefore are determined to get excellent grades on all of your work. You’re completely fluent in three programming languages and know all the course material, so you’re pretty sure you’ll do fine. Exams — Often a Tale of Dread and Procrastination You’re halfway through the coding problems, and you then read: *^@%!???? The entire test is out of 40, and half of those marks come from this beyond difficult question. I mean, how would you even approach whipping up something capable of sorting thousands of images in a couple hours? This is where machine learning comes into play, an emerging technology capable of tasks like recognizing butterflies on their own. In fact, way more, like even detecting cancer or driving cars. Uh, ok? So how does any of that work, or for that matter, make sense? Background Well, the basis for machine learning, a subset of artificial intelligence, is neural networks. As the name suggests, they take inspiration from how the human brain operates. A Model of a Neural Network Artificial neurons simulate real neurons, and the entire network is capable of making predictions and drawing conclusions through processing inputted data. Today, I’ll be discussing feedforward neural networks — where inputs and outputs strictly move to the right. This’ll make more sense in a bit. Neurons Neurons are the fundamental building block of neural networks. Think cells for organisms or atoms for matter. They are necessary for your model. They produce values that represent predictions/decisions through computing inputs. Inputs, Weights and Biases Before talking about the types of neurons, it’s important to discuss the important concepts that accompany them. Inputs, weights and biases are what make a group of neurons an actual network. They are all necessary for a neuron to compute and then output something. Inputs (x) Inputs are simply data points fed into a neuron. Because neural networks are mathematical models at their core, inputs are represented by a numerical value. For example, the pixel values of a greyscale butterfly photo. Weights (w) Then, weights come into play. Essentially, they represent the importance of an input, and multiply the input by a select number. When a data point fed into a neuron is very important, they’re in a sense heavily weighted (a high weight value). Thresholds and Bias (b) So far, we have the data we’re inputting into the neuron and weight, what we multiply the former by. However, there is one more thing to learn. Assuming we’re using perceptron neurons (more on that later), our neurons look something like this. Output 0 if wx < threshold Output 1 if wx ≥ threshold The threshold refers to the number necessary for our weighted input(s) to be equal or over to for the perceptron to output a 1. For clarification, perceptrons output 1’s when confident in a prediction and 0’s when not. If we were working with watching a football game, our input could be how much the user likes the teams playing from 1–5. This seems like an obvious indicator of whether they should watch or not, so we’d give it a high weight of 2. And, the perceptron should only output “1”, AKA you should watch, if their favourite teams are playing. As such, the threshold is 6 (need to like the team > 3). 
This would look like this: Output 0 if 2x < 6 Output 1 if 2x ≥ 6 However, this inequality can be simplified. Introducing biases! By moving thresholds to the other side and switching them to positive/negative (rule of simplifying equations) values, we now have a value called bias. This can be shortly explained as a value representing how easy it is to get above zero. Biases adjust the weighted sum of the inputs as an added parameter. So in our example, we’d now have: Output 0 if 2x — 6 < 0 Output 1 if 2x — 6 ≥ 0 With our weight being 2 and the bias -6. And as you may see, the inputs, weights and biases can be modelled with the equation wx + b. This is the same form as a linear equation! Once you get more than one input, you get a new weight as well. Multiple inputs are often used in neural networks. The output of a neural network before it’s been activated… keep reading! Activation Functions However, outputs of a neuron can get more complicated than a 1 or a 0. This happens when an activation is applied. Activations alter the output of a neuron. We’ll go into two common types of activation functions, and each has their own type of neuron. Perceptrons An outdated but important type of neuron is the perceptron. As aforementioned, after weighing and computing inputted data, it’s able to produce a 0 or 1 value. The function used here is the step function. There’s little curvature, and if the value of x after applied weight and bias is under 0, as you can see on the graph, the output will be 0. Whereas if the value of x after the bias and weight is 0 or above, you can see that the output will be a 1. We consider x = 0 as 1 for simplicity. Sigmoid Neurons Sigmoid neurons work a little differently, and provide a value between 0 and 1 inclusive. As the name suggests, the function used is the sigmoid function (however, also called the logistic regression function). As presented in the photo, the graph is much more smoothed out than the step function. This is how we’ll get values in between 0 and 1. Sigmoid neurons are used because we can get more descriptive outputs than 0 or 1. Perceptrons are harsh by nature — despite how close you may be, you could get a 0 outputted because the weighted input was 0.00002 less than 0. Because of this binary system, where everything is strictly constrained with 0 and 1, extremely close equations would be bundled with equations that are massively off due to the fact that all of their perceptrons would produce 0. Sigmoid gives us more accuracy, so we can see just from the number output whether a weighted input was close or not (0.213, 0.967, etc.). Basic Architecture The structure of a neural network is quite simple. They are comprised of three different types of layers — input, hidden and output. Input Layer The input layer refers to the leftmost column or group of neurons (circles) in the photo. They’re the first layer, in that data first starts here. This is because the data are “inputs”, and the input layer is responsible for feeding it to other neurons in the first hidden layer. Hidden Layers That brings us to the next type — hidden layers. Hidden layers sound mysterious, but really are just layers between the input and output layers. They don’t take in the initial inputted data/produce the final output. Output Layers And finally, the output layers. Here, the official outputs of the neural networks are released, and their numerical value represents something. These outputs are all stored in neurons. 
Typically, each neuron in the output layer is used as an individual component, representing something. So, back to Problem #20: if you were classifying butterflies with perceptrons, each output neuron would correspond to the confidence the network has that the initial photo is a certain species. Let's say Monarch, Painted Lady, Cabbage White and Red Admiral. If the neural net were saying "we think this image is a Monarch butterfly", the output neuron for the Monarch species would have a value of 1 and the neurons for the other species a value of 0. Species found through a quick Google search… I'm no expert! This would be a feedforward neural network: photos are initially fed into the input layer on the left, and then the output layer on the right produces the final values.
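To make the mechanics above concrete, here is a small illustrative sketch in TypeScript (my own example, not from the article; the numbers reuse the football scenario and are otherwise made up):

// Weighted sum of the inputs: w·x
const dot = (w: number[], x: number[]): number =>
  w.reduce((sum, wi, i) => sum + wi * x[i], 0);

// Perceptron: hard 0/1 output via the step function
const perceptron = (w: number[], b: number, x: number[]): number =>
  dot(w, x) + b >= 0 ? 1 : 0;

// Sigmoid neuron: smooth output between 0 and 1
const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));
const sigmoidNeuron = (w: number[], b: number, x: number[]): number =>
  sigmoid(dot(w, x) + b);

// Football example from earlier: weight 2, bias -6, "how much you like the teams" = 4
console.log(perceptron([2], -6, [4]));    // 1 -> watch the game
console.log(sigmoidNeuron([2], -6, [4])); // ~0.88 -> confident, but not a hard yes

The only difference between the two neuron types is the activation applied to the same weighted sum wx + b, which is exactly the point made above.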
https://medium.com/datadriveninvestor/neural-networks-demystified-34bee0c45fb7
['Joshua Payne']
2019-12-02 06:12:34.537000+00:00
['Machine Learning', 'Artificial Intelligence']
A Guide to Creating a Hotel Booking App
Search Functionality There are multiple elements to consider when designing and developing the search experience for your hotel app. To develop a successful search experience for your guests, it’s important to firstly determine the needs and wants of the user and the way they search. Data-driven research is critical to implement into your UX strategy to help identify how users are searching. Airbnb Auto-suggestions Search Available through their App Auto-complete and auto-suggestions are great features to implement within the search, helping users to recall hotels based on the letter’s the user types into the search, saving them time and reducing inaccuracies if the user cannot remember or spell the name of the location or hotel correctly. Auto-suggest offers virtually endless options and ideas within the context of the mobile app, which is related to the characters the user has typed into the search. Auto-suggest can provide alternatives to the user, which they may not have even considered. Travel platform Airbnb use auto-suggestions within the search on their mobile app, enabling users to not only look for destinations but homes, accommodation, experiences, and adventures within the context of the search input. Click Reduction A recent study by Koddi found that guests are 14% more likely to complete a booking if there are 4.6 clicks or less during the booking process. Minimising the number of actions a user has to take between browsing a hotel and booking a room, saves guests valuable time, allowing them to manage their booking in just a few simple taps. Limiting the number of fields a user has to fill out during the booking process is also an important factor to consider when designing a hotel app. ‘Forms with fewer fields have a 65% higher conversion rate on mobile’. — Koddi Removing unnecessary pages, designing simple call-to-actions and smooth interactions can help guide the user onto the next stage of the booking process. Creating a simple and seamless payment process for users is one of the biggest ways to increase conversion rates. There are a number of ways to optimise the purchase process, implementing postcode auto-complete, Apple and Android Pay and credit card OCR are all great ways to speed up the booking process and encourage conversion. Offline Functionality One of the most significant advantages a mobile app can offer a guest compared to other devices is offline functionality. This is particularly beneficial to travellers where users may not have access to wifi, high roaming charges or limited internet access. Offline functionality can provide your travellers with a host of features such as access to maps, guides and tourist information, removing limitations and encouraging guests to use your app and features again. Membership Programmes easyHotel’s Membership Scheme clubBedzzz which is Available through their New Booking App Offering a membership scheme to your guests through your hotel’s mobile app is a great way to encourage direct bookings. A report found that 71% of consumers who are members of loyalty programs say memberships are a meaningful part of their relationship with a brand. Budget hotel chain, easyHotel has integrated their clubBedzzz membership scheme into their new mobile booking app, providing users with access to discounts and incentives such as free WIFI and late check-out. A Multi-Channel Booking Experience Today’s modern consumers move seamlessly from platform to platform, no longer tied to a specific medium. 
Whether it be desktop, mobile, tablet or smartwatch, individuals will utilise whatever device suits their needs at any given time, meaning that hotel apps must be available across all devices. Allowing users to browse hotels on their desktop, begin the booking process on their tablet, purchase a hotel room on their phone and access their booking reference through their smartwatch, provides guests with a convenient way to manage their booking and increases customer value. Personalised Notifications Mobile applications can provide brands with a range of insights into their consumers’ shopping trends, allowing brands to see how users interact with their app, their shopping habits, and drop-off points. These vital insights equip hotel brands to offer enhanced personalised services to their guests such as discounts and exclusive offers. A recent study by Access found that 76% of customers felt that receiving personalised discounts and offers based on their purchase history was important, while 56% of customers are more likely to buy from brands that offer personalised discounts. Notifications are also a great way to enhance the guest experience during their stay. Notifying guests of breakfast options, classes and activities going on during their stay and places to visit are great ways to enhance the guest experience. Concierge Chat W Hotels Concierge Chat Providing a concierge chat feature within a hotel app is a great way to instantly understand the needs of your users and provide a truly bespoke experience for your guests. A study by Moxie found that 35% of guests would not have purchased a room if concierge chat was not offered to them. Providing guests with the means to book a taxi, order room service, dry cleaning, answer queries they have or additional extras through the app, helps your hotel brand to fulfil their needs with just a few simple taps of their smartphone. Differentiating your Hotel App As the travel industry becomes increasingly competitive, more and more companies are implementing mobile applications into their digital strategy to further enhance their customer experience. A recent study found that 60% of travel brands have stated that they are looking to invest or replace their mobile app within the next year. With an increasing number of hotel brands offering mobile applications, it’s vital to differentiate your mobile app and offer even more value to your customers. Luxury hotel brand, The Ritz-Carlton offers additional features within their mobile app, implementing Quick Response Codes to provide guests with information about the art, furniture, and areas of interest featured in their hotels. Reflecting the Ritz-Carlton’s uniqueness and luxurious brand.
https://medium.com/universlabs/a-guide-to-creating-a-hotel-booking-app-6e972297274e
['Univers Labs']
2020-01-22 15:32:21.483000+00:00
['Mobile App Development', 'Mobile Ux', 'Hotel Mobile App', 'Hotel Booking App', 'Mobile Ui Design']
Tradeoffs In Design
What are tradeoffs? When you are designing, you are making design decisions. Every time you make a decision, you are compromising on something else. It is up to you to decide what is mandatory, what is optional, and what to keep out. Every team in an organization deals with tradeoffs. E.g., engineers decide internally how to improve the code and what they can change in a given period with the resources they have. Tradeoffs in design While designing, you go through many alternative ideas and solutions, and it can be difficult to decide which you should select, which should be left out, and why. To make this decision easier, you look at different metrics. In a project, the three most common metrics are time, scope, and cost. You need to negotiate on these metrics. Scope: Everything is possible, but not everything should be in scope. You cannot fix all the problems at once, and not all the problems need to be solved immediately. You need to decide which problem to tackle first and how to scope it out for incremental updates. Time: Is the time sufficient to build this feature? If it is twice as useful but takes 10 times longer, is it worth it? Cost: How much do you have to spend to build the feature? What is the investment, and will it yield the return we are hoping to achieve? Tradeoffs also depend on priorities: a decision may be made to attain a business goal, sometimes because of technical limitations, sometimes to put certain user needs above others. Tradeoffs are more than making a decision One important aspect of deciding on tradeoffs is reaching consensus and alignment with the people you work with. You need to get stakeholders and cross-functional team members to agree on and prioritize the various features of the project. How can you decide on tradeoffs? There are a lot of factors that might affect the project. Group these factors into themes and patterns, then choose a few and decide the priority based on them.
https://sumitnarangin.medium.com/tradeoffs-in-design-fb11b999753c
['Sumit Narang']
2020-09-06 14:27:36.149000+00:00
['Design', 'UX', 'Systems Thinking', 'Product Design', 'Design Process']
Watch for the Reformers
John F. Kennedy once observed, "when written in Chinese, crisis is composed of two characters. One represents disaster and the other opportunity." Educational reformers have historically taken advantage of crises to serve their capitalist aims. In a recent report, the Heritage Foundation observed, "sometimes it takes a natural disaster to catalyze meaningful education change. That's what happened in New Orleans, where one of the nation's most vibrant school choice districts has arisen from the wreckage of Hurricane Katrina." In light of our current national reality, we must ask how school reformers are positioning themselves to create lasting changes in the national educational landscape during the pandemic. I wouldn't call the New Orleans reform "vibrant"; chaotic or ruinous, perhaps. When reformers dismantled the large school district, charter schools, mostly created by charter management organizations and out-of-towners, proliferated without oversight and accountability. Teachers who were themselves flood victims returned home without jobs, and emerging scholarship has blamed the restructuring for the dismantling of the Black middle class. In recent years, many of the charter schools have been criticized for their punitive disciplinary measures. The effects linger today, more than a decade after the natural disaster: without a centralized district and database, where many students ended up enrolled remains unknown. Scholarship has called this response to a natural disaster "Disaster Reformation". A similar pattern is slowly emerging in Puerto Rico following Hurricane Maria. Parents are enrolling students in cyber charter schools in record numbers this year. While enrollment may seem like an immediate solution during these uncertain times, we must weigh the long-term effects of these pursuits. Contemporary virtual charter schools were the vision of Ron Packard, formerly of K12 education. These so-called public schools profit by pocketing the per-pupil expense determined by each district. Since the charter schools do not have to cover overhead expenses such as hiring certified teachers, maintaining a low teacher-to-student ratio, or paying for infrastructure, they reap handsome monetary rewards; K12 has a market value well over a billion dollars. There is little data to support their academic gains and much-advertised individualized approach. At countless virtual charter schools, attendance was captured by a one-time login, not time logged in. A 2015 study by CREDO (Center for Research on Education Outcomes) at Stanford University, in collaboration with the Center on Reinventing Public Education at the University of Washington and Mathematica Policy Research, found that students made no academic gains in mathematics while enrolled. Ironically, as the Wall Street Journal reported upon the study's release, the study was funded by the Walton Family Foundation, which supports a variety of school privatization efforts. Long-term, charter schools pose the risk of diverting funding from urban school districts, many of which are already burdened by an unfair funding formula. A study by Bryan Mann and David Baker in the American Journal of Education found "With the large movement of students, the mean amount of public funds transferred from residential districts in 2014 was about $800,000 (standard deviation about $3,100,000).
With dubious academic benefits, districts with the lowest tax base lost significant revenue to cyber charter providers." All the while, countless virtual charter schools have been implicated in fraud, mismanagement of public funds, closures, and data manipulation. Sound familiar? There are bound to be long-term effects that emerge out of this year's temporary educational decisions. If virtual charters with little demonstrated success have been able to profit during times of stability, imagine their potential to capitalize on our current crisis. As the new school year begins, we must be on the lookout for individuals who would co-opt this time to restructure schooling for their own financial gain, and we must explore accountability practices that protect students and taxpayers' hard-earned money.
https://medium.com/age-of-awareness/watch-for-the-reformers-37a3cabe70ed
['Lydia Kulina']
2020-08-21 13:01:57.615000+00:00
['Education', 'Schools', 'Coronavirus']
Testing In React, Part 2: React Testing Library
The headline for React Testing Library is, “Test functionality, not implementation”. The idea behind this concept is that implementation is constantly being iterated and improved upon, and therefore is unreliable as a means of testing a UX. For example, let’s say you are working on a small React component for which, early on, you write tests. The component handles a certain amount of logic and renders something to the DOM. As your application grows, so does this component, and eventually it makes sense to break it up into multiple components. As far as the user is concerned, nothing has changed — what is being rendered to the DOM is still the same. In other words, you have changed only the implementation, but not the functionality. If your initial tests were written to test implementation, they would break. But by testing the functionality — DOM nodes rather than rendered components — your tests continue to pass (unless you screw up the refactoring, of course). A quick note before we start: Jest and React Testing Library are used very much in conjunction with one another, and while I think it is helpful to focus on each separately, it is nearly impossible to talk about one without the other. You can think of Jest as doing the actual testing, while React Testing Library recreates the thing to be tested (be it an event, a node, etc.). Any Jest functionality referred to here will be covered in depth in my next post.
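To make the distinction concrete, here is a minimal sketch of a test in this style; the Counter component and its labels are invented for illustration, while render, screen and fireEvent are the usual React Testing Library entry points and toBeInTheDocument comes from the jest-dom matchers:

import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import '@testing-library/jest-dom'; // adds DOM matchers such as toBeInTheDocument()
import { Counter } from './Counter'; // hypothetical component under test

test('increments the displayed count when the button is clicked', () => {
  render(<Counter />);

  // Interact with the DOM the way a user would, not with component internals
  fireEvent.click(screen.getByRole('button', { name: /increment/i }));

  // The assertion targets rendered output, so splitting Counter into several
  // smaller components later would not break this test
  expect(screen.getByText(/count: 1/i)).toBeInTheDocument();
});

Because the test only touches DOM nodes, it keeps passing through refactors that leave the rendered output unchanged, which is exactly the "functionality, not implementation" idea.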
https://medium.com/javascript-in-plain-english/testing-in-react-part-2-react-testing-library-f32432b93c6c
['Bryn Bennett']
2020-09-28 16:56:07.339000+00:00
['React', 'Programming', 'Software Testing', 'JavaScript', 'Software Development']
Lessons Learned After Making the First 10 Commercial Apps in Flutter.
It’s been two years since we’ve started the development of our first commercial application in Flutter at LeanCode in July 2018. When I first learned about Flutter, although it was promising, I remained skeptical, mostly because of the negative experience from our recent investment into Xamarin. Since there is always some new and exciting technology our team wants to bring to the project, we challenged them and asked for the proofs on how this can bring real value to the client. This was an agricultural project, dealing with herd management. There is one interesting artifact, typical for this industry, which is widely used by the breeders to calculate the demand for the barns and our team felt that this is a great insight from the UX perspective. Within two days they proudly showed the Proof of Concept demonstrating how easy it is to build an animated wheel giving you a great and smooth experience. Eventually, this has evolved into the full-scale animation which you can see here: Example of the simple animations in Flutter in the Kedzia App project. With this delighter, I was convinced that Flutter is worth experimenting with. Initially, we didn’t want to commit ourselves 100% to Flutter, so we kept the React Native projects in parallel. When faced with writing our first implementations of Google Maps without official support from the Flutter team, I felt that this pessimism was justified. You can learn more about the experience of writing this first commercial application in Flutter and related difficulties here. Eventually what we have delivered was a relatively simple application, with less than 40 views and costs below 500 hours of Flutter development. Once we’ve delivered this first application and collected a five-star review from our client we thought that we should start to recommend Flutter more actively to our clients since the beginning of 2019. From May 2019, we decided that Flutter will be our no.1 choice when it comes to mobile technologies and we will cease our involvement in developing apps on different frameworks. Since then we’ve delivered more than 10 mobile products in Flutter and dozens of MVPs/PoCs. Now, it’s time to draw the conclusions. Flutter is quicker. And we are not talking here about the theoretical approach, although this is interesting as well (find the paper by Bran De Connick here). We had a unique chance to rewrite the apps from both Xamarin (client-facing mobile app) and ReactJS (restaurant manager facing web app) and results were comparable. It took us 67% of the time in comparison to Xamarin (667h vs 987h) and 69% of the time needed to create the app with ReactJS (486h vs 704h) for the very same scope using the same API on a backend side. Stop and think about those numbers for a moment. This is the ultimate answer to how to build the mobile application quicker and cheaper. With the economic downturn, it has never been more important to deliver new digital products on time and within budget. It can also mean that for the very same budget you can deliver a 50% bigger backlog. Imagine yourself as a product owner working on the priorities for your development team being able to move the budget barrier 50% further. Time spent on rewriting the project to Flutter from mobile Xamarin and web ReactJS versions. This will give a great boost to both the creativity of your team and the quality of the work they are delivering. For a detailed analysis of the GastroJob case have a look at our talk from Flutter Europe Conference here or check our case study here. 
90% of the code on average is shared between iOS and Android. 90% of our code is not written twice for both native platforms. 90% of the time is saved in comparison to native app development and plenty of creativity is released due to the coherence and team being united around one goal instead of being divided into two native streams. Beyond sharing the business logic and user experience, we can use plenty of ready-to-go libraries, which bring additional benefits. Firstly, they can speed up the development process by providing commonly used logic for many different things used within the app (e.g. communication with the server (HTTP client), push notifications, secure storage, databases, animations, etc. ). Secondly, it is easier to integrate with many popular services (e.g. Firebase, Maps, Payments, Social Login, Analytics, Crash Reporting Services, etc.). Therefore you have to write code twice (separately for iOS and Android) only if you’re writing custom platform-specific code. Yet, even then, bridging between Dart and native code is fairly easy which is explained later in this article. What is more, there are even bigger savings when we take into account the quality factor which makes the app cheaper also to maintain in the long run. Matter of fact has inspired us to investigate all projects built in Xamarin, React Native and Flutter to search for the pattern and what we have discovered is that Flutter projects typically need 8–10% of the time spent on bugs in comparison to React Native with the range 7–14% and Xamarin 11–23%. Cooperation with UX/UIs has never been so good. Something clicks on the cooperation between UX/UI Designers and Devs during the Flutter projects. Might this be for the reason that they don’t need to make this tedious native adaptation and they set their creativity loose. Yet, the same would be expected from the React Native team experience, and this is not the case. When we dug a little deeper, we discovered that Flutter brings pure joy for devs who can write beautiful interfaces, which previously were associated with an extra burden that slowed down the pace. Therefore, they were more willing to cooperate and we’ve observed the pair programming sessions started to happen with designers making the live experiments hand-in-hand with devs. After several such interactions, thanks to the robust theming engine, teams were able to come up with an adaptable design language for the app which not only looks great in Figma or Adobe XD but also gives the best possible user experience and the feeling of coherence and proper design order. How this coherence is present over the lifetime of the project is also interesting. Previously when UX/UI Designers were reviewing the product on the demo session they had most of their comments at the end of the project, changing their minds or simplifying things after the hands-on experience. What is unique about Flutter is the fact that at the end of the project the involvement of designers is completely fading away, as they did their job early in the beginning during design loops of trials and errors. This also means that the refinement of subsequent sprints takes less time and this continuous cooperation is reflected in the stable scrum pace of next releases. Animations are easy and affordable. Not only is it easy to implement some static views in Flutter, but it also provides great new opportunities when it comes to animations. 
This brings this UX-DEVs cooperation to another level making nice transition effects accessible as never before. So far that was typical only for the big-budget projects. Nowadays, thanks to Flutter, this is accessible to all developers. It happens because Flutter renders on bare metal, directly on the canvas with full control over the drawing which enables us to create the pixel-perfect images on all platforms without additional conditional formatting as it is the case of other cross-platform frameworks. When drawing for example with React Native, you are based on the default views which can alter the appearance of your new controls, therefore, building a smelly code, which is platform dependent and directly in contradiction to the approach that the shared code should not take into the platform where it is deployed. Flutter apps are much lighter. This is worth considering when facing the PWA business choice which proves how easy it is to add the shortcut on your phone to save the website as if it was an app. Let’s not comment on the user experience, but only on this burden of downloading the app. Yes, this is not effortless in both cases. The best PWA websites according to the SimiCart Blog require users to download from 4.9MB up to 11.6MB on loading. This is far lower than the average size of our Xamarin apps which is 25MB, even lower than the average for our React Native 32MB apps, but very close to the Flutter average which is 11MB with the range of 9–14MB for all our Flutter apps (just pay attention that those numbers are not directly comparable, though they highlight the pattern). You have to admit this (11MB) is extremely low for the native application experience, for the smooth look and feel, rapid reactions, and all services typical of native apps like push notifications, etc. This means that there are no barriers for the user to download the app and start to use it as efficiently as possible with all plugins and integrations. This also means that apps are more performant because they can execute similar tasks with smaller code. This boost in performance translates directly into those milliseconds that give you the quicker experience in cold loading of an app, animations, CPU and memory usage as compared to other cross-platform frameworks (actually when Flutter can give the better cold app start even in comparison to Swift/Kotlin native apps). Native code is accessible when needed. What is great about Flutter is the fact that the mobile team is more eager to go down to the native code and write some Kotlin/Swift packages as they can have a full control over the native implementation, which was not the case for example in Xamarin where the final code was generated in an isolated black box. Bridges to the native code are also more powerful because they are completely transparent and therefore more friendly for developers who have transferred from the native environments. Thanks to this approach this is relatively easy to implement specific features like local payment providers or some niche complex library. What is more, even advanced features requiring biometric algorithms for face recognition or fingerprints check are smoothly running on Flutter as it was showcased in the banking app developed by ING for Business in Flutter presented during Flutter Warsaw Meetup by Jakub Biliński (link). Proof of Concept in Flutter is easy. 
The ease of integrating with the native code brings additional benefits when we need to build the Proof of Concept in order to check the Riskiest Assumption Test. This means that before the client decides to sign the contract for the whole project we can build the smallest possible app which answers the most critical business or technical question. This is the point, where we cannot overestimate Flutter capabilities. Every time we are timeboxing such initiatives to two days of development, trying to find out what can be achievable in such a short period. So far we were experimenting with a variety of PoCs ranging from AR supported image detection systems (below), Proof of Concept Example accomplished within 2 days with Flutter showcasing AR. through whiteboard drawings and advanced animations. Building a rapid PoC not only enables us to showcase the speed of development but also helps us to provide more accurate estimates for the final project. DEVs are happy. From the perspective of building the internal team, Flutter proved to be a good choice. Initially, there were few Flutter developers because nobody had professional experience. Yet, what was different in comparison to for example Xamarin where developers had a C# background, in case of Flutter all candidates were already mobile developers transferring from the native, mostly Android, background. As Flutter became more and more popular and thanks to the very active community, which is organizing regular meetups and webinars, the pool of available candidates grew exponentially and nowadays there is a substantial number of professionals looking for a job in Flutter projects who are willing to change sides after years of native app development. Thanks to the well-documented Flutter code and the availability of additional libraries that are driven by the community it is fairly easy to make such a transfer. Therefore some companies, who were previously having their independent mobile teams are investing in aligning them around Flutter. At LeanCode we were even organizing Flutter bootcamps, three-day-long training programs taking place at the lakeside to give the hands-on experience, and select the best candidates for the intensive, two-month-long study program, where learning Flutter was accompanied by doing some non-commercial projects. We were surprised to notice that after 9-weeks of training developers were ready to work side-by-side with their colleagues who started coding in Flutter from the early days. Such a short learning cycle proves that making a choice to switch from the native app to Flutter from the business owner’s perspective is not a revolution but an evolution in which their internal team can take an important part.
https://medium.com/swlh/lessons-learned-after-making-the-first-10-commercial-apps-in-flutter-f420808048cd
[]
2020-07-31 12:54:23.120000+00:00
['Product Owner', 'Mobile App Development', 'Mobile Apps', 'Flutter', 'Flutter App Development']
Angular 10 in depth
Angular 10 in depth Angular 10, the latest major version of Angular, has just been released. Time to discover what's new! NEWS: Angular 11 is out. Check out my article to learn everything about it. In this article, I'll go over (almost) everything noteworthy in this brand new release. I'll also highlight what's changed around Angular. If you want a helicopter view of what's included, then check out the official Angular blog. Here, I'll try to dig deeper into the release notes. Angular 10 is already here, just four months after version 9. Of course, during this short time period, there's not that much that has changed. Still, there are quite a few noteworthy features, in addition to the large number of bug fixes brought by this release. As a reminder, the Angular team tries to release two major versions per year, so Angular 11 should arrive this fall. Support for TypeScript 3.9.x What can I say? You know me, I love TypeScript. So the very first thing that makes me happy about this release of Angular is the fact that it supports TypeScript 3.9. I've already published an article about the new features of TS 3.9, so if you haven't read it, go ahead and upgrade asap, it's really worth it! I have also written another one about what's coming with TypeScript 4.0. Note that Angular 10 has dropped support for TS 3.6, 3.7 and 3.8! I hope that it won't hold you back. Thanks to its support for TS 3.9.x and other improvements in the compiler CLI, type checking is faster than ever in Angular 10, which should be positive for most projects out there, especially larger ones. Aside from that, Angular 10 also upgraded to TSLib 2.0. For those who don't know, TSLib is an official library providing TypeScript helper functions that can be used at runtime. TSLib works in combination with the importHelpers flag of "tsconfig.json"; when enabled, it allows the compiler to generate more condensed/readable code. Anyways, nothing to worry about; TSLib hasn't changed much. Optional stricter settings Strict mode for the win! Angular 10 brings the possibility to create stricter projects right at creation time, which is great and should certainly be used for all new projects. To create a project with stricter defaults, use: ng new --strict This will allow you to detect issues much sooner (finding out about bugs at build time is better than at runtime, right?). This new option enables TypeScript strict mode (which you should all enable on your projects!). Next to that, it also enables strict Angular template type checking, which I wrote about last week. It also lowers the budgets in "angular.json" quite drastically, which is good as it will encourage new users to pay attention to the bundle size of their applications (about that, I'm planning an article on how to analyze the bundle size of your apps). It also enforces a stricter TSLint configuration which bans "any" ("no-any" is set to true), and also enables quite a few interesting rules provided by codelyzer. Note that even though it is strict, you can still go much further with TSLint. For instance, here's the config of one of my projects, which you can use as a starting point. I think that this new "strict" option is awesome, but I am a bit sad that it is an optional flag rather than the default. I feel like stricter means safer, so why make safer optional? I imagine that the rationale is that by being more lenient by default, Angular feels less scary at first? Anyways, if you do create a new project, please enable this and go even further; you'll thank me later.
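As a quick illustration (my own minimal sketch, not taken from the article), this is the kind of code the strict flags force you to clean up: noImplicitAny rejects untyped parameters and strictNullChecks forces explicit handling of possibly-null values.

interface User {
  name: string | null;
}

// Without the explicit User type on the parameter, noImplicitAny would reject this function;
// without the null check, strictNullChecks would reject the call to toUpperCase()
function greet(user: User): string {
  return user.name !== null ? `Hello, ${user.name.toUpperCase()}!` : 'Hello, stranger!';
}

console.log(greet({ name: 'Ada' }));  // Hello, ADA!
console.log(greet({ name: null }));   // Hello, stranger!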
New TypeScript configuration layout With this new release, the TypeScript configuration provided by default in new projects has changed. There's now a "tsconfig.base.json" file in addition to "tsconfig.json", "tsconfig.app.json" and "tsconfig.spec.json". So why all these configuration files? To better support the way IDEs and build tools look up the types and compiler configuration. With the new setup, "tsconfig.json" simply contains TypeScript project references based on the so-called "solution style" (Visual Studio is back? :p) brought by TypeScript 3.9, which is great to improve compilation times and enforce a stricter separation between parts of the project. In this case, the separation is there to cleanly isolate application code (taken care of by "tsconfig.app.json") from tests (handled by "tsconfig.spec.json"). If you look at the "tsconfig.base.json" file, you'll find the bulk of the TypeScript configuration. Note that this one was generated using the strict option discussed in the previous section. This file only configures TypeScript compiler and Angular compiler options; it doesn't list/include/exclude the files to compile. That part is handled by the "tsconfig.app.json" file, which lists "main.ts" and "polyfills.ts". If you have an existing project without this layout, then you should probably review your TypeScript configuration in order to stay aligned and benefit from the same improvements. Ok ok, enough about the TypeScript config. NGCC In case you haven't done this yet (this was already true with NG9), make sure that you have a postinstall script in your "package.json" file to execute NGCC right after an installation. Note that in this release, NGCC is more resilient. Previously, it couldn't always recover when one of its worker processes crashed. So if you sometimes saw issues with NGCC hanging, this should now be fixed. There were also quite a lot of improvements made to NGCC, including performance-related ones; performance is clearly my biggest pain point around NGCC ;-) New default browser configuration Web browsers move faster than ever. Angular follows suit and now uses an updated browserslist file (.browserslistrc). As explained in the official blog post, the side effect of the new configuration is that ES5 builds are disabled by default for new projects. Of course, at this point it doesn't make much sense anymore to generate ES5 code. Modern Web browsers support at the very least ES2015. If you still use Internet Explorer, then it's clearly time to let go of the past! To get the exact list of supported Web browsers, just execute the following command in your project: npx browserslist The output is generated based on the contents of the ".browserslistrc" file at the root of the project. You can find out more about this here. Bazel Sorry to disappoint, but did you know that Angular Bazel has left Angular Labs? Alex Eagle wrote about it on dev.to a while ago. Basically, support for Bazel is not part of the Angular project anymore. Bazel will never be the default build tool in the Angular CLI after all… I won't go over the reasons here, but make sure to take a look at Alex's article as it is very interesting (as usual). @angular-devkit/build-angular (0.1000.0) Behind this barbaric name (and version???!) hides an important piece of the way Angular apps are built. The newest version of this package brought us some cool new features. The coolest one (if you're using SASS, that is) is the fact that build-angular will now rebase relative paths to assets.
As stated in the commit, previously, paths like url(./foo.png) referenced in stylesheets and imported in other stylesheets would retain the exact URL. This was problematic since it broke as soon as the importing stylesheet was not in the same folder. Now, all resources using relative paths will be found. Cool! Another hidden gem in that release is the fact that build-angular now dedupes duplicate modules that Webpack can’t handle. This is done through a custom Webpack resolve plugin. And more… Incremental template type checking In this release, the compiler CLI is now able to perform template type checking incrementally. Hopefully this will save quite a few trees (and maybe a laptop or two)! :) CanLoad Previously, CanLoad guards could only return booleans. Now, it’s possible to return a UrlTree . This matches the behavior of CanActivate guards. Note that this doesn’t affect preloading. I18N/L10N Previously, only one translation file was supported per locale. Now, it is possible to specify multiple files for each locale. All of those then get merged by message id. I can’t say much more about this since I’m only using ngx-translate & transloco these days… Check out this issue for more details. Service Workers The default SwRegistrationStrategy has been improved. Previously, there were cases where the Service Worker never registered (e.g., when there were long-running tasks like intervals and recurring timeouts). Again, I can’t say much more as I’m not using NGSW but Workbox. Angular Material As usual, Angular Material’s releases follow those of Angular, so Angular Material 10 is here, with many changes. I won’t go over these in this article as it is quite long already, so go check out the release notes if you’re interested! Bug fixes galore As mentioned a few weeks back, the Angular team has invested a lot of time and effort in bug fixing and backlog grooming. They’ve decreased their issue count by > 700 issues, which is quite impressive. If you were the victim of known bugs in previous versions of Angular, then it’s probably time to take a look around and see if those aren’t fixed by Angular 10. A funny one (to me that is) is the fact that enabling strict template type checking caused issues with routerLinks because their underlying type didn’t include null/undefined. Another one that was fixed is the KeyValuePipe , which didn’t play along well with the async pipe. While we’re on templates, note that the language service of Angular now supports more array-like objects such as ReadonlyArray and readonly property arrays for *ngFor loops. How cool is that? :) Deprecations and removals As stated in the official blog post, the ESM5/FESM5 bundles that were previously part of the Angular Package Format are now gone because the downleveling to ES5 is now done at the end of the build process. If you don’t use the Angular CLI to build your application/library and still need ES5 bundles (poor souls..), then you’ll need to downlevel the Angular code to es5 on your own. IE 9, 10 and Internet Explorer Mobile are not supported anymore. But again, if you ask me, you should just ditch IE altogether at this point. It’s nonsense to keep zombies around. There are quite a few deprecated elements such as ReflectiveInjector , CollectionChangeRecord , DefaultIterableDiffer , ReflectiveKey , RenderComponentType , ViewEncapsulation.Native , ngModel with Reactive Forms, preserveQueryParams , @angular/upgrade , defineInjectable , entryComponents , TestBed.get , etc. You can check out the full list here. 
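As a small illustration of one of these deprecations (a hypothetical test; the service name is invented): TestBed.get returns any and is deprecated, while TestBed.inject, available since Angular 9, is its strongly typed replacement.

import { TestBed } from '@angular/core/testing';
import { MyDataService } from './my-data.service'; // hypothetical service

describe('MyDataService', () => {
  beforeEach(() => {
    TestBed.configureTestingModule({ providers: [MyDataService] });
  });

  it('should be created', () => {
    // const service = TestBed.get(MyDataService); // deprecated, typed as any
    const service = TestBed.inject(MyDataService); // preferred, strongly typed
    expect(service).toBeTruthy();
  });
});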
Classes using Angular features without an Angular decorator are not supported anymore Up to version 9, it was okay to have a class using Angular features without specifying one of the decorators (@Component, @Directive, etc.). With Angular 10, it is now mandatory to add an Angular decorator if a class uses Angular features. This change impacts all cases where you have components extending from a base class and one of the two (i.e., parent or child) is missing an Angular decorator. Why is this change mandatory? Simply put, because Ivy needs it! When there's no Angular decorator on a class, the Angular compiler doesn't add extra code for dependency injection. As stated in the official doc, when the decorator is missing from the parent class, the subclass will inherit a constructor from a class for which the compiler did not generate special constructor info (because it was not decorated as a directive). When Angular then tries to create the subclass, it doesn't have the correct info to create it. In View Engine, the compiler has global knowledge, so it can look up the missing data. However, the Ivy compiler only processes each directive in isolation. This means that compilation can be faster, but the compiler can't automatically infer the same information as before. Adding @Directive() explicitly provides this information. When the child class is missing the decorator, the child class inherits from the parent class yet has no decorators of its own. Without a decorator, the compiler has no way of knowing that the class is a @Directive or @Component, so it doesn't generate the proper instructions for the directive. The nice thing about this change is that it brings more consistency into the Angular world (and consistency is good :p). Now things are simple: if you use Angular features, then you must add a decorator. As an example, a base class that declares an @Input() but has no decorator of its own won't compile with Ivy once a component extends it. To fix the issue, you need to add a decorator to the base class; a minimal sketch is included after the upgrade notes below. You can learn more about this change here. Mandatory generic type for ModuleWithProviders In previous releases, ModuleWithProviders already accepted a generic type, but it was not mandatory. With NG 10, the generic argument is required. It's a good thing for type safety anyways, so hopefully you already had the parameter defined. If you stumble upon the error "error TS2314: Generic type 'ModuleWithProviders<T>' requires 1 type argument(s)" because of a library that you're using, then you should contact the library author to get it fixed, as ngcc can't help there. A workaround in the meantime is to set skipLibCheck to true in your TypeScript configuration so that declaration files are not type-checked. Other breaking changes Here are the notable breaking changes: Resolvers behave differently: those that return EMPTY will now cancel navigation. If you want to allow navigation to continue, then you need to make sure that your resolvers emit a value, for instance using defaultIfEmpty(...), of(...) and the like. Service worker implementations that rely on resources with Vary headers will not work like they did previously: Vary headers will be ignored. The proposed "solution" is to avoid caching such resources, as they tend to cause unpredictable behavior depending on the user agents. Because of this, resources may be retrieved even when their headers are different. Note that cache match options may now be configured in NGSW's config file. Property bindings such as [foo]=(bar$ | async).fubar will not trigger change detection if the fubar value is the same as the previous one. The workaround, if you rely on the previous behavior, is to manually subscribe / force change detection, or to adapt the binding in order to make sure that the reference does change. A few format codes of formatDate() and the DatePipe have changed; apparently the previous behavior was incorrect for day periods that crossed midnight. The function that stands behind the UrlMatcher utility type (a function alias) now correctly states that its return type may be null; if you have a custom Router or Recognizer class, then you need to adapt those. Additional occurrences of ExpressionChangedAfterItHasBeenChecked can now be raised by Angular for errors that it didn't detect before. Angular now logs at error level when it notices unknown elements / property bindings in your templates; these were previously warnings. Reactive forms' valueChanges had a bug with FormControls that were bound to inputs of type number: they fired twice since 2016, a first time after typing in the input field and a second time when the input field lost focus. Now, number inputs don't listen to the change event anymore, but to the input event. Don't forget to adapt your tests accordingly. Note that this breaks IE9 compatibility, but that's not a problem for anyone.. right? ;-) The minLength and maxLength validators now make sure that the associated form control values have a numeric length property. If that's not the case, then these won't be validated. Previously, falsy values without a length property (e.g., false or 0) were triggering validation errors. If you rely on that behavior, then you should add other validators like min or requiredTrue. Upgrading As usual, there's a complete upgrade guide available and ng update will help you out: https://update.angular.io/#9.0:10.0 If you do the upgrade manually and still use Protractor (just in case), then don't forget to update Protractor to 7.0.0+ as previous versions had a vulnerability.
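Coming back to the base-class decorator requirement described above, here is the promised minimal sketch (the class names are invented for illustration): the base class uses an Angular feature (@Input), so under Ivy it needs its own decorator, and an empty @Directive() is enough when the class is only meant to be extended.

import { Component, Directive, Input } from '@angular/core';

// Before Ivy, this base class could get away without any decorator;
// with Angular 10 the @Directive() marker is required because the class uses @Input()
@Directive()
export class BaseList {
  @Input() items: string[] = [];
}

// The subclass is decorated as usual and simply inherits the input
@Component({
  selector: 'app-fancy-list',
  template: '<ul><li *ngFor="let item of items">{{ item }}</li></ul>',
})
export class FancyListComponent extends BaseList {}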
Conclusion In this article, I’ve explored the new features of Angular 10, as well as the deprecations, removals and breaking changes. All in all, even if this isn’t an earth-shattering release, it’s clearly a rock-solid one with tons of bug fixes and a few gems. As usual, we can only be thankful for all the efforts made by the Angular team and the community that surrounds it! That’s it for today. Liked this article? If you want to learn tons of other cool things about software/Web development, TypeScript, Angular, React, Vue, Kotlin, Java, Docker/Kubernetes and other cool subjects, then don’t hesitate to grab a copy of my book and to subscribe to my newsletter!
https://medium.com/javascript-in-plain-english/angular-10-in-depth-a48a3a7dd1a7
['Sébastien Dubois.']
2020-11-15 14:36:10.663000+00:00
['Angular', 'JavaScript', 'Software Development', 'Typescript', 'Web Development']
SEGRON, beyond end-to-end test automation
SEGRON has secured €3M Series A Financing from Credo and OTB. We talk with Thomas Groissenberger, its CEO. PetaCrunch: How would you describe Segron in a single tweet? Thomas Groissenberger: SEGRON provides future-proof Beyond End2End test automation solutions for communication networks. PC: How did it all start and why? TG: Working as a consultant for big Telecom players 25 years ago, I met Michael. We developed several projects together. After many years, we realised that our clients shared a common need which was not fully covered by any solution available in the market. We then decided to establish Segron together, initially under the business model of a consulting firm specialised in the field of network design, deployment and testing for the Telecom industry. Years later, Jari joined the company, making a big contribution to the development of what is today our main product, SEGRON ATF (Automated Testing Framework), and Petri joined more than 3 years ago, strengthening the entire Segron financial area. PC: What have you achieved so far? TG: Certainly, the biggest achievement and asset so far is our wonderful and committed team: engineers and experts coming from the Telecom industry with invaluable know-how in the sector. Today, we are an international group of 50 people coming from more than 8 different countries. Each of us makes an invaluable contribution of culture, skill-sets, abilities and personalities that makes us unique in the market. In terms of product development, SEGRON ATF, our main product, has evolved enormously. Unlike competing solutions, the SEGRON ATF can orchestrate testing with real Out-of-the-Box end-user devices (phones, tablets, laptops, IoT devices, etc.) while providing full access to the systems under test, enabling real-time Signal Trace and system log analysis. After the introduction of SEGRON ATF and during its continued development, major telecommunications carriers and service operators have selected ATF to meet their current testing requirements. After several years of hard work, we have established a solid footprint across the DACH region. Our product now has a presence in more than 6 countries across Europe. PC: How will you use your recent funding round? TG: The funding is undoubtedly a great vote of confidence, already allocated to scaling up our team, innovating in our R&D department, increasing our communications and marketing efforts and reaching more customers in Europe. PC: What do you plan to achieve in the next 2–3 years? TG: Our main goals for the upcoming years are:
https://medium.com/petacrunch/segron-beyond-end-to-end-test-automation-2d11bbb75fbc
['Kevin Hart']
2019-09-03 20:17:59.818000+00:00
['Software Engineering', 'Automation', 'Test Automation', 'Software Testing', 'Testing']
Python List Comprehension
Python List Comprehension A quick and easy introduction to Python list comprehension Photo by Amanda Jones on Unsplash Introduction In this article, I want to show you a very useful feature of the Python language: list comprehension. After reading, you will be able to write your code more efficiently and beautifully. A list comprehension is an elegant way to define and create lists based on existing lists or other iterable objects. Examples Basic usage Let’s imagine that we need to create the list of squared numbers from 0 to 10. If you are not familiar with list comprehensions, you will probably do it like in the code below. But wait, we can use the power of the Python language and do it more elegantly. If you check the arr values at the end of both programs, you will see the same values. But with the list comprehension, the code looks more compact and is easier to read. Structure The basic structure of this operation has the following form: [some_processed_variable for some_variable in iterable_object] If condition Let’s go deeper and add additional functionality to this structure. Suppose we need to get a list of only the odd numbers. We can do it like in the code below, adding an “if” condition to the end of the structure (of course, we could also do it by setting the range arguments, like range(1, 10, 2)). Nested loops We can also write nested loops in a list comprehension. For this, we write two loops one after the other; the second one is the nested loop. Let’s check it with a common example: creating a card deck where each card has a suit and a value. As you can see, we have two loops: the main one, which iterates over the suits, and the nested one, which iterates over the list of values. We can write more nested loops, but if it becomes difficult to read and understand, it is best not to. Another good example of a nested loop with a comprehension is creating a multiplication table. Below we create a list of strings in the format “number1 x number2 = number1 * number2”, where both numbers run from 1 to 9. Real example I met the final example in one of the Kaggle competitions. There is a table with a lot of columns, and some of them have the format “x<some_number>”. I wanted to get all these column names for future processing, and with a list comprehension it was easy. There is a small example of this task and its solution in the next code cell. Conclusions In this article, we focused on the list comprehension mechanism in Python. The main takeaway is that it is easy to use and easy to read. But we need to be careful about adding more nested loops and complicated conditionals to the structure, because it can quickly become messy. Besides list comprehensions, Python also has dictionary comprehensions and set comprehensions. The main difference is that they use {} brackets instead of [], and for a dictionary comprehension we need to specify a key together with each value.
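Since the article keeps pointing at code cells, here is a minimal sketch of the snippets it describes; the variable names (arr, deck, columns, and so on) are illustrative choices rather than the author’s original code:

```python
# Basic usage: squares of 0..10, first with a plain loop, then with a comprehension.
arr = []
for i in range(11):
    arr.append(i ** 2)

arr = [i ** 2 for i in range(11)]

# If condition: keep only the odd numbers (equivalent to range(1, 10, 2)).
odds = [i for i in range(10) if i % 2 == 1]

# Nested loops: build a card deck where each card combines a suit and a value.
suits = ["hearts", "diamonds", "clubs", "spades"]
values = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
deck = [(suit, value) for suit in suits for value in values]

# Multiplication table as strings "number1 x number2 = result" for numbers 1..9.
table = [f"{a} x {b} = {a * b}" for a in range(1, 10) for b in range(1, 10)]

# Kaggle-style task: collect the column names that look like "x<some_number>".
columns = ["id", "x1", "x2", "x15", "target"]
x_columns = [name for name in columns if name.startswith("x") and name[1:].isdigit()]
print(x_columns)  # ['x1', 'x2', 'x15']
```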
https://medium.com/quick-code/python-list-comprehension-b0dd894f776a
['Yaroslav Isaienkov']
2020-12-21 23:55:14.761000+00:00
['Python', 'Python3', 'Python List Comprehension', 'Tutorial', 'Python Programming']
“What’s Your Current Salary?” Is a Red Flag That You Don’t Want to Work There
The Dreaded Salary Question Here is the context: You are on the initial phone screen call with someone from human resources. For the past 30 minutes, you have been trying to explain succinctly what your life has been about in the last decade. Maybe you have been teased with tricky questions on how Git works internally, and you have done your best to answer correctly. You are starting to feel exhausted, but fortunately, the call seems to be coming to an end. Then suddenly, the conversation moves to one last thing: “And by the way, where are you right now in terms of salary and what are your salary expectations if you make this move?” You may feel the rush to address the second part of the question and skip the first part, but don’t. Let’s focus on the first part: “Where are you right now in terms of salary?” That question is not legitimate.
https://medium.com/better-programming/what-is-your-current-salary-is-a-red-flag-that-you-dont-want-to-work-there-8a4f19a91bf
['Jean-Michel Fayard']
2020-10-28 17:18:08.629000+00:00
['Startup', 'Money', 'Careers', 'Work', 'Programming']
Are You Real or Bogus?
When one’s core essence is not nourished and is in fact repudiated and put in exile, we desperately try to manufacture and grasp onto what is considered acceptable so as to feel adequate and ‘normal’. The fear of being real is accompanied by the fear of being marginalized. So we lie to ourselves, not even knowing we are living a lie as we are not even certain as to what we feel. We pretend and feign indifference to fit in. To plod on. That is pretty much how I lived my life, until I embarked on a process of reclaiming my true self. It was the push of despair and the pull of hope that finally ignited the pursuit of self. Getting a place to live, a job (actually three), enrolling in college and starting therapy were critical steps. If I was to have a ‘self’ to even save, I had to be willing to responsibly engage in life and learn the proverbial ropes. I needed to take radical risks to heal and discover who I was. This was an especially grueling undertaking at first. I could barely allow for my therapist to help me formulate a cohesive narrative of my life. A chronological assessment of one’s history and memories is a trajectory to uncovering one’s constitution and character. By examining our experiences and our responses we begin to identify patterns and feelings, exhume trapped rage and grief, detect preferences, reveal strengths, weaknesses, and longings. Initially my therapist’s urging to tell my story resulted in anxious performing. I didn’t know how to receive his care. Slowly I approached my suffering and allowed my therapist in. I allowed for the mirroring I never got and I learned who I was and who I aspired to be beyond the superficial. My personal identity with all the trappings evolves over time, but what is closer to the core of my essence is inviolate and immutable. That unique me that consists of my humanity, my intellect, my essence is the guiding force behind who I am. That is the self that knows what I want to be loved for, what my worth is, my insecurities, how I feel, what my values are and what my litany of preferences are. To live in that place and own those truths regardless of trends, what is popular and what will set me apart, is what authenticity is. Having a core self, I am now able to gauge how to use my discernment and sense of discrimination to weed out those who are bogus and those who are coming from a genuine place. Exhuming my real self unearthed an invaluable treasure.
https://medium.com/the-ascent/are-you-real-or-bogus-2755a3d2fc4c
['Rev. Sheri Heller']
2020-01-13 13:21:01.282000+00:00
['Identity', 'Mental Health', 'Self', 'Self Improvement', 'Authenticity']
Medium Feels Like An Empty Fridge Lately and it Makes Me Sad
We’re not supposed to write about Medium. They don’t like it. At least, that’s what the curation guide says. But I don’t know how else to express my frustration with some of the recent changes. Some small consolation — I’m not writing to tell you how to make money. lol. Enough of my brethren doing that. Don’t need to add another voice shouting into the void and pretending changes haven’t happened here. I don’t know what experience you’re having, but mine is some combination of doomscrolling and how to make money on Medium. It sucks. You know what doomscrolling is, right? It’s the act of consuming an endless procession of negative online news, to the detriment of the scroller’s mental wellness. Donald Trump, Donald Trump, Covid, more Covid, More Trump… And if it’s not that, then it’s how to make money on Medium. How much I earned on Medium, my first 2 months (6 months, 2 years) on Medium, why you aren’t getting views on Medium. That’s Medium in a nutshell, lately. Doomscrolling and makin’ bank on Medium. Oh. And self help. Because we all know that with millions unemployed because of a pandemic, if someone is struggling it’s their own damn fault, right? It’s not a widespread systemic problem at all, right? Just go be like Jeff Bezos. Anyone can start a corporation, you know. Pull yourself up by your bootstraps. And by the way, here’s how to be more likeable. It’s f — king exhausting. Pardon my French. Which is bull. If I was French I’d scream Tabernac! Loading the homepage lately is like the day before grocery shopping when you keep opening the fridge hoping something good will magically appear, but it never freaking does. Yup. Still empty. Damn. 1. Who the hell picked the top of the page content? If it was full of content from people I read and follow, that might work. But it’s not. Usually, it’s at least half “how to make money on medium” posts. Personally I find those to be disingenuous and demotivational. Seriously. Go to INC or any print publication site. Do you go to Forbes and read about how to make money on Forbes? I mean, even the Huffington Post doesn’t do that. No how to make money on Huffpo on their site. No. That’s unique to Medium. Look, I’m not judging. We all have our preferences. Write that stuff if you want. I realize it gets clicks. Like bringing candy to the playground. Read it if you want. I don’t care. But I don’t want to read it. So why is Medium shoving those in my face? Shouldn’t preference count for something? I mean, that’s what man invented algorithms for, isn’t it? I’ve tried everything I can think of to get rid of those. I’ve unfollowed entire publications and they still appear. Why is that? I mean, I guess I can start muting people. I don’t like doing that. It seems childish. Unprofessional. But it seems like the only option you’ve left us because your algorithm isn’t the brightest light on the dash. Moving on down the page… 2. Trending articles might be. Or not. Trending stories are supposed to be what they’re called. Right? The stories that are picking up momentum. And sometimes they are. Like that story by Chrissy Teigen or the one by Barack Obama. Okay. Fair enough. Those were probably trending, for real. But what about that “trending” story published 24 hours ago with 50 claps and no comments. How is that trending? According to who? I’ve written lots of stories that bombed and only got 50 claps 24 hours later. I can promise you, they didn’t appear in the trending section. So what’s that about?
After watching awhile, seems to me trending might function like a curation tag. As if editors get to push a story by shoving it into the trending section. Look — if editors want to push stories, that’s their prerogative. It’s your site. But don’t call it trending. Words mean things for a reason. As David Ogilvy famously said — the consumer is not a moron. We are the consumer. We are not morons. 3. Those 8 little faces are starting to tick me off I liked them, at first. Because (at first) I was under the delusion that everyone I follow would rotate there. But nope. Doesn’t work that way. Yes, I did figure out that you have to click on one or it won’t change. But even still — it’s still predominantly the same faces. I could count about a dozen or two people that rotate in that spot. Consistently. I can think of at least half a dozen people I read that have never been there. I have to go looking for them. Some of the people I follow write a lot. And they’ve never been there. Ever. Like — not once. Why is that? But I promise you, the minute Ev has a new post, he WILL appear there. Even if other people I read have written 3 new posts that did not appear there. Which begs the question. Is it weighted somehow? Or how does that work? 4. Where are the people I come here to read? Yes, I know that thing Medium said about “how” people read. They said “most” people don’t read specific writers. They just want great titles. I don’t believe that for a minute. Know why? We wouldn’t “follow” people if we didn’t care who the writer was. If all we cared about was hot headlines, would we even be at Medium? Or would Fox News or CNN do, depending on which way we lean. Or hell, BuzzFeed. They’ve even cleaned up their act a little. Medium was different. Being able to follow the writers we like was what made Medium different. Know what I mean? When I finally started searching names of people I haven’t seen in a while, why did I discover that some of them had written several articles that didn’t ever appear on my homepage or my feed? Several! Where are the people I come here to read? Should I actually need to go to the list of people I follow and scroll through to see who I haven’t heard boo from lately? You’ve talked about a new “relational” model, but I don’t know what that means because what I’m seeing doesn’t seem terribly relational. Could you explain how it works? Or fix it if it doesn’t? Why have views tanked for so many people? I know it’s not just me. I follow a bunch of Medium groups and even though I don’t post my links in the groups, I do see the posts in my FB feed. So many people saying their views have tanked. So many. There’s even talk that Medium is “suppressing” views for some writers because they’re trying to build their “brand.” Is that true? No freaking clue. I sure hope not. But the radio silence isn’t good. You know what a brand is, right? It’s what people say about you when you’re not in the room. Go ahead, ask me what McDonald’s is. Here’s the part that slays me. Most people like to do good. If they know what that means. Know what the number one management tip is? Catch people doing something right and tell them. This is not a comprehensive list I’ve seen tons of those. I won’t even discuss the horrible dark backgrounds and MySpace-inspired color schemes that make me shut the tab instantly. I already wrote about those. Point is, seems to me I’m not the only one that’s frustrated. Know where the frustration comes from? Lack of communication. Isn’t that always the problem?
Look, you get to do whatever you want with the site. It’s your site. But communication goes a real long way. Talk to us. Tell us what you’re looking for so we can try to deliver more of that. Tell us what you don’t want so we can try to deliver less of that. Not everyone will listen. Some people have never read the curation guide. Some people plagiarize other people. But they’re the minority. Most of us? We just want an enjoyable experience here. To read the stuff we enjoy and to be read by the people who enjoy our writing in return. Should that really be so hard?
https://medium.com/linda-caroll/medium-feels-like-an-empty-fridge-lately-and-it-makes-me-sad-96f654a6430b
['Linda Caroll']
2020-11-20 07:28:18.049000+00:00
['Self', 'Advice', 'Opinion', 'Success', 'Writing']
My New Apple Watch is a Privacy Nightmare
Image courtesy of Techcrunch.com The Apple Watch comes equipped with some labor-saving apps that are designed to make life easier when using it with your paired iPhone. I can use my wrist to answer phone calls like a secret agent, I can take notes, talk to Siri, and even ping my phone when I’ve forgotten where I put it. For those of us who remember the 1990 movie with Warren Beatty and Madonna, or even the classic comic strip, there’s even a walkie-talkie app so we can live out our Dick Tracy dreams. It also comes equipped with two apps that are going to cause a lot of people headaches: a remote shutter release, and a way to activate the voice recorder remotely. I work in an industry where we are exposed to reams of confidential information, much of which we can’t even share with other parts of my organization. This combination of the Apple Watch and iPhone makes surveillance and espionage not only unobtrusive, but also as easy as checking the time. These are the result of me “checking my watch” in a public parking lot. Taken from my iPhone in my belt holster using the Apple Watch app. (Author) From the same position, this time using the zoom function. (Author) As far as anyone could tell, I was checking my watch…maybe answering a message, or reading a notification. There was no indication that I was taking a photograph, not even a shutter release noise. That school in the background? It’s actually almost an entire city block away, across a football field-sized grassy area. (Author) When it’s combined with the iPhone 11 Pro Max’s impressive zoom features, you can get clear pictures without ever moving your phone off your hip, or out of your pocket. When it’s combined with the ability to start a sound recording from my wrist, I could conceivably record an entire walk-through of a facility to be reviewed at a later point. Of course, with the ability to slip the bands off, it becomes really easy to hide the watch face in a pocket or even in a hand and take photos that way. What makes it a voyeur’s dream? The phone (and camera) could be dozens of feet away, and the remote shutter release still works. Makes sense, right? Except that if the phone is set on silent, it doesn’t make any noise. Still not getting it? Voyeur + Apple Watch + iPhone = this: our (empty) bathroom at home…taken while I was on the other side of the house. (Author) Consider the thought of a silent phone taking photos after being placed in the ceiling of a dressing room or bathroom. It gets worse. I forgot to mention, the Apple Watch app provides a live view of what the camera is seeing, even if you don’t take a picture. Anyone with a watch connected to a phone can monitor the room where the phone is while they’re dozens of feet away. So there’s no evidence to prove what happened if the phone itself isn’t found in an incriminating place.
https://medium.com/swlh/my-new-apple-watch-is-a-privacy-nightmare-fcf6c84662c5
['Matthew Woodall']
2019-12-05 15:39:51.054000+00:00
['Privacy', 'Technology', 'iPhone', 'Apple', 'Apple Watch']
We Must Fight Face Surveillance to Protect Black Lives
AJL Family, We are holding space to grieve, to mourn, and we are also full of righteous anger. The murders of George Floyd, Breonna Taylor, Ahmaud Arbery, Nina Pop, and Tony McDade are only the latest in what feels like an endless chain of police and vigilante violence against Black men, women, children, trans folks, and nonbinary people. Lines from Joy Buolamwini’s poem “Pressure on the Neck” At the same time, we recognize the intense power and possibility in the massive wave of multiracial mobilizations that is sweeping the country, even in the midst of the pandemic and in the face of brutal police repression. People everywhere are organizing to demand structural transformation, investment in Black communities, and deep and meaningful changes to policing in the United States. We know that criminal justice reforms alone are not enough to transform our society after hundreds of years of slavery, segregation, overpolicing and mass incarceration, disinvestment, displacement, and cultural, economic, and political erasure, but they are an important piece of the puzzle. Yet even now, as we mobilize for Black lives, local, state, and federal police — as well as other agencies such as the Drug Enforcement Administration (DEA), Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), and various U.S. military forces — are deploying a wide range of surveillance technologies to collect, share, and analyze information about protesters. Many people understand that mobile phones double as surveillance tools. But law enforcement agencies are also gathering photo and video documentation of protesters by filming protesters with body cameras, smartphones, video cameras, and drones; using both government and commercial software systems to scrape social media for photos and videos; gathering footage from CCTV systems; gathering footage from media coverage; and more. Many police departments, including the Minneapolis police, then analyze that footage — both in real time and in the days and weeks after protests — and use facial recognition technology to attempt to identify individuals. Since many of these systems have demonstrated racial bias with lower performance on darker skin, the burden of these harms will once again fall disproportionately on Black people. Face surveillance, the use of facial recognition technology for surveillance, thus gives the police a powerful tool that amplifies the targeting of Black lives. In addition, performance tests, including the most recent gold-standard government study by the National Institute of Standards and Technology, have found that many of these systems perform poorly on Black faces, echoing earlier findings from the Algorithmic Justice League. So not only are Black lives more subject to unwarranted, rights-violating surveillance, they are also more subject to false identification, giving the government new tools to target and misidentify individuals in connection with protest-related incidents. We are in the midst of an uprising of historic magnitude, with hundreds of thousands of people already participating and potentially millions taking part in the days and weeks to come. At these scales, even small error rates can result in large numbers of people mistakenly flagged and targeted for arrest.
Since many of these systems have demonstrated racial bias with lower performance on darker skin, the burden of these harms will once again fall disproportionately on Black people, further compounding the problem of racist policing practices and a deeply flawed and harmful criminal justice system. Police are deploying these increasingly sophisticated surveillance systems against protesters with little to no accountability and too often in violation of our civil and human rights, including First Amendment freedom of expression, association, and assembly rights. These harms disproportionately impact the Black community based on historical patterns of discrimination and overpolicing by law enforcement. Such patterns already lead to more frequent stops and arrests on a lesser standard of reasonable suspicion, compounded with other forms of discrimination in the criminal justice system (by judges, prosecutors, and risk assessment tools, among others) against Black lives that further lead to higher incarceration rates, lost job and educational opportunities, loss of livelihood, and loss of life. The Algorithmic Justice League therefore urges that we rein in government surveillance of Black communities in general and police use of face surveillance technology specifically. We have gathered resources to help organizers include demands to halt police use of face surveillance technology within broader campaigns for racial justice at municipal, state, and federal levels. We are calling on every community, organization, and politician who is serious about racial justice to specifically include a halt on police use of face surveillance technology, among other broader and sorely needed transformational policies. This may seem daunting, but the tide is rising. In cities and states across the country, people have organized to successfully block the rollout of face surveillance technology: On the municipal level, San Francisco became the first city to ban government use of facial recognition technology in 2019 with Oakland and Berkeley following suit. In Massachusetts, Somerville, Brookline, Northampton, Cambridge, and Springfield have successfully halted government use of this technology. Next week, Boston is poised to join the list. This progression demonstrates the domino effect of successful advocacy in one city to influence change in neighboring communities. On the state level, the state of California has enacted a three-year moratorium prohibiting police from using facial recognition with body cameras, which went into effect in January 2020. The state of New York is currently considering similar legislation prohibiting facial recognition in connection with officer cameras and has also introduced legislation proposing a moratorium on all law enforcement use. The state of Massachusetts is actively considering a broader moratorium on all government use of facial recognition — which would cover police use in addition to other state agencies and officials. On the federal level, since May 2019 there have been three public hearings on facial recognition technology (linked below) that have taken place in front of the House Committee on Oversight and Reform to examine how facial recognition technology impacts our rights and emphasize its discriminatory impact on Black lives. At the first hearing, Neema Singh of the American Civil Liberties Union (ACLU), Claire Garvie of the Center on Privacy and Technology at Georgetown, David A. 
Clarke School of Law professor Andrew Ferguson, and former president of the National Organization of Black Law Enforcement Executives Dr. Cedric Alexander all testified along with Algorithmic Justice League founder Joy Buolamwini, who stated: These tools are too powerful, and the potential for grave shortcomings, including extreme demographic and phenotypic bias, is clear. We cannot afford to allow government agencies to adopt these tools and begin making decisions based on their outputs today and figure out later how to rein in misuses and abuses. Following this series of hearings, Congress is currently considering several initiatives that would limit the use of facial recognition technology, including legislation that would place limits on the use of face surveillance by federal law enforcement agencies. Now is the time to build on the momentum of these successful initiatives. Given the extent to which police power has been militarized and systematically weaponized against Black lives, it is more imperative than ever that we ensure that law enforcement cannot deploy face surveillance technology to suppress protests or infringe on civil rights and liberties. If you have a face, you have a place in this conversation. The people have a voice and a choice, and we choose to live in a society where your hue is not a cue for the dismissal of your humanity. We choose to live in a society that rejects suppressive surveillance. We choose to beat the drum for justice in solidarity with all who value Black lives. — Joy Buolamwini, Aaina Agarwal, Nicole Hughes, and Sasha Costanza-Chock for the Algorithmic Justice League Press contact: [email protected] Resources Educational materials Model legislation Congressional hearings on facial recognition technology Ongoing campaigns Press Pause on Face Surveillance campaign by the ACLU of Massachusetts Stop Facial Recognition on Campus campaign by Fight for the Future and Students for Sensible Drug Policies Organizing toolkits
https://onezero.medium.com/we-must-fight-face-surveillance-to-protect-black-lives-5ffcd0b4c28a
['Joy Buolamwini']
2020-06-04 15:07:35.858000+00:00
['Surveillance', 'Artificial Intelligence', 'Police Brutality', 'Facial Recognition', 'Racism']
Let Me Tell You
Let Me Tell You A nature force Exemplary like the sky, my beliefs wanting you to realize this soul and body is not meant to turn into ash of disbelief imagination attempting and admiring power to embrace forces I carry forward, to thirsty souls who need my light — support for I am meant to be present, dedicated spreading kindness just to optimize, and justify every breath, drop of blood — unrestrained
https://medium.com/haiku-hub/let-me-tell-you-4b855c290368
['Ashwini Dodani']
2020-02-05 14:19:37.632000+00:00
['Mental Health', 'Haiku', 'Nature', 'Life Lessons', 'Poetry']
The Ultimate Guide to Optimize Google Ads Search Campaigns
Do you reckon that channeling your digital marketing efforts towards just one of the owned, paid or earned media isn’t sufficient to succeed? Firstly, what are these owned, paid or earned media? To explain in brief, these are the different mediums available for digital marketing. Earned media is about social sharing, reviews, mentions, reposts, etc. Owned media is about your website, blogs, and other properties that you own and that are distinctive to your brand. Lastly, paid media is about advertising (PPC, display, etc.), retargeting, paid content promotions, etc. You cannot just put 100% of your effort into optimizing your website for SEO and think you have done enough to reach your goals. The same applies when you just concentrate on getting your Ads up and running, ignoring the other factors. You succeed and reach your goals only when you concentrate on using the appropriate channels available in each of the owned, paid and earned media for your website. Now, let’s dig deeper into one of the channels of paid media: advertising, the advertisement of your products and services. Can there be a better platform than the Google search engine itself to advertise your products and services? Definitely not. You can assess for yourself how big a platform this is by going through these facts about the Google search engine. As per SEO Tribunal, Google, the most popular search engine, receives over 63,000 searches per second on any given day. Also, the average person performs 3–4 searches every single day. HubSpot mentions that 85% of mobile traffic is captured by Google. I hope you are now getting a picture of how huge this opportunity is: the opportunity to present your business in front of your target audience at the right time and to convert them into valuable customers. As you know, Google Ads is where it all starts, but is it enough to just create the Ads here, keep your fingers crossed and expect business growth? No, not enough. This is where optimization comes into the picture. For effective utilization of the marketing budget, optimization of Google Ads search campaigns is crucial for higher Clickthrough Rates (CTR), a lower Cost Per Click (CPC) and a higher Return On Investment (ROI). Here’s an ultimate guide to optimizing Google Ads search campaigns. 1. Keywords Optimization Keywords are the key elements of your Ads. Knowing the right keywords can get you a higher CTR, which is one of the factors considered for the Quality Score (QS). A poor Quality Score can damage your CPC and the performance of your Ads. Until you put relevant keywords in your Ads and run them, you never know which of them are going to out-perform or under-perform. Once your Ads go live, here are a few things you can look for, and the appropriate actions to take, to optimize your keywords. a. Add Negative Keywords Consider a scenario: a newly established baking products eCommerce company is trying to reach a larger audience through search Ad campaigns. They are using broad match keywords for maximum audience reach. After their Ads start running, they see that they are paying for clicks that aren’t adding any value to their business. For example, they used keywords like “baking”, “online baking”, “baking products”. Since it’s a broad match, the Ads kept showing when people searched for keywords like “online baking classes”, “baking recipes”, etc. Google Ads provides a way to ignore these keywords by adding them to the negative keywords list.
This is a great way to enhance relevancy and, in turn, the Quality Score, while at the same time reaching large audiences. To add negative keywords, log in to your Google AdWords account. Click on the campaign you want to edit. On the left-hand side panel, click on the “Keywords” link. Switch the tab to “Negative Keywords”. The sample screen is as shown below. b. Research and Add New Keywords Keywords are ever-evolving, with a variety of people searching online. A person looking to buy flowers can search “flower shop nearby”, whereas another person with the same intent can search for something like “buy flowers online”. If you are a florist with a brick-and-mortar store who also offers an online service, both of these searchers are prospective customers for your business. There may be hundreds and thousands of such customers looking for the same thing on special days. Constantly keep researching what exact phrases users are using, and try to incorporate the relevant ones into your search Ad campaign keywords list. c. Remove Under-Performing Keywords Reviewing keywords frequently does more good than harm. You can look for keywords that are not performing up to expectations. The reason can be anything: either the keyword is considered irrelevant, or there is a shift in the trend, or it may simply be that users are not using the word anymore. It is vital to remove such keywords as they impact the Quality Score. Go to the keywords list of your search Ad Campaign and delete them. 2. Display URL Optimization A display URL is the URL that your target audience will see when your search Ad is displayed. The display URL gives prospective customers an idea of what they will be offered when they visit the landing page, so optimizing it is very important. The inclusion of keywords in the display URL is a good approach, as the keyword gets bolded by the search engine if it matches the search phrase, highlighting the relevancy. An example display URL is as below: https://www.domainname.com/path1/path2 In the above URL, path1 and path2 can be used to include the keywords. Another way of optimizing the display URL is to make the URL more readable by including hyphens between the words. Display URLs can also be optimized to include a call to action. This encourages or compels the target audience to take action. You can add call-to-action words like BookNow, JoinNow, RegisterNow, etc. A screenshot below shows the same. 3. Campaign/Ad Group Structure Optimization You can optimize the campaign or the Ad group structure by grouping similarly-themed Ads together. By doing this you can research and add the most relevant keywords to the corresponding Ad groups rather than mixing up all the keywords together, affecting the relevancy and the performance of the Ads. Suppose you offer digital marketing services and you run search Ads often; it may happen that you miss out on the keywords that users are using to look for a specific service. If the user is looking for SEO services, then even though you offer the service, your Ad won’t be pulled up when the query runs because your keyword list is missing the phrase, and you miss out on a lot of business. If you plan your campaign properly and create separate Ad Groups for each service offering, the chances of missing out are reduced considerably. For example, one Ad Group focusing on advertisement services, another on SEO services, a third group on content marketing services, and so on. 4. Optimize Ads for Ad Extensions Ad Extensions are a great addition to the Ads.
On inclusion, these Ad Extensions not only maximize the performance of search Ads by providing useful information but also help improve the Quality Score by increasing the CTR. Also, the addition of these Ad Extensions doesn’t cost you anything extra; you will still be paying for the “Click”. When it’s a win-win situation, then why not go for it? However, the addition of Ad Extensions doesn’t guarantee that they will always be shown along with the Ads. It depends on factors like the Ad Rank and Ad position. Here are some of the major Ad Extensions. Call Extensions By adding this Ad Extension, you are giving your prospective customers an option to call you by displaying your business contact number along with the text Ad. You can schedule this particular extension to appear during your business hours. You can also get a detailed report of the number of calls your Ad campaign has generated, helping you to evaluate ROI. An example of a Call Extension is as shown below: Callout Extensions Callout Extensions allow you to add more information in the form of text. This additional space can be used to highlight the unique features that the business offers and can be thought of as bullet points. An example of a Callout Extension is as shown below: Sitelink Extensions Using this Extension, the user can be provided with multiple links that take them to other important pages on your website. This Extension also helps you understand which pages the customers are most interested in. An example of a Sitelink Extension is as shown below: Structured Snippet Extensions This Extension allows you to specify various aspects of your services. The snippet has two parts: a header and a list of comma-separated values. An example of a Structured Snippet Extension is as shown below: Review Extensions This particular Extension allows you to incorporate your business reviews in your Search Ad. By looking at the reviews, customers get an idea of how good your service offerings are, which increases the chances of conversion drastically. An example of a Review Extension is as shown below: Location Extensions This Extension displays the address of your business location by linking your Ad to your Google My Business page. This information helps customers who belong to the same location and have the intent to visit the store/office. In order to use this extension you need to register your business with Google My Business. An example of a Location Extension is as shown below: 5. Bid Optimization To start with bid optimization, you need to have an understanding of which keywords are getting you conversions and how much you can bid on each keyword. There is a bit of math involved here, but there is no need to be intimidated. If you are new to AdWords and you don’t yet have reports to analyze for bid optimization, you can start by bidding aggressively. This increases your opportunity to get the prime spots and hence grabs you more clicks and conversions. Once you start getting the conversion data, you can continue with the process as described below. Firstly, you have to have a goal that describes the conversion. The goal can be a phone call, filling in a subscription form, purchasing a product, etc. A conversion tracking system should be in place to understand which of the keywords are causing conversions. For example, say you decide to invest $10 per conversion, as a conversion gets you a profit of $100 (it seems fair enough to spend 10% of the profit on marketing).
And you have analyzed that your keyword has a conversion rate of 10%. To calculate the maximum CPC, multiply the conversion rate by the amount you have decided to invest per conversion. In our case, the CPC would be $1 (10% of $10). Depending on the keywords, the conversion rates vary, and so does the CPC for each keyword. Calculate the CPC for high-performing keywords and use them accordingly for maximum ROI and conversion rate optimization (a short worked sketch of this calculation appears at the end of this article). This helps you understand how much money you can spend per click to reach your goal, considering your budget. You should constantly keep monitoring the performance and analyzing the data to help you come up with the most effective bids. This is an attention-demanding and time-consuming process, but who wouldn’t want to spend the money wisely and get optimal results out of it? 6. Optimize Website and Landing Pages If an Ad is doing well in getting clicks and the corresponding landing page in getting conversions, don’t just sit back and relax. You should be consistent and keep exploring new ideas to increase your ROI, and not be satisfied if your Ad is just doing fine. Follow best practices for landing pages to design more than one landing page and assign them to different Ad groups. Compare them and work towards the one which is getting you more conversions. When visitors land on the page, make sure you have all the information that the visitor is looking for on the page. Relevancy is very important. Keep more images to make it more attractive and add videos that explain your services. If you are asking the user to fill out a form, don’t make it too long and complex; you can miss out on a conversion. Also, show visitors the offers that they were promised in the Ad, if any. 7. Optimize Ads for Geographics & Demographics Google Ads provides you with options for choosing the target locations and target audiences for your business. What is the point of spending money on Ads if they are not reaching the right places and the right audiences? When we talk about location targeting, you have the liberty to choose countries, states, cities, location groups, a radius around a location, etc. Think of a real estate business, which is all about locations! What’s the point in showing an Ad to people who don’t belong to the location of your business? Picking the right areas will help you reach the right audience, hence increasing your ROI. To add locations, you have to log in to your account. Click on the campaign you want to add a location to. On the left-hand side panel, click on the “Locations” link. Below is a sample screenshot. Similar to geographics, demographics are also an important aspect of targeting the audience. You can choose the age range, gender, parental status, household income, etc. The inclusion of demographics guarantees that your paid search Ad is shown only to those visitors who fall within the demographic settings you have chosen for your Ads. Consider a dance training academy exclusively for women, running a search Ad campaign. In this scenario, showing the Ad to women makes more sense than showing it to everyone. To add demographics to your Ads, log in to your account. Click on the Campaign, choose the Ad group. On the left-hand side panel, click on demographics. You can choose the options that best describe the target audiences for your business. The sample screenshot is as shown below. 8.
Schedule Search Ad Campaigns Scheduling the Search Ad Campaigns means making sure that your Ad shows up when the target audiences are most likely to visit the website. Scheduling increases the CTR as well as reducing the wastage of money by not displaying the Ads after business hours. For example, an at-home beauty services company need not show its Ad for booking calls after business hours. Also, once your Ads start running, you can analyze the report to understand on which days of the week or at which times of the day the maximum conversions are happening. If you are able to find a pattern, you can go ahead and increase the bids for those schedules for the best results. To schedule your Ad campaign, log in to your Google AdWords account. Click on the campaign you want to schedule. On the left-hand side panel, click on the “Ad schedule” link as shown below. Also, post scheduling, below is a screenshot of a bid adjustment with a 10% increase on the bid on Monday. You have to click on the “Bid adj” column beside the day where you want to increase or decrease the bid based on the report analysis. 9. Increase Quality Score Quality Score is Google’s way of indicating the quality and relevance of the search Ads and keywords. Quality Score is used to determine the CPC and the Ad rank. Quality Score is reported between 1–10, 1 being the lowest score and 10 being the highest. Having a thorough understanding of the factors that affect the Quality Score is important because it directly influences the cost and the position of the Ad, which are both critical for a search Ad Campaign. These factors include CTR, relevant keywords and Ad text in the Ad group, and the relevance of the landing page. Work towards optimizing each of these factors to improve the Quality Score. To increase the CTR, write unique, creative and compelling text. Make use of the Ad Extensions, and make sure that the use of characters in the Ad adheres to Google’s guidelines. Likewise, optimize your keywords. Keep researching and revising the keywords list. Show relevancy between your Ad content and the landing pages’ content by using the relevant keywords. Handling these factors appropriately will automatically have a positive effect on the Quality Score. 10. Customize Ads to Include Countdown Timer Customizing the search Ads to include a countdown timer is another great way to optimize the search Ads. The thought of missing out on good offers compels users to click the Ads and buy the products or services; it’s common human behavior. That’s the whole point of running paid search campaigns: to generate business. You can create this sense of urgency by adding a countdown timer using Google Ads. Once you are done adding the title and description to your Ad, you can insert the “{” symbol to include the timer. Once you type “{”, you will see an option to include the countdown timer as shown below. You also need to make sure that when the user clicks on the countdown Ad, the landing page talks about the same offer. This will increase the chances of conversion, as you are giving the user what they were promised. If the Ad and the landing pages show no relation between them, then that might cut down on the CTR significantly, impacting the Quality Score badly. Conclusion When you come across the word optimization in Ad Campaigns, you need to keep in mind that this is an ongoing process. There will always be new stuff emerging every now and then, be it new keywords, new strategies, new algorithms, and whatnot.
You cannot put a period to your optimization efforts. Having an understanding of the above points will give you an idea of how frequently each of them needs attention and optimization. To maximize the Return On Investment (ROI), you need to make sure that you put in constant effort to work towards the optimization of your search Ad Campaigns.
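Returning to the bid-optimization arithmetic in section 5, here is a small sketch of the calculation; the function name and the example numbers are simply the ones used in that section:

```python
def max_cpc(target_cost_per_conversion: float, conversion_rate: float) -> float:
    """Maximum cost per click you can afford for a keyword.

    target_cost_per_conversion: what you are willing to spend to win one conversion.
    conversion_rate: historical conversions per click for the keyword (0.10 = 10%).
    """
    return target_cost_per_conversion * conversion_rate

# The article's example: $10 per conversion at a 10% conversion rate -> $1 per click.
print(max_cpc(10.0, 0.10))  # 1.0
```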
https://medium.com/digital-ready-official/the-ultimate-guide-to-optimize-google-ads-search-campaigns-fcec88c14045
['Chiranjeevi Maddala']
2020-01-27 13:29:57.921000+00:00
['PPC', 'Marketing', 'Advertising']
How to Protect Your Mental Health During the COVID-19?
This project won the 2nd Prize and the Most Valuable Dataset award in the UC Davis MSBA COVID-19 Challenge. This is a collaborative work of Yijun Huang, Linyan Dai, Kayla Zou, and Haonan Wang. The step-by-step guidance of the Python coding has been shared on GitHub: https://github.com/jennnh/4G_COVID-19_Challenge Overview The COVID-19 pandemic has brought a great threat to people’s physical health and well-being. However, many mental health issues have arisen at the same time without being noticed. In the USA, every single person has been faced with a mental health issue in this COVID crisis, caused by jobs disappearing, businesses shutting down, and separation from friends and family. What’s worse, former congressman Patrick J Kennedy says calls at suicide hotlines have increased by 800 percent as resources shift to COVID-19 relief. Given this, our project aims to make detailed suggestions about activities to release people’s emotional strain during the quarantine period. In detail, our team first scraped Tweets with the hashtag #StayHome on Twitter across various regions and dates as the data source. Second, we leveraged sentiment analysis and word clouds to understand and visualize people’s emotions. Third, we found 13 activity-related topics using topic modeling and cross-analyzed them with other columns to identify factors contributing to people’s positive and negative emotions. In the end, we made suggestions to people about what activities they should participate in and what activities they should avoid to stay happy. Web Scraping First of all, we needed to scrape data from Twitter. We didn’t use the free Twitter API because it only provided Tweets from the last 7 days, but we needed Tweets going back a month. Therefore, we used a package called “GetOldTweets3”, which provided enough Tweets for our analysis. We wanted to select some representative cities to conduct our analysis. To that end, we selected 10 cities that have both a large population and a large number of confirmed cases, based on data from Johns Hopkins University on April 13th, 2020. The 10 cities are New York City (New York), Boston (Massachusetts), Chicago (Illinois), Detroit (Michigan), Los Angeles (California), Houston (Texas), Newark (New Jersey), Miami (Florida), Philadelphia (Pennsylvania), and New Orleans (Louisiana). Afterward, we scraped the Tweets that were posted near those cities. For each Tweet, we collected the user’s name, the text, the date and time of the Tweet, the number of retweets, the number of favorites, and the hashtags. After scraping the data, we also did stratified sampling based on the proportion of new users in each city. New users are defined by whether it is the first time that user has posted about #StayHome. Exploratory Data Analysis (EDA) After getting the dataset, we started to do some exploratory analysis. The dataset that we collected contains Tweets from March 5th, 2020 to April 11th, 2020, and it includes 11,631 Tweets and 6,741 unique users. The graph below shows the distribution of the number of Tweets in different cities across time. The Number of Tweets in Different Cities across Time According to the graph, in most cities there is a sudden increase in the number of Tweets around late March. This is when most cities announced the shelter-in-place order. For example, the grey area increased greatly after the blue line, when New York announced the order.
This means that the shelter-in-place announcements increased people’s awareness of and attention to COVID-19, so people started to attach great importance to “stay home” and talked about it more on Twitter. After several days, the number of Tweets decreased, which indicates that people started to pay less attention to it as they got accustomed to staying at home. Natural Language Processing Analysis Text Preprocessing After the EDA, we did text preprocessing to improve the performance of the NLP models, including: Making everything lower-case. Text decontraction. For instance, replace “won’t” with “will not”, replace “’ve” with “ have”, and replace “’d” with “ would”. Removing strange symbols, URLs, emojis, and #hashtags. Word lemmatization using NLTK’s stem module. Lemmatization is the process of grouping together the different inflected forms of a word so they can be analyzed as a single item. For instance, after lemmatization, “cries” would become “cry”. A comparison of a Tweet before and after text preprocessing is as follows: Before A sunny day haiku #quarantinepoetry for an old friend down in Texas summerdaenen THANK YOU!!! VENMO @randsomnotes for a poem of your very own! #quarantinelife #stayhome #summer @New York, New York https://www.instagram.com/p/B-29p1rp1la/?igshid=12yeaszfhp7o2 After A sunny day haiku for an old friend down in Texas summerdaenen THANK YOU!!! VENMO @randsomnotes for a poem of your very own! @New York, New York Sentiment Analysis We used the package “TextBlob” to assign each Tweet a sentiment score called “polarity” to measure how positive or how negative the Tweet is. Afterward, we classified sentiment into positive and negative with a threshold of 0. Tweets with a polarity score larger than 0 are positive ones, those with a polarity below 0 are negative ones, and those with a polarity equal to 0 are neutral ones. The proportion of Tweets with different emotions is as follows: Based on the graph, half of the Tweets are positive and only a small part of the Tweets are negative. It means that not a lot of people are doing badly while staying at home because of COVID-19. We also drew a line graph to see how sentiment changed as time went by, and the chart shows that people started to feel a little better after getting used to the situation. Average Sentiment of Tweets across Time Word Cloud To understand what the most common words are in negative Tweets and positive Tweets, we created two word clouds, one for each of these two emotion categories. Left: Positive Word Cloud / Right: Negative Word Cloud After removing the words common to both of these word clouds, we can see that the positive Tweets often contain positive words like “safe”, “friend”, or “happy”. There are also words like “video” and “music” that imply positive activities. On the contrary, in the negative word cloud, we can infer that Tweets are mainly expressing worries about being sick, or complaining about being bored after staying at home for a long time. Plus, the word “game” surprisingly also shows up in the negative word cloud.
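As a concrete illustration of the sentiment step above, here is a minimal sketch using TextBlob; the threshold of 0 matches the write-up, while the helper name and the sample Tweets are made up for the example:

```python
from textblob import TextBlob

def label_sentiment(text: str) -> str:
    """Classify a Tweet as positive, negative, or neutral from its polarity score."""
    polarity = TextBlob(text).sentiment.polarity  # ranges from -1.0 to 1.0
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

tweets = [
    "Staying home with family, feeling safe and happy",
    "So bored and worried after weeks of quarantine",
]
for tweet in tweets:
    print(tweet, "->", label_sentiment(tweet))
```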
After the preprocessing, we used Latent Dirichlet Allocation via Mallet (LdaMallet) to extract topics from all Tweets. First, we tried out different numbers of topics and used coherence values to choose the best number of topics. After settling on 30 topics, we got the keywords for each topic and filtered out the 13 that were related to people’s activities for further analysis. Afterward, we combined the results from sentiment analysis and topic modeling to analyze which topics contribute to people’s positive emotions and which lead to negative ones. More specifically, we calculated the average polarity for each topic and visualized the result in the following Treemap. The Treemap Showing the Average Sentiment of 13 Activity-related Topics The area and color are determined by the polarity of each topic, which means the larger and the darker the area, the more positive the topic. Therefore, activities at the top-left are the most positive ones, such as watching videos, donating, and communicating through words and messages. On the other hand, activities at the bottom-right are the least positive ones, such as cooking dinner and cleaning the house. Besides the sentiment of topics, we also analyzed which activities are the most popular across different cities. Therefore, we listed the top three activities in each city based on the number of Tweets related to the topic. The Top 3 Activities across Different Cities According to the chart, we got several interesting findings: Watching videos, workouts, and delivery are popular activities in many cities. People in New York prefer staying with their friends and family, while people in Los Angeles most like watching shows and movies. Communication through words, messages, and pictures is very popular in Boston. People in Chicago especially love to say prayers. Besides topics, we wanted to know more about the specific words that influence people’s feelings, so we picked a list of high-frequency words from these topics and calculated the average polarity of all Tweets containing each word. The bar chart ranks the words according to their polarity. The words colored light red are the most positive ones, followed by the ones colored dark red and grey. The Average Polarity of All Tweets Containing a Keyword Consistent with the topic modeling result, watching videos, virtual connection, and going to the gym are related to extremely positive sentiments. Besides, activities like singing songs, receiving deliveries, drinking wine, and saying a prayer are also very positive. It is worth noting that the word “game” ranks last in the bar chart, which disagrees with our intuition that playing games helps people feel better. Recommendations Based on the analysis of topics and single words, we give the following suggestions to people who feel mental discomfort: You can spend more time on these activities: listening to and singing songs, watching movies or TV series, working out, reading, and virtually connecting with your family and friends even though you are physically distant. Besides, these activities are surprisingly very beneficial to your emotions based on our analysis:
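The topic-modeling step can be sketched roughly as below; note that this uses gensim’s built-in LdaModel and CoherenceModel rather than the Mallet wrapper the team mentions, and tokenized_tweets is assumed to hold the preprocessed, tokenized Tweets from the earlier step:

```python
from gensim import corpora
from gensim.models import CoherenceModel, LdaModel

# tokenized_tweets: list of token lists produced by the preprocessing step above.
tokenized_tweets = [
    ["stay", "home", "family", "safe", "happy"],
    ["watch", "video", "music", "workout", "home"],
    ["bored", "quarantine", "miss", "friend", "home"],
    ["donate", "pray", "help", "community", "stay"],
]

dictionary = corpora.Dictionary(tokenized_tweets)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_tweets]

# Try a few topic counts and keep the one with the best coherence score.
best_k, best_score = None, float("-inf")
for k in (2, 3):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=42)
    score = CoherenceModel(model=lda, texts=tokenized_tweets,
                           dictionary=dictionary, coherence="c_v").get_coherence()
    if score > best_score:
        best_k, best_score = k, score

print(best_k, best_score)
```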
https://towardsdatascience.com/how-to-protect-your-mental-health-during-the-covid-19-3840e4015435
['Jenn Huang']
2020-06-14 02:51:37.160000+00:00
['Machine Learning', 'Data Science', 'NLP', 'Coronavirus', 'Covid 19']
Is My Husband Gay?
LGBTQIA Is My Husband Gay? Women search for “Is my husband gay” more than for “Is my husband having an affair?” Photo by Anthony Tran on Unsplash Sexual infidelity is often considered the ultimate betrayal; it disrupts ongoing, meaningful relationships. When a heterosexual couple experiences infidelity and the offense is committed with someone of the same sex, it turns worlds upside down. All relationships have rules. We expect that our partners will keep our interests in mind even when potential rewards tempt them to break the rules. Infidelity occurs in the context of both heterosexual and same-sex relationships, although expectations may be different. In either case, when expectations are violated, the wrongdoer will have to account for his or her behavior. Things suddenly shifted inside my head, and I went from thinking I was straight to knowing I am gay. As I wrote in Finally Out: Letting Go of Living Straight, I know something about breaking rules. I was married with two children when I unexpectedly fell in love with a man. There was no other way to explain what I was feeling. Until shortly before I came out to my wife, she had no idea about my conflicts over sexual orientation. “Kevin” is a man in his mid-fifties, married, with two children, one of whom is handicapped. His wife suspected Kevin’s interest in men, and she began to search for clues of his deception. She found his online user name and password for a gay chat room. She then began to send him emails as if she were a man interested in a “hook up.” Not knowing the messages were actually from his wife, Kevin arranged to meet “him” for coffee, blowing up his secret life.
https://lorenaolsonmd.medium.com/is-my-husband-gay-65a2abc817c2
['Loren A Olson Md']
2020-11-07 12:58:43.832000+00:00
['Love And Sex', 'LGBTQ', 'Love', 'Psychology', 'Women']
6 Ways That COVID-19 Has Affected Me As A Transgender Woman
6 Ways That COVID-19 Has Affected Me As A Transgender Woman The playing field actually seems a little bit more level. Photo by Edwin Hooper on Unsplash So how is the pandemic treating you? Has this coronavirus forced you and those in your life to change how you go about your daily lives? I have read and heard a lot about how lots of people are modifying their habits and routines to adjust for the additional stress of the pandemic, and the extra time needed to properly protect themselves, while interacting in a higher than normal risk environment. Honestly, this is scary for everyone. We’ve all likely witnessed other pandemics in our lifetimes like MERS or SARS, or even caught the annual flu and maybe took a long time to recover. But we do not know everything about the cause of the COVID-19 pandemic, which can lead to fear. Additionally, not everyone reacts to widely issued health guidelines in the same way. We see groups of people continuing their lives as if the pandemic does not exist. They party and congregate with no apparent care for the recommended distancing and protection measures. On the other hand, we see people stocking up on essentials and squirrelling themselves away in the protection of their homes, trying to avoid any possible contact with anyone who may be carrying the virus. Dealing with all of this is definitely not easy for most people. Mainly because of all the unknowns and all the changes. For many, what makes it even harder is seeing those who aren’t even trying to be serious about the potential risks of a global pandemic. For those of us who happen to be transgender, these times can present even more challenges… or, these times can potentially make the playing field seem just a little bit more level. “The truth is, as transgender people, this is what we have been forced into being prepared for our whole lives.” Honestly, it’s not all that bad of a situation… well, not really. The whole virus thing — with its health impacts, risk of death — none of that is good for anyone. But the day-to-day impacts on how we navigate things are not that bad. The truth is, as transgender people, this is what we have been forced into being prepared for our whole lives. In some ways, you could say we’ve been trained for this. Many of the aspects of daily life during this pandemic which cause disruption for the greater populations are, in fact, regular annoyances for transgender people. Annoyances we’ve learned to either live with or cope with. I’ll explain so you don’t think I’ve gone insane with all my squirrel friends… 1. Masks and ambiguity There has been a lot of discussion and pushback regarding the imposition of mandatory face masks. Some people have fears of masks, some have health issues which are exacerbated by wearing a mask. Others simply do not want to be told what to do. Face coverings place a barrier between people, making it difficult to read the other person’s expressions. There’s also concern over whether it’s possible to ascertain another person’s intentions or trustworthiness if their face is not fully visible. In many ways, transgender people have been used to these issues their whole lives. We have learned to neither rely on, nor trust the outward appearance of others. Social norms and discrimination have forced us to hide our own true selves, while maintaining an outward public expression of happiness living as our sex assigned at birth. We know that what one sees on the surface is not always the internal reality. 
Similarly, we experience others acting as if they are supportive and accepting, yet beneath the surface, despise our very existence. So in that way, face masks are the perfect equalizer; they place each and every person at the same level of disadvantage in trying to determine one another’s true intentions. We should, in fact, rely on action to assess trustworthiness; repetitive and consistent action demonstrates behavior far better than any facial expression, and trans people seem to know this lesson innately. 2. Being avoided There’s also something scary about putting on a face mask, as it seems there’s a connotation of medical care, or something toxic that we need protection from. Putting one on yourself might stir up feelings of fear or harm, and seeing others with masks on may reinforce how fearful you are (or “should” be). Before the pandemic, if I saw someone on the street wearing a mask, I would give them a wide berth. I would subconsciously think there was a reason I needed to stay away from that person. But nowadays, just about everyone gives everyone else a wide berth automatically. The thing is, as a transgender person I have been given these “wide berth” distances over and over again. It happened throughout my transition, and it happens throughout my daily life as a trans woman. Of course, this is only my interpretation of other people’s actions, but I have felt that these distances given to me are a direct result of others being unsure of who, or what I am… And I feel that, somehow, the further away I am from that person, the less unease they will feel in determining how to interact with me. Time will tell as to how this plays out post-pandemic. Will societies continue to maintain these distances as a way to protect against the unknowns carried by those physically intersecting our existence? Or will we return to the crowded pathways and establishments where “personal space” has no place? Will our future bring trans people like me into the norm, or will it return to a place where we continue to feel like outsiders — primarily because others push us away or avoid us? 3. Vocal issues I don’t know about you, but the pandemic has put some pressure on my throat, not from any kind of sickness, but simply from communicating with others. I would hazard a guess that the distances we’re keeping from each other have a contributing factor; we can’t simply whisper in a friend’s ear anymore. Even during this pandemic I have found myself a regular customer at my local coffee shop. For convenience and safety, I typically use the drive-through. But so do a lot of others seeking the same. I do end up ordering ahead and then go inside to pick up my cup of delicious caffeine-based heaven, but that causes its own problems. The staff end up asking customer names so as to see if their drinks are ready, but these questions (and their inevitable muffled responses) drum up a mix of potential outcomes for customers like me… that is, those of us who rely on some level of stealth to survive a life of relative normalcy. As I stand on one of the institutional floor dots demarking proper social distancing, staff members scan the waiting clientele and query those waiting in order to play a game of retail “match-em.” As they ask my name, I meekly respond, “Kat… erm, Kathryn,” to which the guaranteed, “sorry, I didn't get that” requires me to dig deeper and say, “KATHRYN.” Digging deeper is counter to the quiet, meek and hidden person I try to be, mirroring what I view as the norm of womanhood and femininity. 
However, I am unable to achieve this norm. Because simply stating my name for the barista with a mask across my face draws out a garbled — but distinctly masculine — voice, betraying my identity and my expression. Who knew that simply picking up my coffee would crush the hope and optimism I work with each day? 4. Fear of healthcare It’s interesting to consider that one of the greatest tools we have in overcoming this pandemic is our healthcare system. The better prepared and equipped they are, the better the chance we have of surviving any infection. The better prepared and equipped they are, the better the chance we have of gaining cures for this sickness. Our healthcare systems and their workers have truly been the unsung heroes of this pandemic. But what of those needing healthcare for issues other than the Coronavirus? I know a number people who avoided going to the hospital for fear of putting themselves at risk of getting infected. It seems it’s becoming a regular occurrence now to just “work through” something which previously would’ve necessitated an ER visit. For transgender people, this is common even when not in the middle of a pandemic. Many of us are fearful of healthcare. And many of us unfortunately have too many experiences reminding us that there can actually be more harm done by seeking help. Even though the type of harm I’m referring to isn’t like that of a disease or virus, it is no less potentially lethal. Being transgender within a healthcare system can result in a lack of treatment (or inappropriate treatments), irrelevant linkages to gender transitioning, harassment, discrimination, and so on. Even the basic fundamental right of having one’s gender identity respected is not guaranteed. Each of these outcomes are just as likely to lead to harm, or even loss of life, as becoming infected with COVID-19 would. It is no wonder that people in general avoid hospitals and doctors during a pandemic, as it’s easier to live with some pain and discomfort rather than face the potential of even greater harm. 5. Practicing patience It sounds like a broken record by now, but how many of you have heard someone ask, “when is this pandemic going to end? I want things to go back to normal.” I can understand why people ask this question; many fear the unknown, finding it difficult to cope or manage their lives without firm answers or any view of the future. While I can’t speak for all trans people, many of us do suffer from some mental health condition, which is only further exacerbated by also having gender dysphoria. Personally, I’ve had issues with anxiety — and a lack of information making it difficult to cope with unknowns only feeds the cycle. When it comes to learning patience, and an ability to simply accept that there will always be life-altering events, or decisions to make based on those events (such as pandemic response), transgender people have already been dealing with this reality for a long, long time. For those of you who don’t know, transition is typically the best way to address gender dysphoria, but transitioning is a long process (some would argue life-long), and it is subject to extensive gatekeeping by outsiders with their own agendas and biases. Most people who transition get frustrated at the beginning because of the slow nature of approvals, and the lack of understanding by decision makers. 
As time passes though, many of us accept that the only way to get through our transition is to just be patient, do what’s required, and acknowledge that we are not in control. This sounds a lot like what humanity must do in the middle of a global pandemic. Many people question why we must follow government directions, why rights are being restricted to improve safety, and even, who has the right to be making decisions. Perhaps it time for everyone to just sit down and be patient, like us trans people have been doing forever. 6. Isolation and guilt Ironically, isolation is both a required tool for combatting the pandemic as well as one of its worst side effects. While there are some people who enjoy personal time and solitude, many struggle with a long term lack of social interaction. The pandemic has robbed many people of these basic fundamental needs. I, for one, did not have a big social life beforehand, as most of my social needs were met through work-related activities. But a couple of months before the pandemic happened, I retired. With the resulting disappearance of face-to-face contact and social spontaneity, I felt a great sense of loneliness. And I very much internalized that feeling, which drove a sense of responsibility and guilt… that it was my fault that I was alone. Some of this comes simply from being transgender. Many put off coming out and transitioning for fear of losing family, friends, work, etc. The act of living authentically can, unfortunately, become the catalyst for a life lived alone and sad. When we’re at our lowest points, it’s often hard for trans people to not link their transition with their loneliness, and therefore, feel a sense of guilt… even though we’re decidedly not continuing to live a lie. While I don’t want anyone to feel lonely, the fact that the pandemic has forced so many people into isolation — and subsequently, longing for a chance to interact with friends and family — has actually helped me a lot. I didn’t secretly start seeing friends to fulfill my needs, but rather, I felt some consolation that others were experiencing what I have been feeling for a while. What has been helpful is that it became easier for me to accept that this fear of loneliness is not my fault, just like the isolation caused by the pandemic is not the fault of anyone else feeling lonely. So where does this leave us? I know I feel better right now than I did before the pandemic. Which is funny, since the pandemic is not over yet. Perhaps it’s more a sense of sharing this pain with my fellow human beings. Or that this pain we’re all sensing at present is unrelated to any individual personal characteristic… like being transgender. I feel like some of what I’ve experienced as a trans woman — transitioning at this point in my life — has prepared me better for this pandemic. I believe that the impacts have been softer on me than on many others simply because I have experienced similar impacts as a trans woman. At this point I actually feel that being transgender during a pandemic has left me in a better position than most cisgender people. I even feel a bit lucky to be transgender right now, with the other 99.5% of the species getting a small sense of the struggles and hurdles that we live with constantly. Right now, I have the advantage! 
It will end though, at one point, this feeling of “superiority.” The pandemic will end, humanity will return to some sense of normalcy, and I will return to the normalcy of not being “normal.” And when it ends, like many others, I will rejoice that we have beaten this disease, even though that means I’ll lose this temporary advantage. But until then, I’m living in the moment!
https://medium.com/gender-from-the-trenches/6-ways-that-covid-19-has-affected-me-as-a-trans-woman-ad00167e0944
['Kathryn Foss']
2020-11-06 19:38:29.721000+00:00
['Mental Health', 'LGBTQ', 'Culture', 'Transgender', 'Covid 19']
Judging Poetry
Judging poetry means choosing which baby contest baby gets first place. It’s like evicting a soul who’s still settling in. “Clarity and coherence” are not easy standards, staring into private darkness exorcising rot miming genius or wordifying to sound like you’re the origin of human experience and all commensurate metaphors. “Evocative imagery” calls on poetic licence for the writer. Funny: in contests, readers still steer. What right do I have to judge if your details and diction evoke enough? “Literary devices and figurative language” leave me longing for a poem so subtly armatured that I can’t help but hear ghosts, feel angels, and smell bones as if I’ve never sensed before. And please, please, young poet, don’t obscure what is already hidden in the labyrinth. Go ahead: expose; make fresh revelation for initiated overthought stress-addled hopefuls like me. Background by Annie Spratt. — Heather Burton ©2019 Addendum: I’m a teacher of language arts. My students are 6 to 16. Included in my job mandate is this line: Students will listen, speak, read, write, view and represent to explore thoughts, ideas, feelings and experiences. Judging poetry and other forms of creative expression, which is implicit in teaching the arts of language in a formal school setting and explicit in the evaluation of my teacherhood, is problematic at best and painful at its worst. The on-switch for Judging Poetry was being invited to adjudicate a poetry contest for first-year university students. I was eager to accept — I could capture a sense of what my younger writers might create someday. On my initial pass over the contest entries, I was dismayed. The mediocrity of the diction and the mundanity of the themes/topics/impressions left me wondering what to say. (Boyfriends still engender that much angst?) Rightly, and as repeatedly happens with my own students’ work, I shook myself back to center: creative writing is an offering of self, and self is inherently sacred. This is not to say sloppy verbiage gets praise or that I’ll find something personally meaningful in a text. This is also not to say that flattery or good job stickers are the order of the day. It is to say this: Laying down words — poetic, prosaic or otherwise — for the purpose of expressing self is an act worthy of respect. If done with even a speck of sincerity (because we all remember the de/motivation of marks) it is also an act of courage. From that footing, I could detect not just promise in the poems I judged, but craft on its way. Sure, it took creative composing on my part to respond to each entry honestly. I think it matters to writers to know they are read and not to be lied to. I hope to “meet” those young poets someday after they have taken my (and others’) questions and suggestions, juggled them a bit to test their utility and practised up their language arts. The general bits of feedback I gave? Keep steeping in artistic words. Tell less, show more. Make poetic choices (reading material, people-watching and other acts of high absorbability, listening intently, music, stillness, writing as therapy, relentless learning, meditation/prayer, etc.) Paint with words. Try thinking cinematically. Read your work aloud. Be honest. Hear your voice. Honor and develop it. Above all, keep writing. Thanks for reading, Heather
https://medium.com/publishous/judging-poetry-64013215e3ee
['Heather Burton']
2019-12-04 17:43:27.358000+00:00
['Poetry', 'Education', 'University', 'Writer', 'Writing']
The Macabre Wealth
The Macabre Wealth In 630, famine fell upon Arabia. By that time the whole Arabian peninsula had been conquered by the Muslim armies and was living through the most prosperous period it had ever seen. The supposedly intimidating conquerors kept to themselves. They relied on what they brought with them and were kind, upright and just. Now time would test them again. The riches of the land couldn’t feed its denizens through the unruly famine. Wells dried up, food became a dream and cattle began to die. Aid was sought from wherever one could muster it. Appeals were made and promulgated to the four corners of the Islamic empire to solace their souls. It was in these times that the man whose shoulders carried the brunt of this burden, the Caliph of the Muslims, Omar bin Khattab RA, passed by a nomad’s tent. Therein sat an elderly woman who offered him hospitality, as was the custom of the good bygone days when people had some sense of fellowship. She carried on with her womanly ramblings about everything until she cursed the Caliph Omar bin Khattab RA for his irresponsibility. Apparently she thought it was his fault that he could not feed the poor of his jurisdiction. Despite his own ascetic living, this magnificent ruler in human history broke into tears. “How is he to know that someone living in such ruins needs help?”, he asked solemnly. The woman looked at him, not knowing who he was, and retorted, “What kind of a Caliph is he if he doesn’t know how his subjects live under his reign?” He shuddered and never again ate a full meal until everyone in his realm had one. Such were the people who brought us the religion that we take for granted. A panacea that has been reduced to fit a column in a census form. Pakistan is the only nuclear power that was founded with the vision of having a homeland for the Muslims. Just like the battle of Badr, it divided brothers, uncles, sons, and daughters. The migration from darul kufr was made to a destitute land where people had to fight penury and woe to make a living for themselves. Guides turned out to be con men, and this time it didn’t take white foreigners to undertake the carnage. The English had lived here long enough to breed an illegitimate dynasty of their own with a darker skin tone. From private torture cells to judicial conniving and oligarchy to divide and rule, they plundered and looted their own in cold blood. As the literacy rate took a nosedive to the bottom, their own offspring went back to the countries where their real fathers came from. Money dripping with peasant blood bought them chartered planes, the finest meals and luxurious yachts. Haute couture and dining manners were taught lest they bring with them the incivility of the human waste they had set upon eliminating hundreds of years ago through colonization. As they practiced their Machiavellian plots, the people became so desensitized and demoralized that they started accepting it as a pre-fated situation. Subtle words with delicate nuances were invented for the pleasure of these corrupt wolves. Sharjeel Memon, one from the same lineage, recently returned home after two years of evading the law, along with arguably the most corrupt man on earth, Asif Ali Zardari, ex-president of Pakistan. He is allegedly guilty of defrauding the public exchequer of Rs 3 billion. The man gratuitously shared his pillage with the voracious judiciary, which set him free. People waiting to earn brownie points came out on the streets and coronated him with a gold crown, calling it the victory of veracity.
While people are dying under his nose in the Thar desert from drought, waiting for a death they wish would come sooner than the inexorable agony they live through each day, he basks in the radiance of his golden crown. Islam came to this region with Sufis and merchants and erudite savants who literally took the shirts off their backs to feed the poor and the heart-broken. It dignified the locals by talking them out of worshipping stones and slaughtering others to please innumerable gods. What a tragedy it is to see their kin ransack whatever they gave to this land. Haven’t they read or believed what Allah says in the Quran?
https://medium.com/minhaajmusings/the-macabre-aid-b5b04b444bd0
['Minhaaj Rehman']
2017-04-04 19:22:42.342000+00:00
['Pakistan', 'Islam', 'Quran', 'Corruption', 'Politics']
Viewing the Earth’s Surface With the New and Improved Google Earth Timelapse
Viewing the Earth’s Surface With the New and Improved Google Earth Timelapse Two design leads unpack their approach to creating new map imagery and implementing Material Design best practices By Samadrita Das, User Experience Designer, and Tom Gebauer, Design Lead Editor’s note: In 2013, we launched Google Earth Timelapse, our most comprehensive picture of the Earth’s changing surface. In 2016, we added a few new years. Today we released a bunch of updates, including design upgrades and mobile support. Below, our design leads share how they approached redesigning Earth Timelapse. Chuquicamata Mine, Chile Google Earth Timelapse is a great example of how Google can take a massive amount of data, democratize it, and enlighten us — in this case about the planet we live on. There’s nothing like a living, moving image to show us how the world has changed over decades, as the zoomable video in Timelapse allows us to do for places like the Columbia Glacier or the Tibetan Plateau. Today we’ve improved Timelapse so that the site can tell richer stories about the planet, and more people can share these stories. Our mantra in getting this project done has been “Earth Day or bust” — and thanks to some scrappy workflow improvisations, we made it! Why we were up for the challenge This has been a 20% project for Tom and Sam. Tom is a design lead for Google’s Display & Video 360 project; Sam is a user experience designer on the Geo Enterprise team, primarily working on Google Maps Platform. For Tom, working on Timelapse was a way to do what Google does best: using data to create an amazing and focused experience. (Plus, when people ask Tom what he does at Google, he can say semi-seriously, “I work on time travel.”) What appealed most to Sam was the chance to contribute to a platform that helps people visualize how the Earth has changed, in a very digestible format. She was also up for the tight-timeline challenge: It exemplified how teams in an organization as big as Google can be agile if the need arises. With a scrappy team assembled, we focused on the problems to solve:
https://medium.com/google-design/redesigning-google-earth-timelapse-135a963cc35
['Google Earth']
2019-11-21 19:28:30.456000+00:00
['Redesign', 'Design', 'Google Earth Timelapse', 'Case Study', 'Earth Engine']
Top Automotive Industries Using Mobile apps
Top Automotive Industries Using Mobile apps The top mobile apps that leading industries are using by default, and a look at the newest automobile industry apps built by top mobile app development solutions. In the current generation of digital technologies, the automobile industry has achieved tremendous growth in our daily life. Mobile applications play a key role in leading all formats of the digital world that have influenced the automotive sector and the Internet of Things, with GPS indications for locations. The following content will help you understand how mobile is related to the automobile industry. Whoever said good things would come in small packages never talked about the auto industry. Do you know your car companies? Today a glimpse of the 10 largest global automakers reveals unfamiliar combinations: automakers connecting with each other to share technology and costs, with more factories producing more vehicles. The automotive industry needs huge economies of scale to cope with the research and development needed to create new models and stay competitive. After record sales in recent years, manufacturers are trying to adjust to slower sales and manage the transition to electric and automated vehicles. Here are the top 10 players based on their global sales volumes in 2019, as well as how they will cope with the current financial climate in 2020. Pinning down sales figures is tricky — different manufacturers report numbers differently — so the totals shown may be automobiles and light trucks, but not heavy commercial vehicles. Volkswagen Group — 10.8 million Toyota — 10.5 million Renault–Nissan–Mitsubishi Alliance — 10.3 million Hyundai Motor Group — 7.5 million General Motors — 8.7 million Ford — 5.7 million Honda Motor Group — 5.2 million Fiat Chrysler Automobiles — 4.8 million Groupe PSA — 4.1 million Suzuki — 3.2 million Top Mobile Apps For Drivers And Car Lovers As a driver or car owner, you can move from one place to another without letting traffic spring surprises on you, and this brings us to mobile apps for drivers. Some mobile apps provide information on traffic and alternate routes, while others act as guardian angels to help you find your way through unfamiliar terrain. In this article, you will see 10 mobile apps that drivers, vehicle owners, and/or car lovers should have on their phones and mobile devices. Here are the top 10 apps for drivers and car lovers: 1. Google Assistant Google Assistant is available on Android Auto and Google Maps, and you can start driving mode on any Android phone with the Assistant by saying “Hey Google, let’s drive”. This prompt brings up what the company calls a “thoughtfully designed dashboard” that contains driving-related activities. This app can surface a shortcut to reach your destination or play media where you left off. 2. Android Auto Android Auto takes all the features you like about your Android-powered smartphone and puts them directly on your car dashboard by replacing the local infotainment system. The information is displayed in a familiar, easy-to-use interface with clear menus and large icons. The main feature of this app is its Google Maps-based navigation system; it provides step-by-step directions and finds alternative routes if any traffic is detected. 3. Waze By using Waze, you become part of a community that shares real-time information about traffic conditions and road construction.
Using this app allows you to share or receive traffic information such as accidents, police traps, blocked roads, weather conditions, and more. Waze can mine this data submitted by its users and translate it into information that provides other road users with the most convenient way to their destination 24 hours a day. It is an app created for drivers around the world and greatly enhances the driving experience. 4. DriveSafe.ly DriveSafe.ly is a mobile application that reads text (SMS) messages and emails aloud in real time and responds automatically without drivers touching the mobile phone. The app was created to eliminate texting while driving. It comes with amazing features like hands-free, one-touch activation, Bluetooth and radio transmitter compatibility, and an optional autoresponder. This eliminates distractions while driving and helps keep your hands on the wheel and your eyes on the road. 5. Find my car Tired of looking for where your car is parked in the parking lot? Then download the ‘Find My Car’ app. It is a very resourceful application that does not require maps or a network connection. It uses your phone’s GPS capabilities to navigate back to your car or any previously visited location. 6. Drive mode: Safe driving This app has a clean UI with bold, colorful buttons for use while driving. Its voice assistant is functional and will guide you across the dashboard if you do not want to take your eyes off the road. The app also has a great transparency mode that allows other services to work without interruption when you switch to navigation or music controls. 7. Car DashDroid Car DashDroid is a car home dock replacement that facilitates driving while providing the best infotainment in the Play Store. It connects and manages apps like WhatsApp, Telegram, and Facebook Messenger so you can listen and reply to messages without touching the device. The app is great for Uber or ride-sharing drivers who want to focus while driving. It comes with features like Google voice command support, a speedometer, and intuitive music controls for many players. 8. HERE WeGo Maps This app is one of the strongest competitors to Google Maps in the navigation app space. It has a simple but sleek interface with mapping options around the world. It shows you traffic information and transportation maps, and you can customize it by saving places for quick directions. It is free to use and includes a Map Creator app that allows you to create maps. 9. GPS speedometer and odometer This GPS speedometer and odometer app measures car and bike speeds accurately. It is one of the best GPS speedometer apps for measuring vehicle speed. Unlike many other speedometer apps, it works in offline mode and does not take more than 20 seconds to connect you to GPS, while other apps can take 2–3 minutes to do so. Also read: Top 20 GPS Tracking Apps For 2020 10. Google Maps This app helps you navigate your world faster and easier. It has mapped over 220 countries and territories and has hundreds of millions of businesses and locations on the map. The app also gives you real-time GPS navigation, traffic, and transport information and allows you to explore the neighborhood by knowing where to eat, drink, and go — no matter where you are in the world.
Top 5 Key Features For Successful Mobile App Listing Up The Top 10 Android App Development Services for the Automotive Industry With the growing dependence on smartphones and support apps for every task, it is imperative that companies seek expert help from top mobile app development companies. Vehicle owners also need support apps, not only to keep them posted on current events but also as a haven to search for, buy, or sell cars at affordable prices. This blog brings together 10 Android app development services designed specifically for the automotive industry. They have unlimited features that will benefit car lovers as well as car owners. 1. MYFUELLOG2-CAR MAINTENANCE With a 4.6 rating, the Android app MyFuelLog 2 is a must-have for any vehicle owner’s smartphone. Refueling and car cost tracking is conveniently done, and data entry is instantaneous. Both the time and location of the vehicle are well tracked, and the main screen, logo, and the specified vehicle name can be customized. This app has the ability to manage backups and restore data to cloud servers, as well as other options such as Dropbox and SD card storage. It allows you to create and extract reports in PDF, Excel, or CSV format for further reference. 2. Car costs Oops!! A 4.6-star rating?? This Android app development service is worth a shot for virtually all vehicle owners. Car costs and various other expenses are calculated along with the add-on costs that come with owning a car. Monthly and annual statistics are calculated and saved in the app itself. Also, statistics can be shared across other devices by first saving the data on the SD card or syncing it with a Google account and storing it as a backup. Also read: Top 12 must-have Android apps 3. Automatic application This is a valuable Android app if you are looking for advice when buying new vehicles. This app promises to change the whole journey, making everything simple and quick. Also, smartphone users do not have to spend time running around to dealers and are given financial guidance when making purchasing decisions. Timely news and the latest updates will keep you up to date on the automobile industry. 4. Zigwheels A multi-utility application, Zigwheels compares old and new cars. Experts are available to assist buyers in making the right purchase decision and to connect with a large number of dealers. 5. Cartrade Another mobile application development service designed specifically for vehicle owners. There is a list of about a thousand used and unused cars for new buyers. It offers in-depth information about the latest and upcoming vehicles on the market and also lets you sell your old car through the app. 6. Automobile Magazine News Apps One of the most popular mobile apps designed especially for car lovers. The app regularly publishes news and updates, videos, and press releases related to the car world. Users can also share images of cars that have been tested before new cars are formally launched. 7. Vehicle logger Rated 4.2 on the Play Store, Vehicle Logger is an Android app that offers the simplest and easiest way to create vehicle logbooks for mileage, fuel, and costs. One can create specific reports and upload them directly to Google Drive, Dropbox, OneDrive, and more. Reports can be exported as PDF or CSV data. 8. Vehicle Maintenance Guidance Although this particular app has only a 3.5 rating, it teaches all car and bike owners the basics of vehicle maintenance. Vehicle management knowledge saves you a lot of money and makes driving safer.
Every part of the vehicle should be routinely inspected, making sure riders get home smoothly and safely. The app also includes guidelines and other tips for keeping vehicles in good working condition. The app is perfect for getting advice on improved fuel efficiency, maintaining vehicle value, and ensuring safe travel. 9. Driver One of the most powerful car apps available for Android users, Drivo tracks everything. This app has multiple features like repair, maintenance, costs, miles run, gas mileage, and more. This app is well suited for those who drive to work on a daily basis. The app also tracks expenses for the tax season and allows you to set reminders to follow up on maintenance tasks. There are additional options for cloud backup, data sync, data export, etc. 10. Android Auto Your search for the most effective automotive Android app will stop at Android Auto. It offers quick access to Google Maps, messaging, and music apps, as well as other utilities. Some vehicles even have the Android Auto app built in. Such a useful app is available at no cost. Android Auto is not included in all vehicles and automobiles yet, but it can be expected in more automobiles in the future. CONCLUSION Google Play offers endless options to choose from along with the ten names mentioned here. These select automotive or car apps are the perfect choices to take up a little more space on your smartphone. Such specific service requirements have forced companies to hire dedicated mobile developers to share their experience and lend a helping hand in application building. So, if you are one of those looking for the right app build solution, consult one of the top Android app development companies before you start your journey. Find more Top mobile app development companies around the world on Fugenx Technologies.
https://fugenxmobileappdevelopment.medium.com/top-automotive-industries-using-mobile-apps-8a7d1b3cac9b
['Priyanka Patil']
2020-11-09 09:15:45.109000+00:00
['Automotive', 'App Development', 'Mobile App Development', 'Automobile', 'Automotive Industry']
CARTOON: Bookstore
Cartoonist’s Note №1 I don’t go to bookstores anymore. I’m allergic to potpourri. Cartoonist’s Note №2 Like this cartoon? You can now buy the print — or get it on a coffee mug, t-shirt, etc. About the Artist Rolli is the author of four books for adults and children, including The Sea-Wave and Kabungo. His cartoons and illustrations have appeared in Reader’s Digest, The Wall Street Journal, The Saturday Evening Post and other top venues. Visit Rolli’s website, follow him on Twitter, and subscribe to his new one-man Medium magazine, Rolli. HELP! Medium’s unexpected new change in the way it compensates members of its Partner Program rewards creators of long, padded articles, and essentially demonetizes shorter content. As nearly all of my posts are poetry, cartoons, or efficient fiction/essays, I’m faring very poorly under this new system. Since the change was implemented, my Partner Program earnings have plummeted by 90%, and hundreds of others are in the same boat. If you want to help, send an email to Medium CEO Ev Williams ([email protected]) and/or to [email protected] expressing your displeasure with the changes. Slowly re-reading this post 100 times would also help enormously ;) So would buying me a coffee. If you enjoyed this cartoon, you might like these ones, too:
https://medium.com/pillowmint/cartoon-bookstore-82f0c9748ee1
['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites']
2020-01-28 04:21:37.560000+00:00
['Humor', 'Life', 'Books', 'Comics', 'Cartoon']
Scroll to an element on click in React js
Today we will learn how to scroll to a particular element by clicking on a certain element. This is beneficial if you are creating a single-page application where, when users click on a particular menu item, they are scrolled to the referenced element. So let us see how to implement that in our React project. Requirements 1. Let’s first create a blank React project. I will use yarn as my package manager. yarn create react-app scroll-to-element-react-medium 2. Now open a terminal inside your React project and install react-scroll: yarn add react-scroll 3. Go to your src folder and create a components folder. 4. Now inside your components folder create your first component with the name header.js 5. Open the header.js file in your code editor. I will create a simple menu inside that file, so paste the code below inside the header.js file. I have also added some inline styling. 6. Again go to your components directory and create another component with the name middleSection.js. Inside that component, I will add some dummy content with a unique id referring to the menu items. 7. Now it’s time to use react-scroll. Go again to your header.js file and import Link from react-scroll: import {Link} from 'react-scroll' 8. Now wrap your menu items with the Link tag and pass some props like below. Props Used * activeClass - class applied when element is reached * to - target to scroll to * spy - make Link selected when scroll is at its target’s position * smooth - animate the scrolling 9. Now import the components into the app.js file and start the application to see the changes. Below I have shared the live code and GitHub repository for reference.
https://medium.com/how-to-react/scroll-to-an-element-on-click-in-react-js-8424e478bb9
['Manish Mandal']
2020-09-17 18:09:45.339000+00:00
['Scroll', 'React', 'Reactjs']
How to make contribution in ZeroBank Dashboard
How to make contribution in ZeroBank Dashboard This post will explain the detailed guideline regarding our brand new Dashboard. See below for details. 1 — SIGN IN TO THE DASHBOARD ZeroBank Dashboard can be accessed via the “Sign In” button on the top right of ZeroBank Website at any stage of our crowdsale, or at the hyperlink: https://ico.zerobank.cash/ The sign-in interface of ZeroBank dashboard If you joined the first Bounty Program, you can use your Bounty account username and password to sign in. You may also use your email address as the username. If you have already registered for ZeroBank Whitelist or Newsletter, kindly check your email inbox and use the password ZeroBank sent you to sign in. Otherwise, sign up for a new account by clicking “Sign up” button. Once you finish registration, you will get a verification email to activate your account. 2 — JOIN KYC After signing in successfully, you will see the dashboard (illustration below). The Join Whitelist function is combined in the KYC function during the Whitelist stage. Unless you pass ZeroBank KYC, you will not be able to withdraw the ZB tokens. *Note: If you joined our Whitelist and already got the KYC Authentication Pass email, you do not need to follow this step. Here is the dashboard after you sign in To start your KYC procedure, click on “Submit KYC” button. The “KYC and Whitelist registration” form will show up (illustration below). You can register as an individual (“I’m a person” tab) or a corporation (“I’m a business” tab). All fields are obligatory. Email address: This field should already be filled out using your account registration email address. First name and Last name: Fill in your first name and last name exactly like shown in your identification document. Nationality: Choose your nationality. Residents, citizens, and registered persons of the United States of America are restricted from participating in ZeroBank ICO. Phone number: Input your current phone number. Please change the country code if needed. (This field may already be filled out if you signed up your new account on the Dashboard system.) Ethereum wallet address: Input your ERC223-compatible ETH wallet address. This wallet address is where you will receive ZeroBank tokens after the ICO. You should use your PERSONAL Ethereum wallet such as: MyEtherWallet, Metamask, Parity, Mist, imToken, Ledger (hardware wallet); DO NOT use wallets from any exchanges (e.g. Coinbase wallet) or free Ethereum wallet for Android. How much would you like to contribute? Input your contribution registration in ETH Please upload pictures of your document (both front and back): Upload one of the following documents: Driver’s License, Identity Card, Residence Permit and Passport. Selfie with Personal ID: Selfie picture should display your face clearly, your readable ID and hand written note with today date and “KYC for ZeroBank” phase. Please see our sample for your reference. The “I’m a business” form has the same process but different information and documents required. After you complete filling out the form, please double-check your data, if everything is correct, click “Submit”. The button then will be changed to “KYC Pending”. Please accept 7–10 working days for ZeroBank team to check and verify your KYC Application. ZeroBank will send the result of your KYC application via email or you can follow your KYC application status right on the dashboard. 
* KYC FAILED If ZeroBank sends you an email notifying you that you have failed the KYC procedure, please return to your Dashboard, click on the red “KYC failed” button to get back to the KYC form, then click on the “Resubmit KYC” link below the KYC button. This time, please pay more attention to the information you fill in the form. “KYC Failed” status in the main dashboard Here are some TIPS to make or break your KYC application: Your identification document must be valid (not expired). It also shouldn’t be modified in any way. The scan/photo of the document should preferably be taken in natural sunlight or another bright area. Make sure you take photos without any glare. Make sure you capture your whole ID (no corner cutting) and that the photo is in focus. * KYC PASSED Congratulations! You are authorized to make your contribution. In the dashboard, you will see the green “KYC approved” button. Our system will open for contributions on July 10th, 2018 at 09:00AM GMT+8. Mark the date! 3 — MAKE CONTRIBUTION The ZeroBank system will be open for Whitelist participants who passed the KYC procedure to make their contributions between 09:00AM July 10th and 23:59 July 11th, 2018 GMT+8. Pre-ICO and ICO participants will be able to contribute from July 12th to 30th, and from August 1st to 25th respectively. *NOTE: Please make sure your KYC Application is completed and passed before withdrawing ZB tokens. Complete filling in the form Choose the currency you’d like to contribute in by clicking on the ETH or BTC icon. You contribute: Input the value you’d like to contribute. Bonus level: Your bonus level will automatically be calculated for you, along with the total ZB tokens you will get. Promotion code: Fill in this field if you received a Bonus code after signing up for our newsletter at our previous events. Promotion codes are applicable to Pre-ICO and ICO contributions only. Copy your ERC223-compatible ETH wallet address to the field “Your Ethereum Wallet Address”. This will be the wallet address where ZeroBank will send you ZB tokens later on. Click on “Buy ZB Tokens NOW” to move to the next steps. Complete your order by filling in this form. Your order details: it will show the details of your order which you submitted in the previous step. Send contribution: after reviewing “Your order details”, make a transfer to the wallet address the system provides you in this section. Or you can scan the QR code to get the ZeroBank wallet address. Proof of Contribution: once you’re done transferring your contribution, copy your transaction Txid to the field “Txid” and click on “CONTRIBUTE”. Well done! You just completed making your contribution to ZeroBank! Now you only have to wait for ZeroBank to verify your transaction and send you an email notification. If you have any technical problems with the registration procedure, please contact our support team in the official Telegram chat at https://t.me/zerobank_cash or via email [email protected]. All support officers in Telegram have the “admin” tags! Beware of scammers and always check support officers’ usernames in their personal accounts if you intend to have personal chats with them. * ADDITIONAL NOTES FOR ZB TOKEN SALE Kindly follow these rules to keep your transactions safe: Always check the official ZeroBank website URL: https://zerobank.cash/. Always navigate directly to the site before you enter your information. Do not enter your information after clicking a link from a message or email.
Always check the link to your personal Dashboard: https://ico.zerobank.cash/. Always make sure the URL bar shows an encrypted connection (the lock icon in the URL bar must be locked and must not contain an exclamation mark). We never send invitations to contribute via email, Telegram, etc. Stay updated on our channels:
https://medium.com/zerobank-cash/how-to-use-zerobank-dashboard-5d5721aec6ad
['Zerobank - Your Local Currency']
2018-07-18 06:54:46.010000+00:00
['Tokens', 'Presale', 'Blockchain', 'Dashboard', 'Whitelist']
Joker, Woke Culture, and America
Why, despite wanting to like the film, I don’t recommend watching it. © Warner Bros I’m not the biggest Batman fan in the world, but of the DC pantheon, he’s probably one of my favorites, and so I’d been looking forward to the dark, gritty Joaquin Phoenix-powered Joker film since the trailer first dropped. That said, I skipped reading reviews beforehand and tried to go into last night’s viewing with an open mind. Sadly, I was disappointed. Don’t get me wrong, I actually think Phoenix’s acting is brilliantly captivating. He brings an empathy and terror to the character, and the transformation from Arthur Fleck into Joker was one where you can’t look away. But, I’m a viewer who cares about the themes of a film, and Joker ended up being too problematic for me to fully enjoy. If you’re someone who is blind to racism and sexism, angered by classism, and upset about the constraints of woke culture and just want to see a downtrodden white dude fight back against the system in an explosion of violence, you’ll probably love this film, and can stop reading now. Also, from here on out, spoilers abound. Can’t say I didn’t warn you. Here’s a photo to prevent those who don’t want to see the spoilers from seeing them. Photo by Daniel Lincoln on Unsplash So what’s the beef I have with Joker? I’ve got three. Problem #1. Mental Health. Joker has a few great lines about the state of mental health. It rightly calls out how highly stigmatized it is, and we watch as Arthur Fleck (Joker’s name in this film) struggles to get the medication he needs to stay stable as Gotham slashes funding to the city-funded counseling sessions he attends. As someone who has attended therapy myself, I am all too familiar with the costs of mental health care (not cheap), and the film rightly criticizes the lack of support for those who, ironically, need the extra support the most. This it makes clear. That said, we find that dark part inside all of us cheering as Arthur finds himself without the cloud of anti-psychotic medication and starts killing people who have wronged him. Both he and his mother have been institutionalized, and both have committed grave acts of violence and abuse against people, deserving and undeserving. While I get that perhaps the film is trying to tell us we need to help those who need it, it also stigmatizes mental health as this completely explosive, dangerous condition that needs to be medicated away. In reality mental health issues affect about 1 in 5 of all Americans. There’s a big difference between having a mental illness, a serious mental illness, and being a threat to oneself or others, and this film doesn’t show any of that nuance, choosing to, as most films do, bask in the glory of violence. Problem #2: The Intersection of Racism and Sexism. There have been a lot of other articles, and I found this NY Times piece to say pretty much what I want to say, so I’ll just quote them here: “The fact that the Joker is a white man is central to the film’s plot. A black man in Gotham City (really, New York) in 1981 suffering from the same mysterious mental illnesses as Fleck would be homeless and invisible. He wouldn’t be turned into a public figure who could incite an entire city to rise up against the wealthy. Black men dealing with Fleck’s conditions are often cast aside by society, ending up on the streets or in jail, as studies have shown. And though Fleck says he often feels invisible, had he been black, he truly would have been — except, of course, when he came into contact with the police.
They’d be sure to see him.” And while I could go on about the lack of diversity in Hollywood, when you have a film about a White antihero going around killing people, I’d rather not populate those side characters with Black actors. It’s increasingly notable that beyond the powerful White men who Arthur murders, he’s surrounded by Black women, all of whom are unnamed in the film, and all but one of whom are unnamed in the script. What’s worse is that while all the White men have actually made some transgression against him, the Black woman therapist he kills (or at least is implied to have killed) at the end of the film is genuinely trying to help the guy. Additionally, his neighbor (whom he imagines he is in a relationship with, though it turns out to be a delusion) is implied to be dead as well by the ambulance sirens crying out as he leaves her apartment after he confronts her and the audience realizes their relationship was all in his head. I give the film credit that it spares us yet another Black woman being killed onscreen, but what does it say by having Arthur kill people who have been abusive towards him and Black women? Problem #3: The Woke Culture Issue I’ve seen a lot of discussion about “cancel culture” and “woke culture,” and pretty much everyone who is complaining about curating expression in a way that is more sensitive to the traumas other people have experienced just rubs me the wrong way. “We can’t be funny anymore,” and “being PC is silencing people,” are just terrible excuses for not being empathetic. Professional comedians and directors complaining about how they feel stifled because they can no longer make fun of disabled people, or minorities, or anyone else who has been historically marginalized really get no sympathy from me. Frankly, I believe that if you’re not skilled enough to write something that is funny and empathetic to other people’s experiences, then you shouldn’t be paid for comedy. But this is a review of Joker, and this diversion is important mainly due to who directed the film, Todd Phillips. Phillips, known as a comedy director, switched to the dark and gritty as he felt shackled by woke culture. He told Vanity Fair, “There were articles written about why comedies don’t work anymore — I’ll tell you why, because all the fucking funny guys are like, ‘Fuck this shit, because I don’t want to offend you.’” I’d like to give Joker the benefit of the doubt that it was trying to say something profound about mental health, or be sensitive about its inclusion of women of color. However, the fact that the director seems to be unable to understand why crass comedies are no longer in favor doesn’t push me in that direction. Rather, I feel like he was trying to make sure people wouldn’t call the film racist by throwing a bunch of Black women into unnamed roles.
https://rickkitagawa.medium.com/joker-woke-culture-and-america-44d6d09c502
['Rick Kitagawa']
2019-10-12 21:20:46.223000+00:00
['Sociology', 'Mental Health', 'Film Reviews', 'Race', 'Film']
How to Write When You Don’t Have Anything to Say
Photo by Amir La Lenia on Unsplash How to Write When You Don’t Have Anything to Say It happens. Three steps to see beyond. We might have days when there is nothing on our mind to say and share. I question whether someone will want to hear what I want to say, so I don’t say anything at all. The short answer is there’s a light inside of you — inside all of us — waiting to be sparked up. My three short and sweet mindful tactics zap me back into finding something. So here’s how to write when you don’t have anything to say: Look out the window Nature is always there. Yes, it is there to stop and stare at. Other than that, it is also there to fuel your inspiration and spark your soul. When you allow nature to tap into your creativity, you might find yourself writing a poem about the trees or the leaves blowing in the trees. Whatever it may be, nature can take your mind from having nothing to say to having a lot to contemplate and curate. Research shows nature not only makes you more creative, but kinder and happier as well. So go outside and: Take a walk Walking outside for me is like taking a shower. It revitalises my senses and brings back some food for thought. Simply a ten-minute walk will boost your mind and bring you back to now. When you don’t have anything to say, nature will lead the way. A refreshing breath of air will allow you to come back to your natural state of being there. Then writing won’t be so hard to hone in on! Call someone up Hearing someone’s voice is like a magic spell. I often forget how much a second or two can make a difference when speaking to family and friends from across the world. Sometimes I will close my eyes and just listen to them — something I will never take for granted. It will bring you back to the present moment and remind you that everything is OK. Starting a conversation with someone will get your juices flowing to write when you don’t have anything to say. Remember: There’s always something to say.
https://medium.com/inspired-writer/how-to-write-when-you-dont-have-anything-to-say-b9856fe0ea4b
[]
2020-12-28 08:29:11.250000+00:00
['Advice', 'Writing Tips', 'Write', 'Present', 'Writing']
Triangular Numbers
Solving the first example is simple: the target we want to reach is 6. All we need to do is take steps 1, 2, 3, which sum to 6. But what if the target we want to reach is 11? Can we reach it in 4 steps? No way: even if every step is toward the target, the farthest we can get is 1 + 2 + 3 + 4 = 10, so we need more steps. How about 5 steps? Well, in this case we can definitely walk past the target… So we need a way to walk just far enough to land exactly on the target. Say, what if we take the first step backward? That’s still not enough; how about we take the second step backward instead? 1 - 2 + 3 + 4 + 5 = 11. Ah, just right! So, how do we generalize this process? One thing to notice is this: whenever you decide to take a step backward, you subtract 2 times the size of that step from the triangular number. Therefore, the target can only be reached in a certain number of steps if the difference between the triangular number corresponding to that number of steps and the target is an even number! So this gives a very good criterion for testing whether a number of steps (or a triangular number) can reach the target: the sum of the steps (the triangular number) has to be at least as large as the target, and the difference between that sum and the target has to be even. Now we can implement our algorithm:

def min_steps(target):
    steps = 0
    total = 0
    # Keep adding steps until the running triangular number reaches the target
    # and the leftover gap is even (so it can be closed by flipping some steps backward).
    while total < target or (total - target) % 2 == 1:
        steps += 1
        total += steps
    return steps

min_steps(11)  # 5

Works!
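As a quick sanity check of the parity argument (this sketch is not from the original post; the helper name min_steps_bruteforce and the max_steps cap are illustrative choices), we can enumerate every forward/backward choice for the first n steps and confirm that the smallest n that lands on the target agrees with min_steps above for small targets:

from itertools import product

def min_steps_bruteforce(target, max_steps=10):
    # Try 1, 2, 3, ... steps; for each count, enumerate every way of taking
    # each step forward (+1) or backward (-1) and check whether any
    # combination lands exactly on the target.
    for n in range(1, max_steps + 1):
        for signs in product((1, -1), repeat=n):
            if sum(s * step for s, step in zip(signs, range(1, n + 1))) == target:
                return n
    return None  # not reachable within max_steps

# Agrees with the parity-based min_steps for every target from 1 to 29.
assert all(min_steps_bruteforce(t) == min_steps(t) for t in range(1, 30))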
https://medium.com/swlh/triangular-numbers-7b830c195989
['Shuo Wang']
2020-11-18 05:48:39.492000+00:00
['Python', 'Leetcode', 'Numbers', 'Math', 'Programming']
The Power of Nostalgia
The Power of Nostalgia Combat Current Lows With Past Highs Photo by Dale de Vera on Unsplash You can’t live your life in the past — be in the present moment. But what if the present moment is not so pleasant? Granted, it is not healthy to live entirely in the past, but how about just vacationing there for a little while? Taking a trip down memory lane can be psychologically soothing, and given that we are currently living in some pretty intense times, it is no surprise that we turn to the past for comfort. Enter nostalgia. “Nostalgia is a sentimental longing or wistful affection for the past, typically for a period or place with happy personal associations.” -Oxford Languages You may have heard it from your parents and especially your grandparents: “Well, back in my day…” They talk about the ‘good ole days’ when things were simpler. When there were no computers, no cell phones, no social media. Of course, every generation has its hardships, but flash forward to today. The world around us is currently in a state of chaos with the global pandemic, economic crisis, social injustices, natural disasters, divisiveness and an overall sense of helplessness. It has taken a toll on the mental health of many. I am not a psychologist and have no authority to be giving any mental health advice, but I do listen to others who are. “It’s comforting to have a nostalgic feeling for the past that reminds us that although we don’t know what the future is going to bring, what we do know is that we know who we have been and who we really are.” -Krystine Batcho, Ph.D., Professor of Psychology A short visit to those ‘better times’ can actually do wonders for your present mental state. I have felt the positive effect nostalgia can have, especially recently. The pandemic had us shut our doors to the outside world. Many of us turned to our televisions to pass the time and make the isolation more bearable. With streaming services giving us full range to binge the shows and movies of the past, never had nostalgia been so easy to access. I wrote this article after rewatching Game of Thrones. I convinced my sister, who had not seen it, that she was missing out on one of life’s essential experiences. After some intense resistance and several eye rolls, she finally agreed. Something extraordinary and unexpected happened to me in the process. Each time we pressed play and the ‘dun nun na na nun nun’ blared through the screen and Westeros and The Wall came into view, I was transported. Not to the seven kingdoms where dragons flew the skies and white walkers roamed the mountains, but to seven years ago in my apartment in Boston. Even though the show itself is filled with murder, mayhem and betrayal, it was the positive memories of watching it with friends and loved ones the first time that filled my head. It transported me to a simpler time, a happy time. I had a similar experience when I recently introduced my nephew to Saved by the Bell. Watching Zack plan his schemes to get the girl and get out of the test instantly made me smile. I smiled reminiscing about when I was a kid and my friends and I were deciding whether we were Team Zack or Team Slater. It was before drama-filled shows needed to address serious issues like school shootings and teen pregnancy. When Lisa spraining her ankle and not being able to compete in the dance competition was the epitome of a huge problem. You see it everywhere these days — the throwbacks, the reunions, the remakes. We are grasping for reminders of those times.
You could be a skeptic and say that we are just running out of fresh new ideas and simply rebranding old ones, but I would argue that we are seeking out something more. I believe we are seeking out a sense of safety, of security, of comfort. There is a reason that live-action versions of Disney Classics are coming to the screen, that Stranger Things was set in the ’80s, that Fuller House has been so successful, and that TBT became such a wildly popular hashtag. We are all craving a bit of nostalgia. Obviously it is important to live in the present and make the most of our current situation. But every once in a while, I also believe it is important to give our mind and our soul a little break. There is a reason that classics are classics. It is not because they were the most epic movie, song, book or other contribution of that generation; it is because they instilled in us a frozen moment in time. They trigger memories and experiences that bring us comfort. “Nostalgia has been shown to counteract loneliness, boredom and anxiety. It makes people more generous to strangers and more tolerant of outsiders. Couples feel closer and look happier when they’re sharing nostalgic memories. On cold days, or in cold rooms, people use nostalgia to literally feel warmer.” -John Tierney, New York Times So in this crazy time, when we are all searching for answers and asking ourselves what the future will hold, take a step back, a big step. Step back into the past and you just might find exactly what you were looking for.
https://medium.com/the-innovation/the-power-of-nostalgia-cb08942aec22
['Cassie Regan']
2020-09-30 18:12:15.004000+00:00
['Mindfulness', 'Life', 'Culture', 'Self', 'Psychology']