Columns: title (string, 1-200 chars), text (string, 10-100k chars), url (string, 32-885 chars), authors (string, 2-392 chars), timestamp (string, 19-32 chars), tags (string, 6-263 chars)
Understanding Spark As If You Had Designed It
Understanding Spark As If You Had Designed It

From a simple function to a resilient and distributed framework.

Why care about Spark?

Among the frameworks currently available in the data space, only a few have achieved the status that Spark has in terms of adoption and delivery. The framework has emerged as one of the clear winners, especially on the Data Engineering side of the landscape. If you are reading this article, you already understand the reasons behind the previous paragraph, so we will jump directly into the main subject.

Why care about Spark internals?

Someone might argue that we do not have to understand how an engine works in order to drive a car, which is true. However, one might also argue that understanding the engine makes you a better driver, as you would be able to understand the capabilities, limitations and eventual issues of the whole vehicle. Following the same rationale, you don't have to understand Spark internals to make use of its APIs. However, if you do, a lot of pain, from poor performance to cryptic bugs, will be alleviated. You will also grasp concepts that are pervasive across the whole field of distributed systems.

The approach

In my understanding there are two ways to learn something: episteme and techne. The former is related to formal knowledge acquisition, through books, structured courses and so on; it is more focused on the what. The latter is related to the craft, the "learning by doing", which is more focused on the how. This is the path we are taking here. We are going to start from a simple problem that any beginner programmer could solve and evolve it until it justifies the architectural design of Spark. We will also understand HDFS (part of what is commonly known as Hadoop) along the way, as it is a platform that plays very well with Spark. We are going to be language-agnostic, so all the code will actually be pseudo-code.

The problem

You have been hired and assigned a simple task: count how many even numbers you have in an array. You will read this array from a CSV file stored in your local file system. Without thinking too much, you'd probably end up with a chunk like the one sketched below.

New requirement 1

Your clients are delighted with the immense success of the previous solution, and people now think they can throw every problem at you, so they ask you to also calculate the average of those even numbers. You certainly know about the SOLID principles, especially the Single Responsibility one, which says that a class or method should have only one reason to change. However, you decide to break the rules and just implement it, as sketched below.

New requirement 2

Since you were so quick, people come up with yet another requirement: also return the sum of all even numbers. At this point you start considering not only the SOLID principles but also the way things are going. You know that when something happens once it does not necessarily mean it is going to happen twice, but when it happens twice, the third occurrence is right around the corner. So you start considering implementing something that is easier to extend, and you remember the concept of encapsulation from Object-Oriented Programming. Also, if you capture the proper abstractions, you might not even have to change your implementation when another requirement comes.
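The code snippets referenced in this section appeared as images in the original article and are not reproduced here. As a stand-in, here is a minimal sketch (in Python rather than the article's pseudo-code; the file layout, with one integer per line, and the function names are mine) of how the naive solution might evolve across the three requirements:

    # A hedged reconstruction, not the author's original code.

    def read_numbers(path):
        # Assumes a single-column CSV with one integer per line.
        with open(path) as f:
            return [int(line.strip()) for line in f if line.strip()]

    def count_even(numbers):
        return len([n for n in numbers if n % 2 == 0])

    # New requirement 1: also return the average of the even numbers.
    def count_and_average_even(numbers):
        evens = [n for n in numbers if n % 2 == 0]
        return len(evens), (sum(evens) / len(evens) if evens else 0)

    # New requirement 2: also return the sum. The method keeps growing,
    # which is what pushes you towards a better abstraction.
    def count_average_sum_even(numbers):
        evens = [n for n in numbers if n % 2 == 0]
        return len(evens), (sum(evens) / len(evens) if evens else 0), sum(evens)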
A set of abstractions to rule them all

You start by considering that if they asked you to count even numbers, it is very likely that they will later ask about the odd ones, or those below or above a value, or those within a range, and so on. So even being an expert in the application of YAGNI (You Ain't Gonna Need It), you decide to implement something that would support all of those cases. You conclude that, in the end, all those operations are related to filtering values from the array, so instead of coding every possible filter, you decide to provide a filter function that takes the filter condition as an argument. Also, in order to simplify the design, you decide to change the state of the object every time an operation is called on it.

Up to a new challenge

You did it. Now you are not only covering all the current requirements but also pre-empting new ones that involve filtering values from the array. If instead of even numbers your clients now want odd ones, the only thing they have to do is pass a new condition to the filter method and they are done. Amazing!

But that new requirement you were waiting for has just arrived: they now need you to process a 3 TB array. You consider giving up. Your own hard disk is only 500 GB, so you'd need 6 machines like yours entirely dedicated to storing that file just to get started. But your clients like you and are also persuasive, and after a well deserved raise, they promise to provide you with not 6 but 30 new machines to solve the problem.

Divide

Having access to 30 new machines, you start to consider how to approach the problem. A single machine will not hold the whole file, so you will have to slice it into smaller chunks that fit each new hard disk. Also, since you have enough resources, you can store the same slice on more than one machine as a backup. Maybe two extra copies per slice, which means you'd be able to find each slice in three different places.

You format the hard disks and start copying the file, and in the process you decide it is a good idea to save all slices under the same canonical parent folder on every machine, and also to prefix each of them with an identifier that indicates which section of the bigger file it belongs to. You also think it is an equally good idea to have another directory, on at least two machines, containing metadata about which directories contain the slices and which machines hold the backups for each slice id.

Since some of your machines will contain only data and some will contain only metadata giving directions and names to things, you decide to call the former data machines and the latter name machines. But since you're actually creating a network, calling the machines nodes is more appropriate, so the data machines become data nodes and the metadata ones become name nodes. Still in the business of naming things, you realize that slice is more associated with cakes and cheese than with data chunks. You are feeling quite inspired and creative, so you give these slices a much better name: partitions. So, whatever your program ends up becoming, it will process the whole file divided into partitions.

After all that naming and deciding, you have something like this: your very first distributed file system.
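The original article illustrates this point with a diagram of that first distributed file system. As a stand-in, here is a minimal sketch (the paths, node names and layout are hypothetical, not the actual HDFS format) of the metadata a name node could keep:

    # A hedged illustration of the name node / data node split, not real HDFS metadata.

    PARTITION_DIR = "/data/bignumbers/partitions"   # same canonical folder on every data node

    name_node_metadata = {
        # partition id -> data nodes holding the primary copy and its two backups
        "part-0000": ["datanode-01", "datanode-07", "datanode-19"],
        "part-0001": ["datanode-02", "datanode-11", "datanode-23"],
        # ... one entry per slice of the 3 TB file
    }

    def locate(partition_id):
        # The program asks the name node where a partition (or its backups) lives.
        return name_node_metadata[partition_id]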
Conquer

Now you have your file divided into partitions across a set of nodes (which we will creatively call a cluster from now on), with backups, and metadata to help your program find every partition and its backups. Since it does not make any sense to move the partitions around, the question becomes: how do you execute the same piece of code on every machine and get one single result? Should you send the whole program to every machine every time you need to run it? Or should you have some pieces of it already available there, so that you only have to send the section written by your clients? The latter sounds better, so you go with it. In this course of action, the number one requirement is to have your ArrayOperator class available on every machine and only send the section specified by the main method.

You also want to run your code as close to the data as possible, so your data nodes will also have to run your program. From this perspective, the nodes not only store data but also perform real work, so you decide to call them workers. Some sections of your code could also run in parallel. For example, in the program above, you can execute average(), sum() and size() in parallel, as they are independent from one another. To allow that, your workers will need to support independent lines of execution, so you decide to convert each worker into some kind of daemon that will spawn new processes to execute tasks independently (in the meantime, you realize that task is a name generic enough to refer to each unit that can be executed independently). And since you're still inspired, you decide to creatively call the processes that execute the tasks executors.

Now all you have to do is design your main method — which has access to your client code — so that it drives the separation of your clients' code into the tasks that will compose the job, asks the name node which data nodes contain each partition of the file, and sends the tasks in parallel to the worker machines, which will be prepared to launch executors that execute the tasks and return the results. Since this piece of code will be driving the whole thing, you, still blessed by the same creativity, decide to call it the driver. Your driver also needs to figure out how to put all the results together; in this case, it needs to add up all the summations received from each worker. But considering the strides made so far, that is going to be a piece of cake. In summary, your driver will be coordinating the tasks that will get the job done. And here is your imagination again: what better name to describe a set of tasks than job? Your beautiful piece of engineering.
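The original article shows the resulting architecture as a diagram. As a stand-in, here is a minimal, self-contained sketch of the driver idea (all names are hypothetical, and the "cluster" is simulated with an in-process thread pool): break the job into one task per partition, look up where each partition lives, run the task there, and combine the partial results.

    # A hedged simulation of driver / tasks / partitions, not a real distributed runtime.

    from concurrent.futures import ThreadPoolExecutor

    name_node = {  # partition id -> nodes holding the primary copy and backups
        "part-0000": ["datanode-01", "datanode-07", "datanode-19"],
        "part-0001": ["datanode-02", "datanode-11", "datanode-23"],
    }
    local_data = {  # stand-in for the partition contents sitting on the data nodes
        "part-0000": [1, 2, 3, 4],
        "part-0001": [5, 6, 7, 8],
    }

    def task(partition_id):
        # The unit of work an executor runs close to the data: sum of even numbers.
        return sum(n for n in local_data[partition_id] if n % 2 == 0)

    def driver():
        # The driver sends one task per partition in parallel and combines the results.
        with ThreadPoolExecutor() as pool:
            partials = list(pool.map(task, name_node))
        return sum(partials)

    print(driver())  # 2 + 4 + 6 + 8 = 20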
Made to break

After some overnighters, you finally have all the pieces running together. What an impressive feat! You test it and everything works as expected. You're anxious for a demo, which your clients, after the considerable investment, are equally keen to see. You start the demo by praising yourself, which is due, and then move on to explain the architecture. Your clients get even more excited. You run the program. Everything breaks apart, because five of your machines went offline: two from kernel panics, two from hard disk failures and one because of an untested feature that ended up in a bug. Everyone starts to cry, except you. Your clients lose faith, but your confidence is rock solid. You actually praise yourself again, because you already have everything figured out. Those backups are not there by coincidence. You promise a new demo in one week. Your clients leave the session quite grumpy and somewhat sad, but you keep it all together. "Less is less, not more", backups circa 2020. You had it right.

Since every partition has two other copies and you have 28 machines (remember, you reserved 2 for the name nodes), you'd be very unlucky if the failure of 5 machines brought your whole cluster down. But how do you take advantage of that redundancy? One thing you're sure about is that it should start on the driver side, since this is the piece that communicates with all nodes. If a node fails, the driver will be the first to notice. Since the driver is already in touch with the name node to find the locations of the partitions when the job starts, maybe it can also ask the name node for the locations of all the copies that were sitting on the failed worker/data node. With that information, it could just resend the tasks to be executed on the copies, and you'd be done! With this approach, you'd have distributed processing of distributed data in a resilient way. You go for it.

A fresh start

You call your clients and ask for a new demo. They pretend to still be frustrated but can barely hide their excitement. They come to see you and enter the place cracking jokes about the last time. You only catch "blue screen" but don't much care about the punchlines. Before starting, you do something shocking: you ask them to randomly shut down two workers/data nodes. They look surprised but get in the mood (it was fun to see them trying to outsmart you by "randomly" choosing machines with a treacherous smile). With two nodes down, you start the demo, which works like a charm. They cry, but this time the tears are different. They cheer you, apologize for not believing, offer you a new raise and, of course, bring a new requirement: instead of numbers, the array will now contain objects with multiple properties. More specifically, the records will contain names, ages and salaries, and they want to know the average age and the maximum salary of people called Felipe. They also want to save the result so that it can be accessed later without reprocessing. You're not surprised at all.

The cherry on top of the cake

At this point you don't have to think much. You have been playing with abstractions all along, so now it is just a matter of moving one more level up. You depart from your previous design and generalize it (a rough sketch of the idea follows this section). With that new design, you can now process any kind of record (this is why you changed its name to GeneralOperator). This is truly amazing! Think about it: you have a system that can read, write and process any kind of dataset in a distributed and resilient way. Speaking even more freely, you can claim you have a framework that supports the processing of resilient and distributed datasets of any kind. You feel the power that lies in your hands, but you think that the core of your magic, the GeneralOperator, does not have a catchy enough name. Or at least it is not very self-explanatory. You don't have better ideas, though, so you just decide to call it the Resilient and Distributed Datasets Reader, Writer and Processor. But that is too long. So maybe an acronym, like RDDRWP? Ouch, even worse. What about just RDD? Easy to pronounce, and it still means something in case someone asks you to translate the acronym. Good enough; you're done.
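The redesign itself appeared as a diagram in the original article. As a stand-in, here is a minimal sketch of the GeneralOperator idea (class and method names are mine, not Spark's API): the same filter-then-aggregate design, lifted from plain numbers to records of any shape, plus a way to save the result.

    # A hedged illustration of the generalized operator, not Spark's RDD implementation.

    import csv

    class GeneralOperator:
        def __init__(self, records):
            self.records = list(records)

        def filter(self, condition):
            # Same trick as before: mutate the state so operations can be chained.
            self.records = [r for r in self.records if condition(r)]
            return self

        def aggregate(self, **aggregations):
            # Run any number of independent aggregations over the current records.
            return {name: fn(self.records) for name, fn in aggregations.items()}

        def save(self, path):
            # Persist the current records so the result can be reused without reprocessing.
            with open(path, "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=self.records[0].keys())
                writer.writeheader()
                writer.writerows(self.records)

    people = [
        {"name": "Felipe", "age": 31, "salary": 5000},
        {"name": "Ana", "age": 28, "salary": 6500},
        {"name": "Felipe", "age": 45, "salary": 7200},
    ]

    felipes = GeneralOperator(people).filter(lambda r: r["name"] == "Felipe")
    print(felipes.aggregate(
        avg_age=lambda rs: sum(r["age"] for r in rs) / len(rs),
        max_salary=lambda rs: max(r["salary"] for r in rs),
    ))
    # {'avg_age': 38.0, 'max_salary': 7200}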
TL;DR

Here is what you have done:

1. You devised an infrastructure to store replicated data partitions in a distributed fashion, composed of data nodes that hold the data and name nodes that hold metadata about it (doesn't this pair deserve a name of its own? What about HDFS?).

2. You created a structure called Resilient and Distributed Datasets (RDDs for short) that can read, write and process data stored in a Hadoop cluster.

3. You architected an infrastructure to execute tasks in parallel on the distributed partitions, through workers that control the execution on a given node and executors that actually execute the tasks.

4. You devised a driver application that breaks a job provided by a client into multiple tasks, talks to the name nodes to find out where the partitions are, and sends the tasks to remote workers.

Man, you rock! But doesn't your creation deserve a nice name? It's been so many ideas, one spark after another. Yeah, Spark! Now that sounds like a name. You can market it like this. (For comparison, a minimal example of the same even-number job written against the real Spark API appears after the further reading links below.)

Scaling Up

This thing you created certainly has a ton of value, but maybe it has a somewhat steep learning curve. On the other hand, the language of choice for crunching data has been, for a long time (maybe too long), Structured Query Language, or SQL. What about bringing that kind of capability into your Spark? Let's chat with the clients.

IMPORTANT

The above is a very simplified view of Spark's components, and its main intention is to offer a general grasp of Spark's architecture. Elements related to Catalyst, scheduling, transformation types, shuffling, plans, resource allocation, specialised API methods and others were intentionally left out in order to keep the text simple. They will be approached in further writings.

FURTHER READING

On Spark: https://data-flair.training/blogs/apache-spark-ecosystem-components/
On RDDs: https://spark.apache.org/docs/latest/rdd-programming-guide.html
On OOP: https://en.wikipedia.org/wiki/Object-oriented_programming
On the S.O.L.I.D principles: https://scotch.io/bar-talk/s-o-l-i-d-the-first-five-principles-of-object-oriented-design
On YAGNI: https://martinfowler.com/bliki/Yagni.html
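As promised above, for comparison with the real thing: the original even-number job expressed against Spark's actual RDD API. This is a minimal local sketch; it assumes the pyspark package is installed and an illustrative numbers.csv with one integer per line.

    # A minimal PySpark sketch of the even-number job; the input file is hypothetical.

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "even-numbers")
    numbers = sc.textFile("numbers.csv").map(lambda line: int(line))
    evens = numbers.filter(lambda n: n % 2 == 0).cache()

    print(evens.count())   # how many even numbers
    print(evens.sum())     # their sum
    print(evens.mean())    # their average

    sc.stop()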
https://towardsdatascience.com/understand-spark-as-if-you-had-designed-it-c9c13db6ac4b
['Felipe Melo']
2020-07-14 11:42:10.756000+00:00
['Distributed Systems', 'Software Development', 'Spark', 'Software Architecture', 'Big Data']
The Cancer Routine.
The Cancer Routine. On seeking normalcy. Photo by Ani Kolleshi on Unsplash There was dropping off the kids, then some cleaning, some work (now done remotely), and then chemo. She brought a wide-brimmed hat with her wherever she went…her skin was now sensitive to the sun…and usually went to Starbucks after her treatment. She played a word game while she waited for the shots. One, two, three. Sometimes they felt cold going in. She had to fill out the same form every day. How was her pain? Her anxiety? How was she sleeping? She lied and said she was fine across the board. Truth was, she still needed to remind her husband to wear his sleep apnea mask. She still had to shake him when he stopped breathing. And what would happen if she was not there to wake him up? What would happen then? So you see, she had to be there. She just had to.
https://medium.com/the-junction/the-cancer-routine-a838f3b365f6
['Lisa Martens']
2019-09-06 17:28:08.073000+00:00
['Short Story', 'Fiction', 'Cancer', 'Health', 'Healing']
Highlights of KDD 2019
The full list of talks with the corresponding slides can be found at the following link.

Applied Data Science — guest talks

Rich Caruana presented, in his talk "Friends don't let friends deploy black-box models", the importance of intelligibility and interpretability of machine learning models. Many machine learning researchers believe that if you train a deep net on enough data and it looks accurate on the test set, it is safe to deploy to production. In some contexts this is true, but in some specific settings it can be extremely risky. In a study he carried out in the nineties on predicting death from pneumonia, the most accurate algorithm was a neural net. They realized that a much simpler rule-based algorithm had learned that asthma reduces the risk of death when pneumonia occurs. Doctors confirmed that asthmatics are high risk, but it was a real pattern in the data (asthmatics notice symptoms sooner, get healthcare faster and receive more aggressive treatment). Eventually, they decided not to use the neural network in the US healthcare system, even though it performed best on test data. Caruana motivated the use, in this context, of GAMs (generalized additive models), which are comparable in accuracy to neural nets on this task but highly interpretable by domain experts. Depending on the application (in this case, decisions on the treatment of ill people), he also proposed manually editing the model based on domain experts' knowledge.

Microsoft and Healthcare

Peter Lee, from Microsoft, described the potential of ML in healthcare, as well as the challenges they are facing, in his talk "The Unreasonable Effectiveness, and Difficulty, of Data in Healthcare". Satya Nadella defined the new strategy of the company, shifting more and more towards healthcare. Microsoft has many partnerships with medical institutions and hospitals, to collect data and provide analytics on top of it. It is natural to use machine learning techniques to create new innovative products in the area. The possibilities are endless, ranging from systems assisting radiologists in delineating tumours, to computer vision systems helping with the diagnosis of tumours, to graph and knowledge extraction from medical papers (4,000 new papers are published every day on PubMed!).

A place where ML brings value where you might not expect it: Merritt Hawkins found in a 2018 survey that 78% of doctors suffered from symptoms of burnout. A particularly stressful task is the medical visit: the doctor has to take accurate notes during the visit while maintaining empathy with the patient. Microsoft is building an assistant that automatically takes notes, so that the doctor can keep eye contact with the patient, and then interprets the text to extract medical concepts. This way, the doctor can review the notes and remains the ultimate owner of the process. The system is there to assist, and it learns from past corrections, progressively reducing the number of interventions the doctor has to make.

A main challenge in the field is data collection, since we lack modern standards for health data. In a consortium including Google, IBM, Oracle and Salesforce, they introduced FHIR, a standard covering data models, API specs to exchange data, and a set of tools and servers to build applications with. The US government is promoting FHIR as the data standard for health. It is a first-class citizen in Azure, and a server is published on GitHub. Retailers (pharmacies) are integrating it.
He closed his talk with the message that ultimately we do not know how good AI is for prediction in medicine. Papers often have statistical and methodological issues, and we lack a real perspective.

Selected papers

Revisiting kd-trees

The paper describes a variation of kd-trees for nearest neighbour search with favourable probabilistic guarantees. The method draws inspiration from random partitioning trees, although it is simpler. The algorithm relies on rotating the data with a random rotation and then building an ordinary axis-aligned kd-tree. The search procedure is a defeatist one that looks for the nearest neighbour in the insertion leaf of the query (without backtracking, with each leaf containing a fixed number of points). Using multiple trees with different rotations reduces the probability of a miss (just as in random projection-based methods such as LSH). The authors also use approximate schemes to perform the random rotations, reducing the computation time required for a search query. The approximate final algorithm runs in O(d log(d) + log(n)), with n the number of points in the database and d the dimension of the data. This is a better search query complexity than vanilla kd-trees, which are logarithmic in n but exponential in d. The figure in the paper illustrates the idea, with the search space of three rotated search trees being the union of the individual search queries. Check the reference paper here: Revisiting kd-trees for nearest neighbor search.

K-multiple means

The authors propose an extension of the k-means problem that relaxes the constraint of belonging to a single cluster; instead, they assume that each point belongs to each centroid with some probability. The optimization problem seeks the solution minimizing the loss over the centroids and the probability vectors. The authors then propose an alternating optimization scheme and an equivalent formulation as a constrained bipartite partitioning problem. The main motivation for the method is that it allows capturing non-convex clusters.

Optimizing Impression Counts for Outdoor Advertising

This paper is an interesting one, as it is the projection of retargeting onto the real, physical world. The authors aim to solve the problem of deploying ads on billboards in order to maximize influence, or impression counts. The problem setup assumes a set of billboards and a set of trajectories, and assumes that the influence of a billboard is a logistic function of the number of times the ad is seen by an individual driver on their journeys. In other words, the influence begins small, then increases rapidly, and then plateaus. Finding an optimal assignment that maximizes the overall influence is NP-hard, so the authors propose a branch-and-bound scheme and use a submodular estimation of the logistic influence function.

Hands-on tutorials

Deep Learning at Scale

In this session, we went through the steps from a single-node deep learning model to distributed model inference and, finally, distributed model training and productionization. We used Keras with a TensorFlow backend for the deep learning model. We leveraged Spark to distribute the computations across the workers and Horovod to distribute the model training. Furthermore, in order to be able to track and reproduce our numerous experiments, we used MLflow, an open-source platform for managing the end-to-end machine learning lifecycle. One of the main advantages of MLflow is that it is library-agnostic.
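As a rough illustration of the experiment tracking just mentioned, here is a minimal MLflow sketch (it assumes the mlflow package; the run name, parameters and metric value are made up, not taken from the tutorial):

    # A hedged sketch of MLflow's tracking API, not the tutorial's actual code.

    import mlflow

    with mlflow.start_run(run_name="keras-baseline"):
        mlflow.log_param("batch_size", 64)
        mlflow.log_param("learning_rate", 1e-3)
        # ... train and evaluate the model here ...
        mlflow.log_metric("val_accuracy", 0.91)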
You can use it with any machine learning or deep learning model. It is even possible to mix different programming languages such as Scala, Python and R. The slides are available here.

Concept to Code: Deep Neural Conversational System

This session showed a few deep learning algorithms for NLP. The repository with notebooks and paper references is available here.

Democratizing & Accelerating AI through Automated Machine Learning

This session gave an introduction to AutoML tools in the Microsoft environment. The reasons for AutoML are: it helps improve models, and the AutoML tools do not need to take ownership of the data they are given access to; it democratizes ML, enabling domain experts and data scientists to focus on business problems, and developers to prototype ML-based products; and finally, it accelerates the work of data scientists, who can leave hyperparameter tuning to automatic, smart tools and manage many more models than they can today. Code for the tutorial is available here on GitHub.

"Tensorized Determinantal Point Process for Basket Completion" — our AI Lab poster session

There is a lot of exciting research happening at Criteo, and we contributed to this year's KDD with our work on Tensorized Determinantal Point Processes for Recommendation. This work focuses on learning to predict the next item that should be added to an online shopping basket. More precisely, the objective of basket completion is to suggest to a user one or more items based on the items already in her cart. Early approaches involved computing a collection of rules in order to provide the recommendation, where all rules satisfying the conditions are selected; this is computationally heavy and does not scale to large catalogues. An alternative approach is based on determinantal point processes, which model co-purchase probability through an item-item similarity kernel matrix and the determinants of its submatrices. The main contribution of this work is a generalization of previous work on DPPs for basket completion using a tensorized approach enhanced by logistic regression. This new model allows us to capture ordered basket completion: we can leverage the information about the order in which items are added to a basket to improve predictive quality.
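As a rough illustration of the underlying DPP idea this work builds on (not the tensorized, logistic-regression-enhanced model from the paper itself): a DPP assigns a set of items a probability proportional to the determinant of the kernel submatrix indexed by that set, so candidate completions can be ranked by the determinant of the learned item kernel restricted to the basket plus each candidate. The kernel values below are made up.

    # A hedged toy example of DPP-style basket-completion scoring, not the paper's model.

    import numpy as np

    L = np.array([            # learned item-item kernel for 4 catalogue items (made up)
        [1.0, 0.8, 0.1, 0.2],
        [0.8, 1.0, 0.1, 0.3],
        [0.1, 0.1, 1.0, 0.6],
        [0.2, 0.3, 0.6, 1.0],
    ])

    basket = [0]               # item already in the cart
    candidates = [1, 2, 3]

    def score(candidate):
        idx = basket + [candidate]
        # det of the kernel restricted to basket + candidate, proportional to P(set)
        return np.linalg.det(L[np.ix_(idx, idx)])

    print(sorted(candidates, key=score, reverse=True))
    # [2, 3, 1]: under this kernel, items least redundant with the basket rank first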
https://medium.com/criteo-labs/highlights-of-kdd-2019-22a90c267c8e
['Criteo Labs']
2019-08-09 08:19:14.190000+00:00
['Machine Learning', 'Data Science', 'AI', 'Kdd']
People Won’t Live up to Your Expectations
People Won’t Live up to Your Expectations But you can choose not to get annoyed Image by Ryan McGuire from Pixabay People act strangely. Yes, it is true. People do act strangely, and they don’t always live up to your expectations. So you called your good friend John, he didn’t pick up the phone and didn’t return your call for one whole week. You started fretting because John always returns your calls in a few hours at the most. You can brood for days and think of all the negative things you want — John doesn’t care, your call isn’t significant enough, what the hell does he think of himself, etc. etc. Ok, this is not what you do (but many people do) when somebody doesn’t return their calls. I’m sure there are many occasions when someone doesn’t live up to your expectations, and you get annoyed. You feel hurt, rejected, angry, upset, or some similar negative emotion. To complete the story, John did call you a few days later and explained that he had to jump onto a plane on an hour’s notice because of his crazy boss.
https://medium.com/one-minute-life-hacks/people-wont-live-up-to-your-expectations-7f70fb8cb3c2
['Sudipto Chanda']
2020-11-05 06:31:17.758000+00:00
['Communication', 'People', 'Emotional Intelligence', 'Behavior', 'Expectations']
Why Cryptocurrency is the Next Operating System for Capitalism
Money won’t last forever — that is guaranteed. It didn’t exist when exchange evolved to become a feature of humanity’s first economic system, nor will it persist when there is no advantage to using it. That time is approaching far quicker than traditionalists care to admit. The reality is that our evolution to a largely cashless society is almost complete. I rarely have money on me physically; I can count on one hand the number of times I have had cash in my wallet in the last 3 years. Paper cash and metallic coins are prehistoric. That is what those who carelessly brand cryptocurrency a bubble fail to comprehend. Money doesn’t care what you think. It is simply a means of exchange. When its utility is replaced by something more efficient, it will become extinct. Right now it is a protected species, with a few purists trying to revive it. Unfortunately, the poachers are pulling down, one by one, each pillar which underpinned the system. Soon it will fall.

With fiat valuations no longer tied to any commodity — with its price being entirely independent and its valuation contingent on what we collectively believe it to be — give me one sincere and serious argument which convinces me fiat isn’t also a bubble. Give me a coherent reason why, if we stopped believing in the value of paper money today, it would be worth anything tomorrow. Without resorting to the argument of historical precedent, the size and scale of central banks, or the promise these institutions have made to maintain a certain valuation, what do you have to argue against it? Fundamentally it is still a question of trust and belief. This forces you to consider that there might be a technological solution which forces a level of trust and belief that is inconceivable in a human-led system. Your argument might still be that cryptocurrency is a bubble, but I raise you the perspective that all money is. It is a product of our beliefs, married to our hope that its value will remain. Ditto stocks, shares and bonds. Money is, and has been for the last 30 years, an intellectual construct centred on humanity’s trust in governance — but trust in these institutions is at a historical low. We don’t trust the reasons they give for the decisions they make, their incentives to act in our best interests, or their ability to deliver a better future.

Cryptocurrency isn’t just the future because that is what a committed band of dreamers would have you believe. It is the future because it is a new operating system for a decentralised world. It is the future because it takes back control of the things we are most dependent on to subsist. It is the future because it is already here, making a difference to how we act. Bitcoin has enabled a whole generation of Venezuelans to have an alternative to crippling inflation left unchecked by corruption. No longer do we have to trust a government to reign over us and carelessly prescribe dangerous monetary policy which we must accept. No longer must we accept situations of austerity forced upon us by government intervention in a financial collapse where there was no punishment for any of the individuals who caused it. No longer is our future dependent on the whims of governments. You can make any argument you like about cryptocurrency being overvalued, about it being manipulated, about it not being a viable medium for high-frequency transactions. That’s fine, but what price do you place on control?
What price would you put on trust programmed into an immutable ledger, where those participating hold the keys to how the platform develops? Unilateral, arbitrary decision-making is replaced by consensus. If you don’t understand the implications of that, you’re not paying attention. If you don’t understand how fiat money works, you’re not qualified to judge whether cryptocurrency will be successful or not, period. Equally, if you don’t understand the mechanisms of mining, the underlying technology that powers cryptocurrency, or the economics of scarcity, you aren’t qualified to tell anyone why it is a revolution. So educate yourself and understand why things are changing; appreciate the technology underpinning the revolution. Then you can positively impact the progress this new system can make. Otherwise you’re just another uneducated quack speculating to make a buck, doing more damage than good.

With all that being true, if you believe in crypto, let the market come to you. Understand that the success of the system is contingent on an unwavering belief that, throughout history, innovation has always disrupted what currently exists. If a system is better, exponentially so, then nothing will ever be able to stand in the way of progress. For the same reason Google destroyed Yahoo, and Facebook vanquished MySpace, Bitcoin and Ethereum will destroy money. In the same way Amazon has brutalised physical retail, cryptocurrency will eradicate banks. If you don’t see this coming, you aren’t paying attention. Let the non-believers have their day, but the moment central banks pushed the button on quantitative easing, they signed the death warrant of the capitalist operating system that monopolised the world. Capitalism isn’t going anywhere, though. Cryptocurrency is simply a more efficient vessel which allows its manifest destiny to be realised. Progress is relentless. Cryptocurrency is simply an upgrade.
https://chrisherd.medium.com/why-cryptocurrency-is-the-next-operating-system-for-capitalism-8120de08a81d
['Chris Herd']
2018-07-06 08:16:11.622000+00:00
['Technology', 'Cryptocurrency', 'Blockchain', 'Future', 'Bitcoin']
Microsoft Build 2020 Expert Q&A: Cloud AI and Machine Learning Resources
Today at //MSBuild I hosted an Expert Q&A: Cloud AI and Machine Learning session on Microsoft Cloud AI and ML technologies. The following is a list of my answers to some of my favorite questions that I received. If you have any other open questions or topics you want to learn more about in the AI/ML space, be sure to comment below. If you are new to Azure, you can get started with a free subscription using the link below.

Questions Table of Contents

1. What is your day to day like as an AI Cloud Advocate?
2. Where can I get started learning AI/ML?
3. Where can I find AI/ML case studies and examples?
4. Do you know of any good resources to get started with Speech Synthesis and Classification?
5. What cool ML/AI work have you seen regarding Covid-19?
6. Is there any shortcut for identifying a model against a particular problem without doing extensive research on the data?
7. For Quality Analysts/Quality Engineers that are new to ML, where would you recommend starting when learning how to validate/interpret models and/or validating the impact of applied ML?
8. Do you have some insights into the Azure ML data labeling tool? Does it support (semi-auto) labeling methods? What about instance segmentation annotation?
9. Is there a template for CI/CD for Azure Custom Vision and Azure QnA Maker?
10. I have a lot of documents in different formats (Word, PDF, image, text); how can I extract the text so that I can process them with AI/ML?
11. Does Microsoft have any resources for classification of aerial or satellite imagery?
12. Do you have any solutions coming to help with scarcity of initial data, especially with computer vision and speech synthesis?
13. How much experience do you have with deploying deep learning models on IoT/hardware devices, and which products utilize this the best right now?
14. How will DeepStream integration with Azure IoT Edge be supported, and are there any upcoming features or modules for IoT Edge?
15. In your opinion, when is it good to use Databricks and when is it good to use Azure ML?

Expert Q&A: Cloud AI and Machine Learning Questions

1. What is your day to day like as an AI Cloud Advocate?

This is a great question. While no two days are exactly the same, as a developer advocate I spend my time divided between the following three areas:

Community — you’ll see me both offline (whether it’s conferences, meetups, and user groups) and online (from forums to open source projects and social media outlets) meeting and collaborating with you here in Israel.

Content — we believe in the power of quality documentation. We listen to you and then directly contribute your feedback to making our documentation as empowering as possible. We also author blog posts, write articles, create videos, and contribute to and create our own open source projects based on your needs.

Engineering — at the end of the day, we are all engineers. We connect with developers in the field, foster strong relationships with teams at Microsoft, and work together to improve the experience of building solutions in the cloud.

I do things like: developing open source code to unblock you and provide inspiration; writing blog posts and articles about topics I believe will help you accomplish more; ensuring you have the best possible documentation available; learning from you at user groups and conferences; sharing learnings and updates with you at meetups and conferences; connecting with you over social media @pythiccoder; taking your feedback back to the correct product teams who can make a difference; and listening and growing, every day. For more information check out my post on the topic.
2. Where can I get started learning AI/ML?

There are a lot of amazing places to get started with AI/ML — if you ask me, almost too many. Here are some of my favorite resources.

3. Where can I find AI/ML case studies and examples?

Microsoft provides some pretty great resources for our AI case studies; I will link to a few of them below.

4. Do you know of any good resources to get started with Speech Synthesis and Classification?

For getting started I would recommend checking out the Azure Speech Cognitive Service offering. We have some really nice getting-started demos for creating speech-to-text, text-to-speech and custom voice applications. To learn more, check out our documentation below.

5. What cool ML/AI work have you seen regarding Covid-19?

There have been some really creative AI/ML solutions in this space. I’ll link to a couple of really interesting ones that Microsoft has been directly involved in. Also check out this awesome dataset put together by AI2.

6. Is there any shortcut for identifying a model against a particular problem without doing extensive research on the data?

It is critical to get to know your data in order to build quality AI models; however, in order to get to know your data better, it is worth experimenting with different architectures. For classical machine learning problems I recommend taking a look at the cheat sheet below. But this is where, in my opinion, the Azure AutoML service shines. With very little configuration and no code you can quickly test many model architectures to find a strong starting baseline for your data. For more information check out our documentation. (A rough sketch of what this can look like in code appears at the end of this post.)

7. For Quality Analysts/Quality Engineers that are new to ML, where would you recommend starting when learning how to validate/interpret models and/or validating the impact of applied ML?

This is a great question; I actually gave a presentation on this very topic just a few days ago (see below). While interpretability in machine learning is still an open area of research, I’d recommend taking a look at the Microsoft Interpretability toolkit as a strong starting point. Check out the documentation and getting-started notebooks below to learn more. I also have a good post on SHAP values that I’d recommend as well. One last area that is worth keeping an eye on is data drift; we have some great tooling to help with this as well.

8. Do you have some insights into the Azure ML data labeling tool? Does it support (semi-auto) labeling methods? What about instance segmentation annotation?

With the release of Azure ML, Microsoft provided a new labeling tool that currently supports image and bounding box labeling. The ML-assisted labeling page lets you trigger automatic machine learning models to accelerate the labeling task and is supported in the Enterprise Azure ML tier. If you need instance segmentation or video annotation support, I suggest looking at the open source Microsoft VoTT tool.

9. Is there a template for CI/CD for Azure Custom Vision and Azure QnA Maker?

Yes, check out the resources below.

10. I have a lot of documents in different formats (Word, PDF, image, text); how can I extract the text so that I can process them with AI/ML?

This is a great question. Microsoft put together a reference architecture for exactly this scenario using Cognitive Search. Check out the documentation and JFK example below. Also check out the Azure Form Recognizer service.

11. Does Microsoft have any resources for classification of aerial or satellite imagery?

Yes, check out the resources below.
12. Do you have any solutions coming to help with scarcity of initial data, especially with computer vision and speech synthesis?

There are a couple of different approaches to handling scarcity of data, from transfer learning and learning from simulation to data augmentation and one-shot/meta-learning approaches. I will provide some resources below.

13. How much experience do you have with deploying deep learning models on IoT/hardware devices, and which products utilize this the best right now?

I have a good deal of experience with IoT edge computing. The Cognitive Services container instances, Azure Machine Learning and the Custom Vision service are great examples of AI on the edge.

14. How will DeepStream integration with Azure IoT Edge be supported, and are there any upcoming features or modules for IoT Edge?

Great question; take a look at the documentation and the great demonstration video here from my colleague Paul DeCarlo.

15. In your opinion, when is it good to use Databricks and when is it good to use Azure ML?

This is a really good question that deserves a more in-depth follow-up post. As a general rule of thumb, in my experience it makes sense to use Databricks for data processing and Azure ML for model development and deployment. Stay tuned for a more in-depth follow-up post.

Next Steps: Check out our AI Content Dashboard of 30 amazing original AI content posts!

About the Author

Aaron (Ari) Bornstein is an AI researcher with a passion for history, engaging with new technologies and computational medicine. As an Open Source Engineer at Microsoft’s Cloud Developer Advocacy team, he collaborates with the Israeli Hi-Tech Community to solve real-world problems with game-changing technologies that are then documented, open sourced, and shared with the rest of the world.
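Returning to question 6 above, where the answer points at Azure AutoML: here is a minimal sketch of what kicking off such a sweep can look like with the Azure ML Python SDK (v1-style AutoMLConfig; the workspace configuration, dataset name and column name below are placeholders, not taken from the original post).

    # A hedged sketch of an Azure AutoML run; assumes the v1 azureml-sdk is installed
    # and a workspace config.json is present. Names below are placeholders.

    from azureml.core import Workspace, Experiment
    from azureml.train.automl import AutoMLConfig

    ws = Workspace.from_config()                      # reads the local config.json
    dataset = ws.datasets["my-training-data"]         # a registered tabular dataset

    automl_config = AutoMLConfig(
        task="classification",
        primary_metric="AUC_weighted",
        training_data=dataset,
        label_column_name="label",
        n_cross_validations=5,
    )

    run = Experiment(ws, "automl-baseline").submit(automl_config)
    run.wait_for_completion(show_output=True)
    best_run, fitted_model = run.get_output()         # strongest baseline found by the sweep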
https://medium.com/microsoftazure/microsoft-build-2020-expert-q-a-cloud-ai-and-machine-learning-resources-7c7ac2485989
['Aaron', 'Ari']
2020-05-20 19:27:04.046000+00:00
['Machine Learning', 'Data Science', 'AI', 'Microsoft', 'Azure']
How John Burn-Murdoch’s Influential Dataviz Helped The World Understand Coronavirus
One hears the word ‘unprecedented’ a lot these days. It’s as if the language we use to explain our world is breaking down and superlatives just aren’t able to keep up with the new reality brought to us by the coronavirus pandemic. Living through the past month has brought an avalanche of hard to answer questions, as we’re limited to data that is sparse and difficult to analyze. Many have noted the importance of data visualization in helping people attempt to make sense of it all with a few data journalists contributing significant impact. One of them is John Burn-Murdoch of the Financial Times (FT), whose breakout moment came on March 11 when his first log scale chart comparing the trajectory of infection rates between countries helped millions of people around the world understand that the pandemic was a trend just beginning in England and the USA. John continues to analyze and report on the coronavirus pandemic every day, so Nightingale is incredibly thankful that he took a few moments out of his busy day to speak about his experiences. Jason Forrest: How did your coverage of the coronavirus begin? John Burn-Murdoch: This is the biggest story as a data journalist that I’ve ever encountered, this is just a story that when this comes into the news you just know, this is our story. There’s no process where we’re assigned this, it’s just something that happens and as a journalist, you say “Right, I’m on it.” JF: How did you determine that these would be the charts you would use to tell this story? JBM: It’s always difficult with cases like this to try and retrace one’s steps and work out exactly where different bits of ideas came from. Because what happens with stuff like this, especially in data visualization (and especially for a massive story like this) is that everything comes from somewhere. John’s first tweet about his Coronavirus coverage, March 11th, 2020 There were multiple conversations going on when I made the first iteration of this chart that I’ve now made about 50 times. One of them was a conversation with one of our reporters who was interested in comparing the Spanish and UK daily numbers of cases and deaths in relation to Italy. I think it was on the 10th of March if I remember correctly. My response to answer that question was to see if I can get the whole table's worth of data to her to show what we’re looking at. In that initial email, I made a couple of versions of the charts that we are now doing every day. One was on a linear axis and one on the log axis and these were both just my way of saying, ‘here’s the data on about five countries and here’s what it shows in terms of the inevitability,’ that all countries were heading down the same road as Italy. So the first version (0.0) was just a very rough ggplot R graphic. Then I think it was the day after that there was a conversation in our morning news conference with all the editors where someone said, “Should we have the FT have a definitive chart which says where everyone is in relation to Italy that tries to answer that question of whether we’re all heading in that direction.” As soon as I heard that sort of appetite and interest from the editors, I thought “oh great, this is where I take the chart I made yesterday and do a more finessed version of it.” That was how it started coming to shape within the FT. But I’m fairly sure that I’d seen all of the constituent parts of the chart floating around in other people’s work. 
I know that someone called John Minton who’d been producing a lot of graphics already by this point, which used the idea of a starting point of 100 confirmed cases. I’ve encountered that in a lot of conversations with epidemiologists, that on a chart like this you can’t really start at the first case because you can’t say that a country has a fully-fledged outbreak when it only has one case. So if you want to make comparisons between countries, you’d want the chart anchored to an epidemiologically similar starting point of 100 cases. Then as for the log axis — as I said, the original chart that I did was in one log and one linear — it was immediately clear from doing it that the log one was going to be the more useful. Otherwise, you had loads of countries that just got completely lost and squashed into the bottom left-hand corner. In addition, that I’ve explained on Twitter in particular, when you’re dealing with a virus, a virus spreads exponentially not linearly, so this is just a rational path to take. John’s tweet on log scale (and several hundreds of comments later ) March 11, 2020 The only other point on a log scale is that, for me, it’s about the amount of visual bandwidth that you have to deal with when making a chart. When you use a linear scale to plot exponential growth a lot of that visual bandwidth is taken up by that increasingly sloping curve. If we know that all of these curves are going to be exponential curves then using a linear scale with the linear y-axis would use a lot of the visual space just to show that all of these countries are seeing cases at an exponential rate. That feels like a waste of that space since we know that’s the case anyway. By using the log scale you use your visual bandwidth to see the slope of the different rates of growth. You’re not wasting so much space looking at all countries just making the same curve. JF: It’s a very easy chart to “see” — to come up with a “so what” from. How was the initial reaction to it when you started to share it amongst the FT staff and then when it was published. Was it an immediate success? JBM: That’s the thing, this one more so than anything else I’ve done, this one has set the direction of my last month’s work. The first version that went out, it was the cumulative number of cases over time. That one was making the point that pretty much all the western countries look to be on the same trajectory as Italy. Of course, at that time, Italy was seen as this Ground Zero of the whole thing. So this chart was making a very emotionally powerful point that this country that we all agreed was going through something pretty terrible was just a few days down the same path that all of our countries were also setting out on. I think it was that message as much as the choice of geometry which really seem to cut through. Listen to this section of the interview: on the importance of dataviz in communicating the emotional story That first version got this huge engagement, a huge outpouring of responses and everyone all seemed to be endorsing the message in the chart itself. So from then on it just felt like the point was to drive the inevitability of coronavirus and how countries are going down the same path as Italy, it would be natural for us to update this from one day to the next. We already had, at the FT, a coronavirus tracker page. I think it was just a map and a data table, but we already had something which was a daily updating page showing the key figures on coronavirus. 
It was easy to just say ‘oh, well, we now have a new chart out there which seems to be resonating very strongly, so we’ll simply add that to the page here’. The rest is history since then it’s just been a case of making daily updates to this chart adding additional sister charts and iterating on the existing designs. JF: You said that you’ve made basically 50 changes to the chart over the last month. How has your relationship with the actual chart or the work changed in that time? JBM: I love the wording of that question. I think the idea of me having a relationship with the charts and the work is a very apt term to use. Because especially during this period of lockdown, where we will work from home, it does feel like a very intimate part of my daily routine. But yeah, it’s changed a lot because at the outset this was a case of an urgent need to keep updating this graphic which people felt strongly about, and to keep emphasizing the point of countries following Italy’s outbreak trajectory. But there are many ways in which the dynamic of the relationship has changed. So part of it is that the story itself has changed. You’ll see that we now lead with these slightly different charts, which are looking at daily numbers of infections and deaths, rather than the cumulative totals because the story really feels like it’s moving on. Instead of saying which country has the most cases or deaths, or how many days behind country X is country Y — it’s now a question of when does each country reach its peak in terms of infections or deaths and this question of when might it be possible for restrictions to ease. But the way that these charts are perceived has obviously changed a lot as well. For me, this is something completely unique, in that we now have a piece of visual journalism, which people far beyond the immediate readership of the FT have come to see as a part of the way that they experience news coverage of coronavirus. March 19th, 2020 There was The New York Times needle in the 2016 election, of course, and they have brought that back at times for other elections so it’s become synonymous with election graphics and you know, it feels like there’s something similar happening here. Where a few nights ago, I only published a subset of the charts I usually do, because it already got extremely late in the night when I was getting the stuff out there and I was inundated with people saying “Where are the other charts?” We get hundreds of emails and messages every day now exclusively about these charts — that’s emails directly to me, that’s emails to our team address, and a lot of my Twitter direct messages as well. So I think the biggest change for me is to be presiding over something which is now genuinely part of the lives of thousands and thousands of people as well as the typical demands according to the editors at the FT, there’s now this additional secondary set of stakeholders. And we feel — even if only on a personal level more than a professional one — that there are people here now who have come to expect something, who have very interesting and valuable feedback in comments multiple times a day or other things we could be doing with these charts. As the person who sticks pixel to paper, as it were, that’s just a very different dynamic to anything I’ve had before. In the past, I’ve made plenty of charts that have been about important emotional subject matter, but they tend to be a case of ‘publish and forget’ — you put something out there and it’s done. 
Whereas, here we might want to make small changes once a week or even once a day. It’s a much more involved process. Listen to this section of the interview: on collaborating with the general public. Just as an aside, I think once things settle down, this is going to end up being an amazing resource in terms of public engagement with and response to data visualization. We’re now sitting on well over a thousand bits of written feedback to our chart and people saying what they like and what they don’t like, and have we considered X and have we considered Y. So yeah, on day one I thought I was just making a standalone chart that made an important point, and on day 30 — whatever we’re on — this is now a product with a huge audience of people, both at the FT and in the wider public, who have come to rely on it and to have expectations of it. Just as an aside, the page on FT.com that these charts sit in is by far the most read page of the FT website ever. I think we’ve certainly got enough ‘credit’ built up here that anytime anyone in the future questions ‘what is the value of data visualization in the newsroom’, we’ve got several million answers here. JF: That’s amazing, totally amazing. So you said that you’re getting just an overwhelming amount of input and response from the broader global audience. What kind of toll has that taken on your personal life? JBM: It’s an excellent question. The nice thing about this has been that it has really been a brilliant incentive to streamline loads of workflows. I’ll answer this in several ways, but that has been quite nice. The first versions of this chart were made in ggplot and then tidied up in Illustrator. I ported it over quite quickly into D3, because that allowed us to make different styles and sizes of the graphics quickly, but a big breakthrough was a couple of weeks ago when I managed to clear the last couple of hurdles, and now the entire thing is done in D3 in the browser without me needing to do any fine-tuning. So in terms of workflow, saving 15 minutes here and 20 minutes there, it’s been really nice to keep doing that fine-tuning and that streamlining. It was a necessity to do that, because it was otherwise a huge load on me in terms of the amount of time I have to stand this up every day. One of the particular challenges is that when countries publish their daily updates in the data, essentially everything comes in the evening time over here. Today, for example, if I take a look just now, we’re still waiting on data from France, Germany, and Turkey in terms of our major countries of interest. Of course, in the US we only have partial data so far. For me to update the charts so that they are timely — so that the numbers in these charts match the numbers in our news stories — I really need to be updating in the evening. So typically I start work updating the charts at 6 p.m. This just illustrates how much of a nerd I am, but last night I timed every sub-task of that chart updating process so I can try to see where I can make efficiency savings next; this was just after a conversation with my other half, where we were saying how I can get back some of my evenings. Starting about 6 p.m., it ends up being about 1 hour 45 minutes, which is about 1/3 obtaining and cleaning up the related data, 1/3 making sure the charts are rendering as they should and moving some annotations around, and 1/3 putting those into our news story and writing any commentary placed alongside it.
It’s been a big thing to deal with because the fact that I do that from 6 to 8 p.m. doesn’t change the fact that I still have my main day job. We have our main team meeting at 10 a.m. and the rest of the day is doing all of our huge amounts of other daily coronavirus reporting. These charts and this page is just one tiny bit of what our team is involved in at the FT. At the moment, I’m involved in other stories looking at the impact of different countries lockdown; at the impact of coronavirus on the environment and on pollution; and all sorts of other bits and pieces and like tracking the gradual reopening of China. Then around 6:00 p.m. when our team slack channel is a stream of hand waving emojis 🖐, that’s the time when our second shift begins for myself and a couple of others. It’s been a really big deal in that sense. The way I described it to someone else was it’s a bit like doing an election night shift every day for months. So you’ve got some excitement and adrenaline and intensity of covering a fast-moving data-rich story, but you can’t then have a ‘lie-in’ the next day because you’re going to do it again. So yeah, it’s been very intense. Listen to this section of the interview: on the intensity of daily COVID-19 reporting I’ve loved to be as involved in this very intense dataviz story as I have been, but there are plenty of other things that, in both work and in my own life, I’d love to be doing more of that had to be put on the back burner for the last month. We’re still very much going to be covering this story as a dataviz team, but the idea is to make it more of an automated and routine process and reduce the need for this sort of curated stuff that I’ve been heavily involved in over the month to date. JF: Yeah, that makes total sense. It’s a natural progression into something that’s just more sustainable. Right? I would presume every bit of pipeline you have has got to be well automated and documented. So you’ve had the time to get the technology right… JBM: …on that last point, that’s something we’re really ramping-up on at the moment. We now have a little box on that page on the FT website where we track when we’ve made changes to the charts and explain them. The idea has been to turn this more explicitly into a product and to make this something that readers see explanations as to why we’ve made changes, and how we’re doing things, and what’s changed where. The evolution of this from a chart into a product — that’s been part of what has made this so unique about this. JF: Last question: do you have any anecdotes you can share about interacting with so many people about this? JBM: I’ve exchanged hundreds of messages on this in the last couple of weeks, and there have been some good ones. But one that is less funny and more an interesting example of the sensitivities of the strength of opinion around this stuff: one of our sort of flagship charts in this series shows daily new numbers of deaths attributed to coronavirus and the headline states “Every day brings more deaths than the last in the UK and the US”. Every time you’re making a statement about a country that may be portrayed in a negative light (which some people might take that to be in this case) you’re going to get people who have a bit of a “fan” reaction to that and say, ‘you put down my country’. 
Listen to this section of the interview: on communicating the complexity of the data The reason this is a particularly complicated issue, in this case, is that the data we’re dealing with here that we get from countries with coronavirus is extremely patchy in terms of its quality and noise from day-to-day. That’s true to such an extent that I personally don’t think that the daily numbers we see and hear on the news every day are actually worth the paper they’re written on. Because from what we know now — even in countries like the UK and the US — the daily numbers and the fluctuations in those numbers of deaths that we get have as much to do with the idiosyncrasies of how deaths are reported as they are to the actual spread of the virus. April 7, 2020 For example, every Sunday and Monday in the UK (and I believe this is true of the US as well) the number of deaths reported falls. So every Sunday or Monday — this is a true behind-the-scenes story — I’ll get some readers emailing in and saying your headline says that deaths are rising day by day in my country but on Sunday/Monday the numbers are going down, so the headline’s wrong. And I’ll respond to them by saying, “look, it’s complicated. We believe that the nature of the daily data here implies a false level of precision, and therefore we’re using a seven-day moving average on our charts, which is the better reflection of the sort of week-to-week way that these viruses spread. You’re never going to have a peak day, it’s really a peak period. Our headline reflects that seven-day moving average is still trending upwards”. I’ll still get a reply then from those people saying “I understand what you’re saying, but the fact is your headline is still objectively incorrect.” Then like clockwork, every Tuesday, the backlog of reporting of deaths that have built up over the weekend is released and you get a huge spike in the numbers. It’s just one of those weird things where the decisions we make in these charts— things like using the seven-day moving average — which is all to try and be more honest and to try to portray a more truthful and meaningful picture of what’s actually happening out there. But people will really focus on those tiny details, even when we know and have explained that those details are actually highly misleading and not really reflective of what’s happening with the virus. But those are the details that people focus on and write a letter to the editor objecting in the strongest terms to what we’ve done — even though what we’ve done is explicitly going towards presenting a more honest picture of what’s happening. The point here is that this is an issue that people are really spending huge amounts of time poring over and people feel very strongly about. So we get into huge lengthy debates now with hundreds of people about what type of rolling average we should be using, or whether we can truly claim that something was going up every day when it shows the trending up every week and what type of log scale we should be using and that kind of thing. So yeah, the overall point is that the strength of feeling of it is huge and that’s and it’s incredible to be at the heart of that. But it can be pretty stressful at times. Here is the chart for April 13, 2020:
https://medium.com/nightingale/how-john-burn-murdochs-influential-dataviz-helped-the-world-understand-coronavirus-6cb4a09795ae
['Jason Forrest']
2020-04-15 16:13:39.311000+00:00
['Data Visualization', 'Covid 19', 'Data Journalism', 'Journalism', 'Interviews']
Dragon Quest XI: When Home No Longer Exists
Dragon Quest XI: When Home No Longer Exists What does it mean to lose your safe haven? There is palpable tension as the hero crosses the grassy, sun-baked fields. A multitude of colourful flowers dot the landscape, and there is not a cloud to be seen overhead. Apart from escaping their dogged pursuers, it is a perfectly beautiful day. The hero’s mind is whirling. He was the chosen one. The Luminary, the world’s sole hope against the Darkness. Reason dictated the kingdom he was born and raised in would welcome with open arms. Instead, his King had declared him the spawn of Darkness, and even worse — soldiers were sent marching to his hometown, Cobblestone. Cobblestone. Source: Dragon Quest Wiki. Hurry. Flowers and grass are tromped on underfoot as the hero and his new party member traverse the expansive terrain that seems to stretch on forever. Hurry. There is a limit to how fast they can go without a steed. Travel is slow despite the burning urge to get there already. Then finally, he spots the familiar well-worn path leading to his home. Cobblestone awaits. He braces himself as he emerges on the other side of the dark and long tunnel. Like air rushing into throat once strangled by rope, the hero is relieved of the tension that has built up during his arduous journey home. His town is peaceful. Children frolic in patches of grass and sunshine while adults mill about, trading words over some matter or the other. The hero nods to himself and goes up to speak to a familiar face. Standing nearby is a portly man sporting a bald, who greets him cheerfully. “Welcome to Cobblestone! Not a lot goes on here, but I hope you enjoy your visit.” Fast forward a number of discoveries later, and the hero comes to a realization. What he’s experiencing is a memory of the past, a far flung moment of passed time. His vision goes fuzzy. The next time he sweeps his gaze across his town, he is met with smoking ruins devoid of any semblance of life. Wrecked houses, smouldering and crumbled stone bridges, bare stumps where stood magnificent old trees. The wind rushes past, sweeping through a town deadened of its characteristic cheer and laughter. There is not a single person to welcome him. Framed by the sun, the hero stands, alone. It has been a long while since I’ve played a traditional JRPG, but after so many years, I’d like to think I’m familiar with the story beats their scripts tend to follow. When I watched the opening cutscenes of Dragon Quest: XI, I had a pretty strong inkling that the destruction of the hero’s precious home was in the cards. After all, the main character wouldn’t be a hero if he didn’t save the world despite his own suffering. In essence, there shouldn’t be anything that surprised me. But somehow, when I stood on the hill overlooking the ruined village, the sheer desolation that sight inspired in me was shocking. My theory for why it managed to provoke such a strong, emotional reaction is that because I’d been shown a vision of the past, of better times, it made the truth hit harder. I’ve seen this tactic mirrored in other games or just media in general; giving a person false hope is a one way ticket to completely decimate their will. Even if I’m wrong, which I probably am, I’m still in awe of how the game managed to reel me in. At that point, I was only four hours into the game. Considering how long it takes to follow a JRPG plot to completion, I shouldn’t have been so invested to the point where I felt as if it was me who had lost the only place I belonged. 
In that moment, I wasn’t a third party leering at a character on a screen. During those few seconds, the revelation that the place I grew up in no longer existed sent my heart plummeting.
https://medium.com/super-jump/dragon-quest-xi-when-home-no-longer-exists-1f5b249c8d58
['Tow M.Y']
2020-12-24 10:21:04.349000+00:00
['Features', 'Gaming', 'Self', 'Psychology', 'Art']
There are no safe states.
It doesn’t matter if you’re in a swing state, or a “safe” state — blue or red. No state is safe from the ideas espoused by Donald Trump — ideas that are endorsed by the KKK — ideas that are rejected by real Republicans — ideas that are laced with cynicism, fear, bigotry, and hate. Vote. Please. Proclaim that America has no place for these ideas*. * It is crucial that you vote tomorrow. After the election, we’ll figure out what motivated many Americans to overlook these ideas and cast their vote for a flawed human anyway. We need to fix what allowed this. And we will.
https://medium.com/alttext/there-are-no-safe-states-3044f4dbe529
['Ben Edwards']
2017-04-23 04:55:40.318000+00:00
['Society', 'Trump', 'Politics']
API Profiling at Pinterest
Anika Mukherji | API Intern When I walked into Pinterest on the first day of my internship and learned I’d be focusing on profiling the API Gateway service — the core backend service of the Pinterest product — my only thought was “What is profiling?”. Profiling is often shoved aside as a side project or lower priority concern, and it’s often not taught in college CS courses. Essentially, writing services come first, and profiling them is a distant second (if it happens at all). Moreover, profiling is not always seen as a precursor to optimization, which can result in wasted time optimizing code that doesn’t significantly affect performance in production. That being said, profiling is a critical step in the software development process in order to create a truly performant system. Before my arrival at Pinterest, a basic webapp had been built to accompany a regularly scheduled CPU profiling job (and consequent flamegraph generation) for all of our Node and Python services. My primary goal for the summer was to expand this tool to support our API Gateway service while making it flexible for use in other services in the future. The ultimate goal is to use it for profiling of all Pinterest services. The three arms of functionality I worked on were memory profiling, endpoint operational cost calculation and dead code detection. Solving for increased optimization I primarily worked on optimizations, including expanding resource tracking and profiling tooling. In terms of performance in production, our evaluation of resource utilization for the API Gateway service was limited to CPU usage. There was a need for a holistic assessment of which parts of the API Gateway service were performant, and which parts of the codebase needed quality improvement. With that information, developer resources could be allocated to the least performant endpoints, and we could improve the overall process of optimization. What exactly is profiling? Software profiling is a type of dynamic programming analysis that aims to facilitate optimization by collecting statistics associated with execution of the software. Common profiling measurements include CPU, memory, and frequency of function calls. Essentially, profiling scripts are executed in tandem with another executing program for a certain duration of time (or for the entirety of the script being profiled), and they output a profile (i.e. a summary) of relevant statistics afterwards. The recorded metrics can then be used to evaluate and analyze how the program behaves. There are two common types of approaches to profiling: Event-Based Profiling: Track all occurrences of certain events (such as function calls, returns, and thrown exceptions) Deterministic (more accurate) Heavy overhead (slower, more likely to impact profiled process) Example Python packages include: cProfile/profile, pstats, line_profiler Statistical Profiling: Sample data by probing call stack periodically Non-deterministic (less accurate, though you can mitigate through stochastic noise reduction) Low overhead (faster, less likely to impact profiled process) Example Python packages include: vmprof, tracemalloc, statprof, pyflame We opted for statistical profiling for our production machines because of the lower overhead. If the job is run regularly for long periods of time, accuracy increases without increasing response latency due to heavy overhead. While profiling is important, it should not harm production performance. 
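To make the event-based option above concrete, here is a minimal, self-contained sketch using Python's built-in cProfile and pstats; the handle_request function and its workload are invented for illustration and are not part of Pinterest's actual service.

```python
import cProfile
import io
import pstats


def handle_request(n=10_000):
    """Stand-in for an API handler; the workload is purely illustrative."""
    return sum(i * i for i in range(n))


profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    handle_request()
profiler.disable()

# Summarize the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

In production, the statistical profilers named above (vmprof, pyflame, and friends) would replace this, precisely because cProfile's per-call bookkeeping adds the heavy overhead described in the list.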
Memory profiling TL;DR: tracemalloc to track memory blocks Our API Gateway service is written in Python, so the most apparent solution was to use an existing Python package to gather memory stack traces. Python 3’s tracemalloc package was the most appealing, with one large problem: we still use Python2.7. While our Python 3 migration is underway, it’ll be many months until that project is completed. This incompatibility forced us to patch and distribute our own copy of Python, in addition to using the backported pytracemalloc package. Just another reminder that updating to the latest version of Python is ideal for both performance and utilization of latest tooling. The basic approach here was to run a script on a remote node (one of our API production hosts) that sends signals 15 minutes apart that trigger signal handlers (functions registered to execute when a certain signal arrives). Signals were a fitting choice because they don’t add any overhead when not running the signal handler and because we don’t want to enable profiling all the time on all the machines. (Even a 0.1% overhead at scale is expensive.) We decided to overload the SIGRTMIN+N signals to start and stop the profiling job on a received signal. The stack traces are collected and saved to a temporary file within /tmp/. Another script is run on the remote host to produce a flamegraph, and then all files are saved to a persistent datastore and sourced by our Profiler webapp. Operational cost calculations TL;DR: Finding the expensive endpoints (and their owners!) The calculation of endpoint operational costs required the combination of two sorts of data: resource utilization data, and request metrics. Our resource utilization information is given in two units — USD and instance hours — and is provided on a monthly basis. Using request counts, the relative popularity of each endpoint can be calculated. This popularity is used as a weight to divide total resources used by the API Gateway service. Since most of our request data is in units of requests per minute, I decided to break cost down to that time scale as well. As each API endpoint has an owner, average operational costs for a given owning team is also calculated. The ability to identify the most costly endpoints, as well as the engineers/teams to whom they belong, encourages ownership and proactive monitoring of their performance. It’s important to note these calculated metrics aren’t absolute sources of truth; their significance instead lies in how they compare relative to one another. The ability to identify unperformant outliers is the main objective, not quantifying exact monetary impact. This approach is naïve in that it doesn’t properly account for CPU time, or make distinctions between costly handlers (endpoint-specific functions in the API Gateway) and costly requests. For example, requests can trigger asynchronous tasks which aren’t necessarily attributed to the API Gateway Service, the same endpoints with different parameters can have different cost structures (as can different handlers) and downstream service processing isn’t associated with a given API request. We could address these deficiencies by creating an integration test rig that runs a set of known production-like requests and measures CPU time spent relative to the baseline for the application. We could further maximize impact of this by incorporating it into our continuous integration process, giving developers key insights into the impacts of their code changes. 
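Going back to the memory-profiling setup described above, a rough sketch of a signal-driven tracemalloc handler pair might look like the following. The signal offsets, stack depth, and output path are assumptions made for the example; the post does not show Pinterest's actual code.

```python
import signal
import tracemalloc

SNAPSHOT_PATH = "/tmp/memory_snapshot.out"  # hypothetical output location


def start_profiling(signum, frame):
    # Begin recording allocations, keeping 25 stack frames per memory block.
    tracemalloc.start(25)


def stop_profiling(signum, frame):
    # Dump the snapshot to disk so a separate job can build the flamegraph.
    snapshot = tracemalloc.take_snapshot()
    snapshot.dump(SNAPSHOT_PATH)
    tracemalloc.stop()


# Overload two real-time signals to toggle profiling; the exact offsets are made up.
signal.signal(signal.SIGRTMIN + 3, start_profiling)
signal.signal(signal.SIGRTMIN + 4, stop_profiling)
```

Because nothing runs until a signal arrives, the handlers add no overhead in the steady state, which matches the reasoning above for choosing signals in the first place.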
Additionally tracing via a given Request-ID would enable more holistic coverage for our overall architecture. Dead code detection TL;DR: Uncovering abandoned code (and deleting it) Unused and unowned code is a problem. Old experiments, old tests, old files, etc. can rapidly clutter repositories and binaries if they’re able to fly under the radar. Discovering which lines of which files are never executed in production is both useful and easily actionable. In pursuit of identifying this dead code hiding in our service, I employed a standard Python test coverage tool. While the primary use of a test coverage tool is to discover which lines of code are missed by unit and integration tests, you can run a job to run the same tool on a randomly selected production machine to see what lines of code are “missed”. As the job is run several times a day, the lines that are commonly missed in all runs for a given day are surfaced. An annotated version of the file is shown for easier visualization of which lines are “dead” and who to contact to see if the code should be removed. This is a fairly naïve implementation to begin detecting dead code. The codebase in question may be used by multiple services and jobs, and determining the dead code in common among all of them is a complex problem that still needs to be more carefully addressed. It’s also fairly expensive as it uses an event based collection technique rather than statistical sampling. What’s next TL;DR: it’s all for optimization I don’t have much experience with “big data”, but after building these tools and starting to run the jobs regularly, I was bombarded with large influxes of data. My gut reaction was to shove it all into the webapp and leave developers to figure out what was useful (more is better, right?). However, I quickly learned that while this data made sense to me as someone who spent weeks working on generating it, it was opaque and arguably impenetrable for engineers who hadn’t used flamegraphs or lacked perspective into operational cost. With respect to utility, simply disseminating the raw data was far from optimal. It came to my attention the new features I created would most likely have the following primary uses, so these were the key insights to be surfaced: Finding files and functions that use the most memory Engineers finding how expensive their API endpoints are Starting point for cleaning dead code out of our repositories Finding the most popular and costly parts of the API To spread awareness of this tool around the company, I held an engineering-wide workshop with flamegraph-reading and other profiling analysis activities. In just two days, two different potential optimizations (single line changes) were found and realized, saving the company a significant amount of annual spend. At a surface level, these use cases provide a wide range of insights on resource utilization by the API and what parts of the codebase are used less in production. The birds eye view, however, is much more exciting and motivating. Not all parts of the codebase are created equally — some functions will be executed a much greater number of times than others. Spending too many hours on rarely executed endpoints is a poor use of developer resources and is the worst possible strategy to optimize performance; in other words, blind optimization is not really optimization.
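For the dead-code side, the idea of running a coverage tool against production traffic can be sketched with the standard coverage package; serve_traffic_for and the data-file path are placeholders rather than the real Pinterest job.

```python
import time

import coverage


def serve_traffic_for(seconds):
    """Placeholder for the real serving loop; here it just sleeps."""
    time.sleep(seconds)


cov = coverage.Coverage(data_file="/tmp/prod_coverage.dat")  # hypothetical path
cov.start()

serve_traffic_for(seconds=5)  # a production window would be far longer

cov.stop()
cov.save()

# Lines reported as "missing" across many repeated runs are dead-code candidates.
cov.report(show_missing=True)
```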
https://medium.com/pinterest-engineering/api-profiling-at-pinterest-6fa9333b4961
['Pinterest Engineering']
2018-11-15 20:29:37.694000+00:00
['Engineering', 'Internships', 'API']
An 18 Minute Routine for Success
The routine Morning — 5 minutes Taking 5 minutes in the morning to plan your day is incredibly important. Bregman recommends sitting down with your to-do list and figuring out what you can do that will make today highly successful. What progress can you make towards your goals? Where can you schedule the things on your list? Plan what you are going to do today. “Effectively navigating a day is the same as effectively navigating down a rocky precipice on a mountain bike. We need to look ahead. Plan the route. And then follow through.” For me, I follow a more Stoic practice — I visualize the day ahead and consider what challenges I may face. I then decide how I will deal with those challenges. I also write one thing I am grateful for, my intention for how I want to live this day, and the one thing I need to do today to be able to say that this day was a success. These 5 minutes in the morning can look like anything. The point is to take some time to reflect on the day ahead and plan your direction. Each hour — 1 minute x 8 Set an alarm for each hour. When your alarm goes off, take one minute to reflect. Take a deep breath Reflect on the productivity of the last hour — did you do what you wanted to get done? How was your focus? How are you feeling? Set an intention for what you are going to do for the next hour. Bregman says, “Reconnect with the outcome you’re trying to achieve, not just the things you’re doing.” “Manage your day hour by hour. Don’t let the hours manage you.” — Peter Bregman Evening — 5 minutes Take 5 minutes at the end of your work day to reflect on how the day went. Bregman suggests asking yourself the following questions: How did the day go? What success did I experience? What challenges did I endure? What did I learn today? About myself? About others? What do I plan to do — differently or the same — tomorrow? Whom did I interact with? Anyone I need to update? Thank? Ask a question of? Share feedback with? Again, mine looks slightly different. I recount the top 3 wins of the day, what I am grateful for, what I am looking forward to tomorrow, and my one thing for tomorrow. If I’m feeling particularly motivated, I will run through the questions Seneca asked himself each evening: What did I do badly? What did I do well? How can I be better tomorrow and what tasks were left undone? Figure out what will be most valuable to you in your evening reflection. This will be highly personal and will depend on what your work day looks like and what your goals are. Again, the point is to reflect and ensure you’re on the right path, as well as setting you up for a productive day tomorrow.
https://medium.com/change-your-mind/an-18-minute-routine-for-success-4c7d0baedc4c
['Ashley Richmond']
2020-12-24 05:46:40.451000+00:00
['Health', 'Lifestyle', 'Self Improvement', 'Advice', 'Habits']
Achieving Accessibility Across Your React Web Apps
Web Content Accessibility Guidelines (WCAG)
WCAG — developed by a W3C initiative called the Web Accessibility Initiative (WAI) — provides developers with technical specifications and guidelines on how to increase web usability for people with disabilities. The latest WCAG version is 2.1, with 2.2 scheduled to be released in early 2021. A working draft of version 2.2 can be seen here.
There are 4 main principles to the WCAG:
Perceivable — all user interface components must be presented to users in a way they can receive, i.e. it can't be invisible to all of their senses.
Operable — users must be able to operate the interface.
Understandable — the user must understand how to use the interface. Functionalities must not be beyond their understanding.
Robust — it must be robust enough that as technologies advance, the content remains accessible to users.
According to WCAG, if any of the above isn't true, users with disabilities will not be able to access the web. Each of the above principles has sub-guidelines, which in turn have success criteria. Below I've listed a few of the guidelines found within each principle. A full list can be found on WCAG's website.
Perceivable Guidelines
Text Alternatives — e.g. alt text in images. Any content that isn't text should have a text alternative.
Distinguishable — separating foreground and background colour to make content easy to read.
Operable Guidelines
Keyboard Accessible — make all functionality available from the keyboard.
Navigable — provide ways for users to navigate, find content, and determine where they are.
Understandable Guidelines
Readable — make the content readable and understandable.
Input Assistance — help users avoid and correct mistakes.
Robust Guidelines
Compatible — maximise compatibility with current and future user agents.
Each guideline has testable success criteria based on three levels of conformance: A (lowest), AA, and AAA (highest). Check out the complete list on their website to find detailed success criteria.
https://medium.com/swlh/mastering-accessibility-across-your-react-apps-f3f628a5f1fd
['Natalie Mclaren']
2020-09-22 20:47:55.296000+00:00
['React', 'Programming', 'JavaScript', 'Accessibility', 'Testing']
Madhur Jaffrey on Indian Cooking and Staying Creative As You Age
Madhur Jaffrey on Indian Cooking and Staying Creative As You Age The writer and actress explains how she stays inspired There are many ways to live a healthy life. The Health Diaries is a weekly series about the habits that keep notable people living well. Madhur Jaffrey is one of the most prolifically creative people of her time. The 85-year-old Indian-born actress has also worked as a food and travel writer, and TV personality. She’s written over a dozen cookbooks (her newest, Madhur Jaffrey’s Instantly Indian Cookbook: Modern and Classic Recipes for the Instant Pot, is available now). She had her own food-based TV program in the U.K. in the ’80s, and she was named Commander of the Order of the British Empire around the same time, thanks to her focus on bringing the U.S., U.K., and India together over culinary commonalities. And then there’s her new rap video, “Nani,” which she recently released with up-and-coming artist Zohran Mamdani, a young Queens MC who calls himself Mr. Cardamom. And that’s just a small portion of her resume. This week, Jaffrey talks with Elemental about her irregular schedule, her mind-over-matter philosophy on life, and the way she’s maintained her creativity over the years. Because of my theater and writing background, and because my husband is a musician, we don’t have any kind of routine for getting up or going to bed. We stay up late and we get up when we get up, unless there’s something to do. If someone says “I’ll call you at 9 o’clock,” I demure and say, “Can you find a later time?” I’m definitely not early to bed and early to rise. I’m more late to bed, late to rise. I struggle out of bed every morning. When you’re 85, you have every disease known to man. I have umpteen pills to take, and I check my weight and take my medicines, whatever they are. Then I wait because I’m not supposed to eat immediately after taking one of them, so I go to my computer and look at my email. When I was a kid, I ate white toast with cheese and tomato every day. But now I have to be careful, so I usually eat some kind of grainy toast with no cheese on it, although the tomato stays. I’ll take one sardine and put it on my toast, spread it out, then put a layer of Indian pickles to give it some jolt. Then I put my slice of tomato on top, with salt, pepper, and lemon juice, and now my sandwich is ready. I have it with either decaf coffee or Indian tea. I can’t have caffeinated coffee. I take some supplements and vitamins, too. I take probiotics and then coenzyme Q10 for the heart. I also have berries every day, usually a combination of blueberries, raspberries, and golden berries. It’s usually 11 a.m. or 12 p.m. by the time I eat my breakfast. Then around 4 p.m., I get a little bit hungry again. Dinner is also a small meal for us. By then, it’s usually quite late, 9:30 or 10 p.m. I usually eat a piece of fish grilled or cooked in an Indian way, with veggies on the side. I try not to have starch at night. But if I’m entertaining for dinner, then I go crazy. I drive myself to the point where I am exhausted because I’ve cooked so much, but then it’s wonderful to recover in bed. After dinner, we watch some television. I like to watch drama shows and my husband likes to watch news shows. Either that, or we go out. And sometimes, we are traveling. We go to bed when we’re tired. I exercise regularly but I also fall off the wagon regularly. At my age, when you get sick, it takes a while to recover! And exercise is so easy to postpone. 
When I do exercise, I have an hour-long routine that includes walking, weights, balance exercises, strengthening my legs by sitting and standing on a chair, exercises for my back, all of that kind of stuff. I love to drink whiskey but I am trying not to. That’s a big change for me. I look at it longingly but I have it seldom, now. The same goes for wine. I like to have a glass of wine every night for dinner but I’m doing that less and less. Sometimes I’ll cut my wine with seltzer just so I can have a glass in my hand to feel like I’m actively drinking it — but then I’m not taking in the sugar and calories. Basically, I cheat! To me, “healthy” means I have the energy to do what I want to do. At my age, your sight and hearing are affected and things begin to fade. You have to be alert to everything. So when I get up and have the energy to do what I want, I can plow through a lot of my physical problems. I just ignore them and I keep going. I think if you fall into the physical pain and say “Oh my god, I am suffering,” you will never get out of bed. So I tell myself to shut up. My mind is stronger than the rest of me. I’m heavily promoting my new cookbook right now and also talking about my new rap video. The rap video happened because I met this young man and I thought, I will help him. I do this for students at NYU a lot, especially when they are making their first film. If they sound young, intelligent, and enthusiastic, and their first work has potential, I will help them because I want to encourage them. We shot this video in two days and they were very happy days; all of his friends were so full of energy, happiness, drive, and ambition. I would have been one of them if I was their age. The hardest part for me was the rapping. I had to stay with the beat and it was very fast. I studied the lyrics for months before we shot. I also used all of my own clothes, jewelry, and shoes in the video. I had an idea about how this woman would be: goddess-like, but also in her mind she was young and free as a bird. My new cookbook is about how to make Indian food with an Instant pot. The last thing you want is for Indian rice to be al dente; if you do that, you’ve made terrible rice. So in this book, I say: Follow my recipe, hit the right buttons, and it’ll come out perfect. You will never make bad rice. This is also true for the other recipes, which include mushrooms and beans and delicious things that people can make easily. I stay busy because I love working and I love new ideas. Whatever I’m writing about or working on, I’m doing it because I love the idea. I will do any project if I feel enthusiastic about it and I find that the energy just comes quite naturally to me. It’s all mental energy; physically I’m exhausted, but mentally I have a lot there. I can create out of nothing and I hope this is true for the rest of my life.
https://elemental.medium.com/madhur-jaffrey-on-indian-cooking-and-staying-creative-as-you-age-733d3d639b38
['Jenni Gritters']
2019-05-28 12:57:35.855000+00:00
['Madhur Jaffrey', 'Food', 'Health', 'The Health Diaries', 'Lifestyle']
Ghostface Killah: Sensitive Genius Poet, Also Backwards-Thinking Sexist
Here is video of an interview Wu-Tang Clan MC Ghostface Killah did last week on Angela Yee’s radio show on the Shade 45 satellite station. It is a good example of how a hugely talented artist can be engaging and enjoyable to watch, even as he espouses horribly repellent views on matters of ethics or politics. Here is a guy who can render the tenderness and intimacy of maternal love in twenty exquisitely chosen words: “But I remember this/Moms would lick her fingertips/to wipe the cold out my eye before school with her spit.” (From 1997’s “All That I Got Is You.”) And here is a guy who can say: “That’s what’s wrong with our people and shit, they put our women equal to men. We’re not equal… Don’t put me equal. I was here first!” That’s a bummer, no way around it. It’s like my favorite rapper is a member of the Promise Keepers. Still, I love his music. I play his records all the time, I play his records for my kid. And I could listen to him talk all day. What does this mean? I don’t know for sure. Something about art being apolitical, I guess. Anyway, the clip is nothing if not interesting. (Note: it is full of curse words. Ghost is a renowned vulgarian. He once told me, at the end of an interview I did with him in the 90s, “Don’t take the curse words out and shit, those are my favorite shits.”)
https://medium.com/the-awl/ghostface-killah-sensitive-genius-poet-also-backwards-thinking-sexist-6fc30db0a64d
['Dave Bry']
2016-05-12 23:50:19.955000+00:00
['Music', 'Rap', 'Ghostface Killah']
Feature Selection and Dimensionality Reduction Using Covariance Matrix Plot
This article will discuss how the covariance matrix plot can be used for feature selection and dimensionality reduction.
Why are feature selection and dimensionality reduction important?
A machine learning algorithm (such as classification, clustering or regression) uses a training dataset to determine weight factors that can be applied to unseen data for predictive purposes. Before implementing a machine learning algorithm, it is necessary to select only the relevant features in the training dataset. The process of transforming a dataset in order to select only the relevant features necessary for training is called dimensionality reduction. Feature selection and dimensionality reduction are important for three main reasons:
Prevents Overfitting: A high-dimensional dataset with too many features can sometimes lead to overfitting (the model captures both real and random effects).
Simplicity: An over-complex model with too many features can be hard to interpret, especially when features are correlated with each other.
Computational Efficiency: A model trained on a lower-dimensional dataset is computationally efficient (execution of the algorithm requires less computational time).
Dimensionality reduction, therefore, plays a crucial role in data preprocessing. We will illustrate the process of feature selection and dimensionality reduction with the covariance matrix plot using the cruise ship dataset cruise_ship_info.csv.
Suppose we want to build a regression model to predict cruise ship crew size based on the following features: ['age', 'tonnage', 'passengers', 'length', 'cabins', 'passenger_density']. Our model can be expressed as y = Xw, where X is the feature matrix and w the vector of weights to be learned during training. The question we would like to address is the following: out of the 6 features ['age', 'tonnage', 'passengers', 'length', 'cabins', 'passenger_density'], which are the most important? We will determine which features are needed for training the model. The dataset and Jupyter notebook file for this article can be downloaded from this repository: https://github.com/bot13956/ML_Model_for_Predicting_Ships_Crew_Size.
1. Import Necessary Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
2. Read dataset and display columns
df = pd.read_csv("cruise_ship_info.csv")
df.head()
3. Calculate basic statistics of the data
df.describe()
4. Generate Pairplot
cols = ['Age', 'Tonnage', 'passengers', 'length', 'cabins','passenger_density','crew']
sns.pairplot(df[cols], size=2.0)
We observe from the pair plots that the target variable 'crew' correlates well with 4 predictor variables, namely 'tonnage', 'passengers', 'length', and 'cabins'. To quantify the degree of correlation, we calculate the covariance matrix.
5. Variable selection for predicting "crew" size
5 (a) Calculation of the covariance matrix
cols = ['Age', 'Tonnage', 'passengers', 'length', 'cabins','passenger_density','crew']
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
X_std = stdsc.fit_transform(df[cols].iloc[:,range(0,7)].values)
cov_mat = np.cov(X_std.T)
plt.figure(figsize=(10,10))
sns.set(font_scale=1.5)
hm = sns.heatmap(cov_mat,
                 cbar=True,
                 annot=True,
                 square=True,
                 fmt='.2f',
                 annot_kws={'size': 12},
                 cmap='coolwarm',
                 yticklabels=cols,
                 xticklabels=cols)
plt.title('Covariance matrix showing correlation coefficients', size=18)
plt.tight_layout()
plt.show()
5 (b) Selecting important variables (columns)
From the covariance matrix plot above, if we assume that important features have a correlation coefficient of 0.6 or greater, then we see that the "crew" variable correlates strongly with 4 predictor variables: "tonnage", "passengers", "length", and "cabins".
cols_selected = ['Tonnage', 'passengers', 'length', 'cabins','crew']
df[cols_selected].head()
In summary, we've shown how a covariance matrix plot can be used for variable selection and dimensionality reduction. We've reduced the original dimension from 6 to 4. Other advanced methods for feature selection and dimensionality reduction are Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Lasso Regression, and Ridge Regression. Find out more by clicking on the following links:
Training a Machine Learning Model on a Dataset with Highly-Correlated Features
Machine Learning: Dimensionality Reduction via Principal Component Analysis
Machine Learning: Dimensionality Reduction via Linear Discriminant Analysis
Building a Machine Learning Recommendation Model from Scratch
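If you would rather apply the 0.6 cut-off programmatically instead of reading it off the heatmap, a small sketch along these lines works with the same dataframe; the column names are assumed to match the article's cruise ship dataset, and since the Pearson correlation is unaffected by standardization, the raw columns suffice here.

```python
import pandas as pd

df = pd.read_csv("cruise_ship_info.csv")
cols = ['Age', 'Tonnage', 'passengers', 'length',
        'cabins', 'passenger_density', 'crew']

# Correlation of every feature with the target variable 'crew'.
corr_with_crew = df[cols].corr()['crew'].drop('crew')

# Keep features whose absolute correlation with 'crew' is at least 0.6.
selected = corr_with_crew[corr_with_crew.abs() >= 0.6].index.tolist()
print(selected)  # expected, per the heatmap: Tonnage, passengers, length, cabins
```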
https://medium.com/towards-artificial-intelligence/feature-selection-and-dimensionality-reduction-using-covariance-matrix-plot-b4c7498abd07
['Benjamin Obi Tayo Ph.D.']
2020-06-11 16:53:51.395000+00:00
['Machine Learning', 'Data Science', 'Data Visualization', 'Python', 'Feature Selection']
Top 5 Weird and Beautiful Arabic Expressions That Don’t Exist in English
Na’eeman — نعيماً Meaning: Congratulations on your cleanliness! Literal Meaning: Doesn’t have a literal meaning in and of itself, but this word is derived from the word naa’em, which means, ‘something like paradise.’ Kind of like saying congratulations, you are reborn, or coming out of heaven. You’re fresh! Usage: When someone comes out of a shower, cuts their hair/beard, or does their makeup/nails. I love this expression. It makes me feel warm and fuzzy inside. As a child, there wasn’t a single shower I had where my parents and family wouldn’t tell me Naeeman. Naturally, now my life feels missing when people don’t say it to me. So I made sure my partner learnt it and every time I finish a shower, I stare him down until he tells me Naeeeman! Every time I see someone with a new haircut, I’m just dying to tell them ‘Naeeeeeeeeman!’ But I just can’t, because it doesn’t exist in our mutually shared language. I went through a phase where I tried to translate it and tell people congratulations for their haircuts and quickly realized it was coming across as super weird. Walaw! — ولو Meaning: It’s nothing/don’t mention it! or Are you serious? Although it is translatable, the emotion behind the words is almost worlds apart. I use this a lot when people thank me for things which I think they shouldn’t thank me for, but with a rising intonation which signals something like ‘how dare you, thank me for this menial task!’ Also commonly used to express frustration at someone when they do something that you can’t believe they’ve done or said, also implying that the person is a little stupid — in a fun way, of course. Sahtain! — صحتين Literal Meaning: two healths! Meaning: Bon appétit Although it is similar to bon appétit which is still not English but understood in principle, it is distinctive in than it can be used anytime surrounding a meal: before, during, and after. So, rather than saying enjoy the meal, it is more about, ‘May you digest in good health’ or something along those lines. It makes it quite flexible. Another difference is that it can be used to people that you are not serving, and strangers too. So if you’re walking down the street and see someone mid-eating, you can say ‘Sahtain!’ I think if I did that in French people would think I’m a weirdo. Yaeteek(i) el Aafye — يعطيك العافيه (also: Yateek(i) alf Aafye) Meaning: May (God) end your life well or give you 1000 good endings (Note: The God in the sentence here is implied and not always used, although sometimes people do insert ‘God.’ ) This is an incredibly compassionate statement, letting people of all backgrounds and rankings know that not only do you recognize their hard work, but you truly appreciate it and wish them the best for it. It’s a truly beautiful way to say hello and or goodbye to someone working in a corner shop, delivering you a package, or just generally working while you have the liberty of receiving their hard work. To my humble knowledge, nothing comes close to this statement in the English language. I truly miss using it with people and letting them know that I am thankful for their work, however simple it may be. To’borne/Yo’borne — يؤبرني Meaning: You’re SO cute! Literal meaning: Bury me! This is a strange one for sure. Usually, parents and grandparents use this with younger family members, but when I tried repeating it back to my grandma as a child she’d gently scold me, telling me that I couldn’t use this phrase on her. 
I didn’t understand why until much later on when I realized the literal meaning of the expression and what it stemmed from. The idea behind this is, I love you so much I don’t want to see you die and bury you, but I’d rather you bury me. It can also be used between lovers or in a lighter way between friends. Weirdly morbid, but beautiful? I guess? A’kabalek — عقبالك *bonus number 6, just because it’s a bit funny. Meaning: you next! Literal meaning: you next Usually said to unmarried women, men, and their parents when they are at a wedding or engagement party. At its core, it is a deeply compassionate statement: ‘I wish you also find happiness in marriage and in love.’ Of course, when used by your lousy great-aunt, it can mean ‘you’re becoming an old spinster, let’s get moving.’ It’s the statement I dreaded most at weddings and engagement parties since I had no plans of getting married. But, I guess all their good wishes worked, as I’m now happily engaged too.
https://medium.com/an-idea/top-5-weird-and-beautiful-arabic-expressions-that-dont-exist-in-english-6931df474129
['Yara Zeitoun']
2020-12-22 15:13:17.079000+00:00
['Culture', 'Language', 'Travel', 'Society', 'Love']
Exploratory Data Analysis — What is it and why is it so important? (Part 1/2)
Components of EDA
To me, there are three main components of exploring data:
Understanding your variables
Cleaning your dataset
Analyzing relationships between variables
In this article, we'll take a look at the first two components.
1. Understanding Your Variables
You don't know what you don't know. And if you don't know what you don't know, then how are you supposed to know whether your insights make sense or not? You won't. To give an example, I was exploring data provided by the NFL (data here) to see if I could discover any insights regarding variables that increase the likelihood of injury. One insight that I got was that Linebackers accumulated more than eight times as many injuries as Tight Ends. However, I had no idea what the difference between a Linebacker and a Tight End was, and because of this, I didn't know if my insights made sense or not. Sure, I can Google what the differences between the two are, but I won't always be able to rely on Google! Now you can see why understanding your data is so important. Let's see how we can do this in practice.
As an example, I used the same dataset that I used to create my first Random Forest model, the Used Car Dataset here. First, I imported all of the libraries that I knew I'd need for my analysis and conducted some preliminary analyses.
#Import Libraries
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
import seaborn as sns
#Understanding my variables
df.shape
df.head()
df.columns
.shape returns the number of rows by the number of columns for my dataset. My output was (525839, 22), meaning the dataset has 525839 rows and 22 columns.
.head() returns the first 5 rows of my dataset. This is useful if you want to see some example values for each variable.
.columns returns the names of all of the columns in the dataset.
Once I knew all of the variables in the dataset, I wanted to get a better understanding of the different values for each variable.
df.nunique(axis=0)
df.describe().apply(lambda s: s.apply(lambda x: format(x, 'f')))
.nunique(axis=0) returns the number of unique values for each variable. .describe() summarizes the count, mean, standard deviation, min, and max for numeric variables. The code that follows it simply formats each row to the regular format and suppresses scientific notation (see here).
Immediately, I noticed an issue with price, year, and odometer. For example, the minimum and maximum price are $0.00 and $3,048,344,231.00 respectively. You'll see how I dealt with this in the next section. I still wanted to get a better understanding of my discrete variables.
df.condition.unique()
Using .unique(), I took a look at my discrete variables, including 'condition'. You can see that there are many synonyms of each other, like 'excellent' and 'like new'. While this isn't the greatest example, there will be some instances where it's ideal to clump together different words. For example, if you were analyzing weather patterns, you may want to reclassify 'cloudy', 'grey', 'cloudy with a chance of rain', and 'mostly cloudy' simply as 'cloudy'.
Later you'll see that I end up omitting this column due to having too many null values, but if you wanted to re-classify the condition values, you could use the code below:
# Reclassify condition column
def clean_condition(row):
    good = ['good','fair']
    excellent = ['excellent','like new']
    if row.condition in good:
        return 'good'
    if row.condition in excellent:
        return 'excellent'
    return row.condition

# Clean dataframe
def clean_df(df):
    df_cleaned = df.copy()
    df_cleaned['condition'] = df_cleaned.apply(lambda row: clean_condition(row), axis=1)
    return df_cleaned

# Get df with reclassified 'condition' column
df_cleaned = clean_df(df)
print(df_cleaned.condition.unique())
And you can see that the values have been re-classified below.
2. Cleaning your dataset
You now know how to reclassify discrete data if needed, but there are a number of things that still need to be looked at.
a. Removing Redundant Variables
First I got rid of variables that I thought were redundant. These include url, image_url, and city_url.
df_cleaned = df_cleaned.copy().drop(['url','image_url','city_url'], axis=1)
b. Variable Selection
Next, I wanted to get rid of any columns that had too many null values. Thanks to my friend, Richie, I used the following code to remove any columns that had 40% or more of their data as null values. Depending on the situation, I may want to increase or decrease the threshold. The remaining columns are shown below.
NA_val = df_cleaned.isna().sum()

def na_filter(na, threshold=.4):  # only select variables that pass the threshold
    col_pass = []
    for i in na.keys():
        if na[i]/df_cleaned.shape[0] < threshold:
            col_pass.append(i)
    return col_pass

df_cleaned = df_cleaned[na_filter(NA_val)]
df_cleaned.columns
c. Removing Outliers
Revisiting the issue previously addressed, I set parameters for price, year, and odometer to remove any values outside of the set boundaries. In this case, I used my intuition to determine the parameters — I'm sure there are methods to determine the optimal boundaries, but I haven't looked into it yet!
df_cleaned = df_cleaned[df_cleaned['price'].between(999.99, 99999.00)]
df_cleaned = df_cleaned[df_cleaned['year'] > 1990]
df_cleaned = df_cleaned[df_cleaned['odometer'] < 899999.00]
df_cleaned.describe().apply(lambda s: s.apply(lambda x: format(x, 'f')))
You can see that the minimum and maximum values have changed in the results below.
d. Removing Rows with Null Values
Lastly, I used .dropna(axis=0) to remove any rows with null values. After the code below, I went from 371982 to 208765 rows.
df_cleaned = df_cleaned.dropna(axis=0)
df_cleaned.shape
And that's it for now! In the second part, we'll cover exploring the relationship between variables through visualizations. (Click here for part 2.) You can see my Kaggle Notebook here.
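On the outlier step, one common alternative to hand-picked boundaries is the 1.5 × IQR rule. A minimal sketch, assuming the same df_cleaned dataframe as above; this is offered as an option, not what the article actually did:

```python
def iqr_bounds(series, k=1.5):
    """Return bounds k * IQR beyond the first and third quartiles."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr


low, high = iqr_bounds(df_cleaned['price'])
df_cleaned = df_cleaned[df_cleaned['price'].between(low, high)]
```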
https://medium.com/swlh/exploratory-data-analysis-what-is-it-and-why-is-it-so-important-part-1-2-240d58a89695
['Terence Shin']
2020-01-02 23:53:25.711000+00:00
['Analytics', 'Data', 'Data Science', 'Exploratory Data Analysis', 'Machine Learning']
Hierarchical Clustering Explained
Hierarchical Clustering Explained
Unsupervised Algorithms | Data Series | Episode 8.3
In the previous episode we took a look at the popular clustering technique called K-means clustering. In this episode we will take a look at another widely used clustering technique called Hierarchical clustering. Please consider watching this video if any section of this article is unclear: Video Link
What is Hierarchical clustering?
Hierarchical clustering is an unsupervised machine learning algorithm whose job is to find clusters within data. We can then use the clusters identified by the algorithm to predict which group or cluster a new observation belongs to.
Overview
Similar to K-means clustering, Hierarchical clustering takes data and finds clusters. What differs, however, is the algorithm used to identify those clusters. At the end we will discuss the relative advantages and disadvantages of Hierarchical clustering compared to K-means clustering.
The Algorithm
Step 1: Treat each data point as a cluster. Calculate the Euclidean distance between every pair of clusters.
Step 2: Using the distance matrix, identify the two clusters closest to each other.
Step 3: Link these clusters together to form a new cluster.
Step 4: Calculate the distance between each cluster's mean point and every other cluster's mean point.
Step 5: Repeat steps 2 to 4 until a single cluster is formed. Once one cluster has been formed, we stop.
Step 6: Cut our dendrogram at a chosen point to give the clusters identified by our algorithm at that point. The cut point is usually chosen visually, and we are done!
Linkage Methods
Note that in step 4 we calculated the distance between clusters (known as dissimilarity) based on the centroid, or mean point, of each cluster, and then linked the clusters with the smallest such dissimilarity. This is known as Centroid Linkage. There are, however, other methods to link clusters. Ward's method, for example, shares the same objective function as the K-means clustering discussed in the previous episode.
Considerations of Hierarchical clustering
Advantages
You do not have to manually select the number of clusters K.
Easy to implement.
The dendrogram can give useful information.
No need for many random centroid initializations as with K-means clustering.
Disadvantages
With large datasets it is difficult to determine a suitable number of clusters from the dendrogram.
Computationally expensive, and slower than K-means clustering.
Sensitive to outliers.
In the next episode we will be implementing Hierarchical clustering on a real-life dataset using Python.
Summary
Hierarchical clustering is an unsupervised machine learning algorithm that is used to cluster data into groups.
The algorithm works by linking clusters, using a certain linkage method (mean, complete, single, Ward's method, etc.) to form new clusters.
The above process produces a dendrogram where we can see the linkages of each cluster. We can cut our dendrogram at a certain point to obtain suitable clusters from our data.
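As a preview of that implementation, here is a minimal sketch using SciPy; the toy data, the Ward linkage, and the two-cluster cut are assumptions chosen only to illustrate the steps above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

# Two toy blobs in 2-D; replace with your own feature matrix.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, size=(20, 2)),
               rng.normal(3, 0.5, size=(20, 2))])

# Steps 1-5: build the hierarchy (Ward linkage here; 'centroid' etc. also work).
Z = linkage(X, method='ward')

# Step 6: draw the dendrogram to choose a cut point visually...
dendrogram(Z)
plt.show()

# ...then cut the tree into a chosen number of flat clusters.
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)
```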
https://medium.com/swlh/hierarchical-clustering-explained-with-example-63b2fe9060dd
['Mazen Ahmed']
2020-12-14 16:16:26.996000+00:00
['Machine Learning', 'Hierarchical Clustering', 'Clustering', 'Data Science', 'Statistics']
Train/Test Split and Cross Validation in Python
Hi everyone! After my last post on linear regression in Python, I thought it would only be natural to write a post about Train/Test Split and Cross Validation. As usual, I am going to give a short overview of the topic and then give an example of implementing it in Python. These are two rather important concepts in data science and data analysis and are used as tools to prevent (or at least minimize) overfitting. I'll explain what that is — when we're using a statistical model (like linear regression, for example), we usually fit the model on a training set in order to make predictions on data the model wasn't trained on (general data). Overfitting means that we've fit the model too closely to the training data. It will all make sense pretty soon, I promise!
What is Overfitting/Underfitting a Model?
As mentioned, in statistics and machine learning we usually split our data into two subsets: training data and testing data (and sometimes into three: train, validate and test). We fit our model on the train data in order to make predictions on the test data. When we do that, one of two things might happen: we overfit our model or we underfit our model. We don't want either of these things to happen, because they affect the predictability of our model — we might end up using a model that has lower accuracy and/or is ungeneralized (meaning you can't generalize your predictions to other data). Let's see what under- and overfitting actually mean:
Overfitting
Overfitting means that the model we trained has learned the training data "too well" and is now, well, fit too closely to the training dataset. This usually happens when the model is too complex (i.e. too many features/variables compared to the number of observations). This model will be very accurate on the training data but will probably not be very accurate on untrained or new data. That is because the model is not generalized (or not AS generalized), meaning you can't generalize the results or make any inferences on other data, which is, ultimately, what you are trying to do. Basically, when this happens, the model learns or describes the "noise" in the training data instead of the actual relationships between variables in the data. This noise, obviously, isn't part of any new dataset, and cannot be applied to it.
Underfitting
In contrast to overfitting, when a model is underfitted, it means that the model does not fit the training data and therefore misses the trends in the data. It also means the model cannot be generalized to new data. As you probably guessed (or figured out!), this is usually the result of a very simple model (not enough predictors/independent variables). It could also happen when, for example, we fit a linear model (like linear regression) to data that is not linear. It almost goes without saying that this model will have poor predictive ability (even on the training data) and can't be generalized to other data.
An example of overfitting, underfitting and a model that's "just right!"
It is worth noting that underfitting is not as prevalent as overfitting. Nevertheless, we want to avoid both of those problems in data analysis. You might say we are trying to find the middle ground between under- and overfitting our model. As you will see, train/test split and cross validation help to avoid overfitting more than underfitting. Let's dive into both of them!
Train/Test Split
As I said before, the data we use is usually split into training data and test data.
The training set contains a known output and the model learns on this data in order to be generalized to other data later on. We have the test dataset (or subset) in order to test our model’s prediction on this subset. Train/Test Split Let’s see how to do this in Python. We’ll do this using the Scikit-Learn library and specifically the train_test_split method. We’ll start with importing the necessary libraries: import pandas as pd from sklearn import datasets, linear_model from sklearn.model_selection import train_test_split from matplotlib import pyplot as plt Let’s quickly go over the libraries I’ve imported: Pandas — to load the data file as a Pandas data frame and analyze the data. If you want to read more on Pandas, feel free to check out my post! — to load the data file as a Pandas data frame and analyze the data. If you want to read more on Pandas, feel free to check out my post! From Sklearn , I’ve imported the datasets module, so I can load a sample dataset, and the linear_model, so I can run a linear regression , I’ve imported the datasets module, so I can load a sample dataset, and the linear_model, so I can run a linear regression From Sklearn, sub-library model_selection , I’ve imported the train_test_split so I can, well, split to training and test sets sub-library , I’ve imported the train_test_split so I can, well, split to training and test sets From Matplotlib I’ve imported pyplot in order to plot graphs of the data OK, all set! Let’s load in the diabetes dataset, turn it into a data frame and define the columns’ names: # Load the Diabetes dataset columns = “age sex bmi map tc ldl hdl tch ltg glu”.split() # Declare the columns names diabetes = datasets.load_diabetes() # Call the diabetes dataset from sklearn df = pd.DataFrame(diabetes.data, columns=columns) # load the dataset as a pandas data frame y = diabetes.target # define the target variable (dependent variable) as y Now we can use the train_test_split function in order to make the split. The test_size=0.2 inside the function indicates the percentage of the data that should be held over for testing. It’s usually around 80/20 or 70/30. # create training and testing vars X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2) print X_train.shape, y_train.shape print X_test.shape, y_test.shape (353, 10) (353,) (89, 10) (89,) Now we’ll fit the model on the training data: # fit a model lm = linear_model.LinearRegression() model = lm.fit(X_train, y_train) predictions = lm.predict(X_test) As you can see, we’re fitting the model on the training data and trying to predict the test data. Let’s see what (some of) the predictions are: predictions[0:5] array([ 205.68012533, 64.58785513, 175.12880278, 169.95993301, 128.92035866]) Note: because I used [0:5] after predictions, it only showed the first five predicted values. Removing the [0:5] would have made it print all of the predicted values that our model created. Let’s plot the model: ## The line / model plt.scatter(y_test, predictions) plt.xlabel(“True Values”) plt.ylabel(“Predictions”) And print the accuracy score: print “Score:”, model.score(X_test, y_test) Score: 0.485829586737 There you go! Here is a summary of what I did: I’ve loaded in the data, split it into a training and testing sets, fitted a regression model to the training data, made predictions based on this data and tested the predictions on the test data. Seems good, right? But train/test split does have its dangers — what if the split we make isn’t random? 
What if one subset of our data has only people from a certain state, employees with a certain income level but not other income levels, only women or only people at a certain age? (imagine a file ordered by one of these). This will result in overfitting, even though we're trying to avoid it! This is where cross validation comes in. Cross Validation In the previous paragraph, I mentioned the caveats of the train/test split method. In order to avoid this, we can perform something called cross validation. It's very similar to train/test split, but it's applied to more subsets. Meaning, we split our data into k subsets and train on k-1 of those subsets, holding out the remaining subset for testing. We repeat this so that every subset gets a turn as the test set. Visual Representation of Train/Test Split and Cross Validation. H/t to my DSI instructor, Joseph Nelson! There are a bunch of cross validation methods; I'll go over two of them: the first is K-Folds Cross Validation and the second is Leave One Out Cross Validation (LOOCV). K-Folds Cross Validation In K-Folds Cross Validation we split our data into k different subsets (or folds). We use k-1 subsets to train our model and leave the last subset (or the last fold) as test data. We then average the model's performance across each of the folds and then finalize our model. After that we test it against the test set. Visual representation of K-Folds. Again, H/t to Joseph Nelson! Here is a very simple example from the Sklearn documentation for K-Folds: from sklearn.model_selection import KFold # import KFold import numpy as np # needed for np.array below X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]]) # create an array y = np.array([1, 2, 3, 4]) # Create another array kf = KFold(n_splits=2) # Define the split - into 2 folds kf.get_n_splits(X) # returns the number of splitting iterations in the cross-validator print(kf) KFold(n_splits=2, random_state=None, shuffle=False) And let's see the result — the folds: for train_index, test_index in kf.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] TRAIN: [2 3] TEST: [0 1] TRAIN: [0 1] TEST: [2 3] As you can see, the function splits the original data into different subsets of the data. Again, a very simple example, but I think it explains the concept pretty well. Leave One Out Cross Validation (LOOCV) This is another method for cross validation, Leave One Out Cross Validation (by the way, these methods are not the only two; there are a bunch of other methods for cross validation. Check them out on the Sklearn website). In this type of cross validation, the number of folds (subsets) equals the number of observations we have in the dataset. We train the model on all observations except one, test it on the single held-out observation, repeat this for every observation, and average the results. Because we would get a big number of training sets (equal to the number of samples), this method is very computationally expensive and should be used on small datasets. If the dataset is big, it would most likely be better to use a different method, like k-fold.
Let's check out another example from Sklearn: from sklearn.model_selection import LeaveOneOut X = np.array([[1, 2], [3, 4]]) y = np.array([1, 2]) loo = LeaveOneOut() loo.get_n_splits(X) for train_index, test_index in loo.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] print(X_train, X_test, y_train, y_test) And this is the output: TRAIN: [1] TEST: [0] [[3 4]] [[1 2]] [2] [1] TRAIN: [0] TEST: [1] [[1 2]] [[3 4]] [1] [2] Again, a simple example, but I really do think it helps in understanding the basic concept of this method. So, what method should we use? How many folds? Well, the more folds we have, the more we reduce the error due to bias but increase the error due to variance; the computational price would go up too, obviously — the more folds you have, the longer it would take to compute and the more memory you would need. With a lower number of folds, we're reducing the error due to variance, but the error due to bias would be bigger. It would also be computationally cheaper. Therefore, in big datasets, k=3 is usually advised. In smaller datasets, as I've mentioned before, it's best to use LOOCV.
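To tie this back to the diabetes regression above, here is a minimal sketch (not from the original post) showing how scikit-learn's cross_val_score can run K-Folds cross validation for us and average the per-fold scores; the choice of 5 folds and the shuffle/random_state settings are illustrative assumptions of mine, not recommendations from the article.

import numpy as np
from sklearn import datasets, linear_model
from sklearn.model_selection import KFold, cross_val_score

# Load the same diabetes data used in the train/test split example
diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target

# 5 shuffled folds (illustrative choices)
kf = KFold(n_splits=5, shuffle=True, random_state=42)

lm = linear_model.LinearRegression()

# cross_val_score handles the fit/predict loop and returns one R^2 score per fold
scores = cross_val_score(lm, X, y, cv=kf)

print("Per-fold R^2:", np.round(scores, 3))
print("Mean R^2:", scores.mean())

Compared with a single train/test split, the averaged score is less sensitive to which rows happen to land in the test set, which is exactly the problem cross validation is meant to address.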
https://towardsdatascience.com/train-test-split-and-cross-validation-in-python-80b61beca4b6
['Adi Bronshtein']
2020-03-24 15:43:53.509000+00:00
['Data Analysis', 'Statistics', 'Python', 'Data Science', 'Machine Learning']
Navigating the Sea of Explainability
Setting the right course and steering responsibly This article is coauthored by Joy Rimchala and Shir Meir Lador. Setting the right course Rapid adoption of complex machine learning (ML) models in recent years has brought with it a new challenge for today's companies: how to interpret, understand, and explain the reasoning behind these complex models' predictions. Treating complex ML systems as trustworthy black boxes without sanity checking has led to some disastrous outcomes, as evidenced by recent disclosures of gender and racial biases in GenderShades¹. As ML-assisted predictions integrate more deeply into high-stakes decision-making, such as medical diagnoses, recidivism risk prediction, loan approval processes, etc., knowing the root causes of an ML prediction becomes crucial. If we know that certain model predictions reflect bias and are not aligned with our best knowledge and societal values (such as an equal opportunity policy or outcome equity), we can detect these undesirable defects, prevent the deployment of such ML systems, and correct the models. The GenderShades study reported noticeable differences in the gender classification accuracy of widely used face detection algorithms, including Microsoft Cognitive Services Face API, Face++, and IBM Watson Visual Recognition. There is a large gap in gender misclassification rates among different subgroups, with the largest gap of 34.4% observed between lighter-skinned male faces and darker-skinned female faces. Our mission at Intuit is powering prosperity around the world. To help small businesses and individuals increase their odds for success, in the last few years Intuit has been infusing AI and ML across its platform and solutions. As data scientists at Intuit, we have a unique privilege and power to develop ML models that make decisions that affect people's lives. With that power, we also bear the responsibility to make sure our models are held to the highest standards and are not discriminating in any manner. "Integrity without compromise" is one of Intuit's core values. As we grow as an AI/ML-driven organization, machine intelligibility has become a priority for Intuit's AI/ML products. This year, Intuit hosted an Explainable AI workshop (XAI 2019) at KDD 2019. We gleaned many valuable learnings from this workshop that we will start to incorporate in our product and service strategies. Understanding the current state of interpretability Interpretability is an active area of research, and the description provided below is meant to be a high-level summary of the current state of the field. Interpretability methods fall into two major categories based on whether the model being interpreted is: (a) black box (unintelligible) or (b) glass box (intelligible). In the following section, we will explain and compare each of the approaches. We will also describe how we can use intelligible models to better understand our data. Then we will review a method to detect high-performing intelligible models for any use case (Rashomon curves). Finally, we will compare local and global explanations, and feature-based vs. concept-based explanations. Black box: Black box interpretability methods attempt to explain already-existing ML models without taking into account the inner workings of the model (i.e., the learned decision functions). This class of interpretability methods is model-agnostic and can be integrated easily with a wide variety of ML models, from decision tree-based models to complex neural networks² ³ ⁴ ⁵.
Applying black box interpretability does not require any changes in the way ML practitioners create and train the models. For this reason, black box interpretability methods enjoy wider adoption among ML practitioners. Black box interpretability methods are also referred to as "post-hoc" interpretability, as they can be used to interrogate ML models after training and deployment without any knowledge of the training procedures. Examples of black box interpretability methods include LIME², Shapley⁶, Integrated Gradients⁷, DeepLIFT⁸, etc. Post-hoc model interpretations are a proxy for explanations. The explanations derived in this manner are not necessarily guaranteed to be human-friendly, useful, or actionable. Glass box: A glass box approach with intelligible ML models requires that models be "interpretable" upfront (aka "pre-hoc")⁹ ¹⁰. The advantage of this approach is the ease with which ML practitioners can tease out model explanations, detect data and/or label flaws, and in some cases, edit the model's decisions if they do not align with practitioner values or domain knowledge. Rich Caruana, Senior Principal Researcher at Microsoft Research and one of KDD XAI 2019's keynote speakers, demonstrated how his team built a highly accurate, intelligible, and editable ML model based on generalized additive models (GAMs)¹¹ and applied it to mortality prediction in pneumonia cases¹². This version, also named GA2M (or "GAM on steroids"), is optimized by gradient boosting instead of the cubic splines in the original version, and achieves results comparable to modern ML models (such as random forests or gradient-boosted trees). Using Intelligible models Caruana shared how his team uses intelligible models to better understand and correct their data. For example, the intelligible model learned the rule that patients with pneumonia who have a history of asthma have a lower risk of dying from pneumonia than the general population. This rule is counterintuitive, but reflects a true pattern in the training data: patients with a history of asthma who presented with pneumonia usually were admitted not only to the hospital but directly to the Intensive Care Unit. The aggressive care received by asthmatic pneumonia patients was so effective that it lowered their risk of dying from pneumonia in comparison with the general population. Because the prognosis for these patients is better than average, models trained on the data incorrectly learn that asthma lowers mortality risk, when in fact asthmatics have much higher risk (if not aggressively treated). If simpler, intelligible models can learn a counterintuitive association — such as asthma implying lower pneumonia risk — more complex neural network-based algorithms can probably do the same. Even if we can remove the asthma bias from the data, what other incorrect things were learned? This is the classic problem of statistical confounding: when a variable (in our case, treatment intensity) is associated with both the dependent and independent variable, causing a spurious association. The treatment intensity is influenced by the variable of asthma, and in turn reduces the risk of mortality. This observation illustrates the importance of model intelligibility in high-stakes decision-making. Models that capture patterns which are real in the data but spurious in meaning — such as the false association in the pneumonia example, or societal biases — could generate predictions that lead to undesirable consequential outcomes such as mistreating patients.
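To make the glass box idea concrete, here is a minimal sketch using the open-source InterpretML library's Explainable Boosting Machine, a GA2M-style model of the kind described above. This is not the setup from the pneumonia study: the dataset and parameters are illustrative assumptions of mine, and the exact API may vary between InterpretML versions.

# Assumes `pip install interpret scikit-learn`; a glass box GA2M-style model sketch
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative dataset, not the pneumonia data discussed in the article
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

ebm = ExplainableBoostingClassifier()  # boosted, additive, and still intelligible
ebm.fit(X_train, y_train)

print("Held-out accuracy:", ebm.score(X_test, y_test))

# Global explanation: one learned shape function per feature, inspectable by a human
show(ebm.explain_global(name="EBM global explanation"))

Because each feature's contribution can be plotted and inspected directly, a defect like the asthma rule described above shows up as a visible bump in the corresponding curve, which is exactly the kind of review the glass box approach is meant to enable.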
Current ML models are trained to minimize prediction errors on the training data and not on aligning with any human intuition and concepts, so there’s no guarantee that models will align the human’s values. More often than not, ML models trained on human-curated datasets will reflect the defect or bias in the data¹³. An intelligible model allows these defects to surface during model validation. Currently, only a small subset of algorithms — namely decision tree-based models and generalized additive models (GAMs) — are intelligible. Decision tree-based models and GAMs are not used in ML applications (such as computer vision, natural language processing, and time series predictions) because the best possible versions of these models currently do not perform at the state-of-the-art-level of complex deep neural network-based models. Detecting high-performing intelligible models for any use case When we’re able to choose between equally-performing intelligible and black box models, the best practice is to choose the intelligible one¹⁴. How can we know whether a high-performing intelligible model exists for a particular application? Cynthia Rudin, Professor of Computer Science at Duke University and the Institute of Mathematical Statistics (IMS) Fellow 2019 (also a KDD XAI 2019 panelist) proposed a diagnostic tool, called the “Rashomon Curve,”¹⁵ that helps ML practitioners answer this question. Let’s first define a few terms. “Rashomon effect” denotes the situation in which there exist many different and approximately-equally accurate descriptions to explain a phenomenon. The term “Rashomon effect” is derived from a popular Japanese film (Rashomon) known for a plot that involves various characters providing self-serving descriptions of the same incident. A “Rashomon set,” defined over the hypothesis space of all possible models in a model class, is a subset of ML models that have training performance close to the best model in the class. The “Rashomon ratio” is the cardinality of the Rashomon set divided by the cardinality of all possible models (with varying levels of accuracy). Thus, “Rashomon ratio” is defined uniquely for each ML task/dataset pair. When the Rashomon ratio is large, there exist several equally highly accurate ML models to solve that ML task. Some of these highly accurate models within the Rashomon set might have desirable properties such as intelligibility and it may be worthwhile to find such models. Thus, Rashomon ratio serves as an indicator of the simplicity of the ML problem. In her KDD 2019 keynote talk, Rudin introduced the “Rashomon curve”¹⁵ (see figure below), a diagnostic curve connecting the log Rashomon ratio of hierarchy of model classes with increasing complexity as a function of the empirical risk (the error rate bound on the model classes). Left figure : Empirical Rashomon sets are defined for each ML task/dataset pair and class of model (hypothesis space). Rashomon curves connect the Rashomon ratio of increasingly complex model classes as a function of empirical risk (the observed error of a particular model class). The horizontal part of the Γ-shape corresponds to a decrease in the empirical risk (increase in model accuracy) as we move through the hierarchy of hypothesis spaces (H1 on the top right to H7 on the bottom left). The length of arrow delta is the generalization error. If the ML problem is too complex for a model class considered, only the horizontal part of the Rashomon curve is observed. 
This is an indication that the model class considered is not complex enough to learn the training data well. On the other hand, if the ML model class considered is too complex for the training data, only the vertical part of the Rashomon curve is observed. Right figure: The state of the art for various public ML problems/datasets placed on their own Rashomon curves, indicating whether the models are likely to be too complex or too simple for the given ML task/dataset pairs. When solving an ML problem, one might consider a hierarchy of model classes, starting from simpler and moving to more complex model (hypothesis) classes. In the beginning, the model classes remain too simple for the ML task and the model's error rate continues to decrease with increasing complexity. This observation corresponds to moving along the horizontal part of the Rashomon curve from right to left. In this case, the Rashomon volume grows at about the same rate as the volume of the set of all possible models (with varying accuracy). In the regime where the ML model classes start to become too complex for the ML task, the model error rates remain roughly the same. This corresponds to traversing the vertical part of the Rashomon curve from the top toward the bottom. In this regime, the set of all possible models outgrows the Rashomon set and the Rashomon ratio drops sharply. The turning point in the Rashomon curve (the "Rashomon elbow") is a sweet spot where lower complexity (higher log Rashomon ratio) and higher accuracy (low empirical risk) meet. Thus, among the hierarchy of model classes, those that fall in the vicinity of the Rashomon elbow are likely to have the right level of complexity for achieving the best balance of high accuracy with desired properties such as generalizability and interpretability. Local vs. Global Explanation Interpretability methods can provide two types of explanations: local and global¹⁶. Local explanations describe how a model classifies a single data instance, and answer questions such as, "Which data element(s) are most responsible for the classification output?" In image classification, this is equivalent to identifying which pixels are responsible for a "cat" image class prediction, and by how much. Local explanations are crucial for investigating ML decisions around individual data points. LIME [local interpretable model-agnostic explanations, taken from [2]]. The original model's decision function is represented by the blue/pink background, and is clearly nonlinear. The bright red cross is the instance being explained (let's call it X). Perturbed instances are sampled around X and are weighted according to their proximity to X (weight here is represented by size). Original model predictions are calculated on these perturbed instances. These are used to train a linear model (dashed line) that approximates the model well in the vicinity of X. Note that the explanation in this case is not faithful globally, but it is faithful locally around X. This figure illustrates a local explanation of Google's Inception neural network on an arbitrary image: the parts of the image that are most positive toward a certain class are kept as the explanation. In this case, the classifier predicts Electric Guitar, even though the image contains an acoustic guitar. The explanation reveals why it would confuse the two: the fretboard is very similar.
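To make the local explanation idea concrete, here is a minimal sketch (not from the original article) using the open-source lime package on a tabular classifier. The dataset, the black box model, and the instance being explained are illustrative assumptions, and the exact API may differ slightly between lime versions.

# Assumes `pip install lime scikit-learn`
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# The "black box" whose individual predictions we want to explain
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain one instance: perturb it, query the model, and fit a local linear surrogate
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top features pushing this single prediction up or down

The weights returned by as_list() play the role of the dashed line in the figure above: a simple surrogate that is only trusted in the neighborhood of the instance being explained, not globally.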
A global explanation, on the other hand, attempts to provide a holistic summarization of how a model generates predictions for an entire class of objects or data sets, rather than focusing on a single prediction and data point. The plots above demonstrate global explanations for a model trained on the census income dataset to predict the income level for adults, based on demographic features. The left figure (calculated using InterpertML) shows the score predicted by a GA2M model as a function of age. This directly illustrates the age contribution to the model. The right figure shows the mean absolute score of each feature to the model (aka feature importance). The two most popular techniques for global explanations are feature importance and partial dependence plots. Feature importance provides a score that indicates how useful or valuable each feature was in the construction of the model. In models based on decision trees (like random forests or gradient boosting), the more a feature is used to make key decisions within the decision trees, the higher its relative importance. Partial dependence plots (PDP) show the dependence between the target variable and a set of “target” features, marginalizing over the values of all other features (the “complement” features). Intuitively, we can interpret the partial dependence as the expected target response as a function of the “target” features. A partial dependence plot helps us understand how a specific feature value affects predictions, which can be useful for model and data debugging as demonstrated in¹². Feature-based vs. concept-based explanation Early interpretability methods relied on using input features to construct the explanation. This approach is known as feature-based explanation. A key difficulty with feature-based explanations is that most ML models operate on features, such as pixel values, that do not correspond to high-level concepts that humans can easily understand. In her KDD XAI 2019 keynote, Been Kim, Senior Research Scientist at Google Brain, pointed out that feature-based explanations applied to state-of-the-art complex black-box models (such as InceptionV3 or GoogleLeNet) can yield non-sensible explanations¹⁷ ¹⁸. More importantly, feature-based explanations for ML problems where the input features have high dimensionality does not necessarily lead to human-friendly explanations. Testing with concept activation vectors (TCAV) quantitatively measures how a “concept” (defined by a collection of inputs) is learned by a trained model by quantifying sensitivity of a model prediction class along the direction of concept activation vectors (CAVs). CAVs are defined per layer and per concept by training a linear classifier [such as support vector machine (SVM)] over the activation states of a layer of a pre-trained network using a collection of “concept-defining” inputs vs. random inputs (CAV testing inputs). These CAV testing inputs can be constructed post-hoc and do not need to be part of the training/evaluation and target task-specific datasets. TCAV can be used with both black box complex models as well as interpretable models. TCAV is currently publicly available at : https://github.com/tensorflow/tcav Concept-based explainability constructs the explanation based on human-defined concepts rather than a representation of the inputs based on features and internal model (activation) states. 
To achieve this, the input feature and model internal state and human-defined concept are represented in two vector spaces: (Em) and (Eh), respectively. The functional mapping between these two vector spaces, if it exists, provides a way of extracting human-defined concepts from input features and ML model internal states. In her keynote, Kim presented testing with concept activation vector (TCAV), a procedure to quantitatively translate between the human-defined concept space (Eh) and the model internal state (Em)¹⁹. TCAV requires two main ingredients: (1) concept-containing inputs and negative samples (random inputs), and (2) pretrained ML models on which the concepts are tested. To test how well a trained ML model captured a particular concept, the concept-containing and random inputs are inferenced on subcomponents (layers) of a trained ML model. Then, a linear classifier such as a support vector machine is trained to distinguish the activation of the network due to concept-containing vs. random inputs. A result of this training are concept activation vectors (CAVs). Once CAVs are defined, the directional derivative of the class probability along CAVs can be computed for each instance that belong to a class. Finally, the “concept importance” for a class is computed as a fraction of the instances in the class that get positively activated by the concept containing inputs vs. random inputs. This approach allows humans to ask whether a model “learns” a particular expressible concept, and how well. For example, a human can ask how well a computer vision model “X” learns to associate the concept of “white coat” or “stethoscope” in doctor images using TCAV. To do this, human testers can first assemble a collection of images containing white coats and random images, then apply the pretrained “X” on this collection of images to get the predictions, and compute the TCAV scores for the “white coat” concept. This TCAV score quantifies how important the concept of “white coat” was to a prediction of class “doctor” in an image classification task. TCAV is an example-driven approach, so it still requires careful selection of the concept data instances as inputs. TCAV also relies on humans to generate concepts to test, and on having the concept be expressible in the concept inputs. Example of a realistic use of TCAV to quantify how much the concepts of “white coat,” “stethoscope,” and “male” are important for an image classification model for a positive prediction of the “doctor” class. Concept-based interpretability methods like TCAV are a step toward extracting “human-friendly” ML explanations. It is up to today’s ML practitioners to make responsible and correct judgment calls on whether model predictions are sensible, and whether they align with our positive values. It is up to us to correct the defects in trained black-box ML models, and TCAV can help illuminate where the flaws are. What can we do better? As a community of ML practitioners, it’s our responsibility to clearly define what we want Explainable AI to become, and to establish guidelines for generating explanations that take into consideration what piece of information to use, how (in what manner) to construct the explainability in a way that is beneficial (not harmful or abusive), and when (in which situation/context and to whom) to deliver it. While today’s Explainable AI methods help to pinpoint the defects in ML systems, there’s much work ahead of us. 
For now, here are some tips for bringing explainability to the forefront of today’s practice: Choose an intelligible model whenever possible. Make sure the model and data align with your domain knowledge and societal values, using intelligible models and local explanations. Measure machine learning model performance to be sure decisions are aligned with societal values (for example, when modeling data includes protected groups, optimize for consistency and equal opportunity, as well as accuracy)²⁰. Build causality into model explanations²¹. Measure explanation usefulness and actionability²². Closing Thoughts In the span of a few short years, explainable AI as a field has come a very long way. As co-organizers of this workshop, we were privileged to witness tremendous enthusiasm for explainability in ML. For all of us, explainability can be our “true North.” How we can use ML responsibly by ensuring that “our values are aligned and our knowledge is reflected” for the benefit of humanity. This goes beyond achieving end user’ trust or achieving fairness in a narrowly-defined sense. We want to use explainability in conjunction with societal values for the benefit of everyone whose life and livelihood comes into contact with, or is affected by, ML. Acknowledgment We would like to thank the community of volunteers who helped review the XAI KDD workshop papers in a timely manner. We are also grateful to our workshop speakers and panelists for sharing their knowledge, wisdom and superb content. References: [1] Joy Buolamwini, Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency 2018 PMLR 81:77–91. [2] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Model-agnostic interpretability of machine learning. arXiv preprint arXiv:1606.05386, 2016. [3] Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) 2017, pages 3429–3437. [4] Piotr Dabkowski and Yarin Gal. Real time image saliency for black box classifiers. In Advances in Neural Information Processing Systems (NIPS) 2017, pages 6967–6976. [5] Chun-Hao Chang, Elliot Creager, Anna Goldenberg, and David Duvenaud. Explaining image classifiers by counterfactual generation. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2019. [6] Scoot Lundberg and Su-In Lee. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems, 2017. [7] Mukund Sundararajan, Ankur Taly, Qiqi Yan. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML) 2017 Vol. 70, Pages 3319–3328. [8] Avanti Shrikumar, Peyton Greenside, Anshul Kundaje. Learning Important Features Through Propagating Activation Differences. In ICML 2017 and PMLR Vol 70 pages 3145–3153. [9] Been Kim, Cynthia Rudin, and Julie A Shah. The Bayesian Case Model: A generative approach for case-based reasoning and prototype classification. In NIPS 2014, pages 1952–1960. [10] B. Ustun and C. Rudin. Methods and models for interpretable linear classification. arXiv:1405.4047 2014. [11]Trevor Hastie Robert Tibshirani. Generalized Additive Models: Some Applications. In Journal of the American Statistical Association 1987, 82:398, 371–386. [12] Rich Caruana, Paul Koch, Yin Lou, Marc Sturm, Johannes Gehrke, Noemie Elhadad. 
Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining Pages 1721–1730. [13] Yair Horesh, Noa Haas, Elhanan Mishraky, Yehezkel S. Resheff, Shir Meir Lador Paired-Consistency: An Example-Based Model-Agnostic Approach to Fairness Regularization in Machine Learning. In arXiv:1908.02641, 2019. [14] Cynthia Rudin. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. In Nature Machine Intelligence (2019) Vol.1, pages 206–215. [15] Semenova, Lesia, and Cynthia Rudin. A study in Rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning. In arXiv preprint arXiv:1908.01755, 2019. [16] Kim, Been and Doshi-Velez, Finale. Towards a rigorous science of interpretable machine learning. In arXiv:1702.08608, 2017. [17] Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim. Sanity Checks for Saliency Maps. In NeurIPS 2018. [18] Mengjiao Yang and Been Kim BIM: Towards Quantitative Evaluation of Interpretability Methods with Ground Truth. In arXiv:1907.09701, 2019. [19] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In ICML 2018, pages 2673–2682. [20] David Alvarez-Melis, Tommi S. Jaakkola. Towards Robust Interpretability with Self-Explaining Neural Networks. In arXiv:1806.07538, 2018. [21] Yash Goyal, Uri Shalit, Been Kim. Explaining Classifiers with Causal Concept Effect (CaCE). In arXiv:1907.07165, 2019. [22] Berk Ustun, Alexander Spangher, Yang Liu. Actionable Recourse in Linear Classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*) 2019, Pages 10–19.
https://medium.com/intuit-engineering/navigating-the-sea-of-explainability-f6cc4631f473
['Joy Rimchala']
2019-10-16 17:22:09.525000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Ai And Data Science', 'Data']
Anti-Fascism is a Public Service. Proud Boys Terrorized the Nation’s Capital
I was in the streets that night, trapped outside the police line surrounding BLMP. For hours, I was a refugee lurking in the streets of my hometown. Several of my personal friends were brutalized and beaten by Proud Boys. If my partner and I had not brought a non-black change of clothes, if we made any missteps, told the wrong lie, or turned down the wrong alley, we would have been assaulted as well. It’s been a week since the MAGA invasion turned my city into a battleground. Collecting my reflections and processing that trauma has been a challenge. I am still struggling to sift through it all and to put it to paper. But the take-away, I think, is the same as it’s always been: “F*ck 12/12” (Tyrone Turner/WAMU) The fascists must be stopped. And the country must wake up to the dire conditions that have lead us to nation-wide authoritarian violence. The Proud Boys are fascists. Just like the Nazis so many of them try to emulate, they commit acts of violence against innocent people demonized by their party’s interests. They hate communists, they hate queer people, they hate Black Lives Matter, and they really hate “antifa.” Why? Because antifa stands for anti-fascist. “6 million wasn’t enough” referring to the 6 million Jews murdered in the Holocaust. This is a Proud Boy. They are Nazis. (Twitter) Because “antifa” represents the united efforts of ordinary people bold enough to resist their hateful ideas. Anti-fascism is a public service because the public is being actively threatened by bigoted nationalists lead by Donald Trump. For all of the mainstream and right-wing media’s efforts to demonize us, black bloc demonstrators like myself are still organizing to stand for justice and to confront those that set us further back. This year’s uprisings for police abolition and BIPOC liberation have morphed, temporarily, into something else. What began as a nation-wide reckoning on race has created a broad alliance of comrades who agree that being anti-racist means standing up to the president’s racist, misogynistic, and homophobic death cult. Anti-racism and anti-fascism have coalesced into the same force for justice. We still want to defund and abolish the police. We still want communities of color to be given the support and opportunities they deserve. We still want to renovate the American system to finally make space for the oppressed and disenfranchised. However, since the election, we have had to focus on fighting fucking Nazis. And we did it fairly well. Contrary to what Bill Barr, Alex Jones, and Fucker Carlson would have you believe, none of the people in that video are paid by George Soros. Or the PRC. Or Putin. Or Joe Biden. DC’s activist community is an organic network of organizations, co-ops, collectives, and dozens of unaffiliated individuals that share a belief in radical love and in caring for local communities. We organize regular mutual aid events — providing meals, clothes, health services, and communal support to the unhoused and underserved in DC. We raise funds and awareness for Black lives in need. We write press releases and public statements about the injustices in our city and around the nation. And we are NOT all affiliated with Black Lives Matter (TM). The majority of organizations working on the ground are grassroots and community funded. While Black Lives Matter DC often coordinates with the grassroots orgs, and though they are a big part of our family, it is wrong to say that we all share the same black and yellow banner. We all believe that Black lives matter. 
But there is a wide range of how these orgs and individuals interpret how to fight for that belief. Many want to march and demonstrate publicly to pressure political leaders to make changes. Some seek to address systemic failures horizontally, by independently crowd sourcing and distributing resources which the government has failed to provide. Others are focused on venting our collective rage at police officers and the American imperial system. And a number are intent on fighting fascists, and keeping white nationalist reactionaries at bay while we advocate for systemic change. For December 12th, a broad coalition was formed between these organizations to do just that. Anti-racist/Anti-fascist Defense Coalition (Dee Dwer/NPR) I don’t think Joe Biden’s communist insurgent force could ever look as good as these comrades. And just because you see white folks in black bloc, shouting our chants, and holding our lines, it does not mean that the movement for BIPOC liberation has been infiltrated by outside agitators. In many cases, putting white volunteers at the front line was a conscious choice made collectively by leaders of color to keep our Black comrades safe. We are all committed, foremost, to keeping the community and each other protected. Our shared belief in the strength of human dignity empowers us to love and care for one another. We strive to embody the change that the world so desperately needs— the change which you, the average American, must embrace if you claim to believe in progress.
https://medium.com/afrosapiophile/anti-fascism-is-a-public-service-proud-boys-terrorized-the-nations-capital-b6ad8c4ab4d9
[]
2020-12-22 19:01:19.127000+00:00
['Society', 'Election 2020', 'Politics', 'Fascism', 'Donald Trump']
by Martino Pietropoli
First thing in the morning: a glass of water and a cartoon by The Fluxus. Follow
https://medium.com/the-fluxus/saturday-new-york-9cd144888953
['Martino Pietropoli']
2018-08-04 00:16:01.283000+00:00
['New York', 'Art', 'Drawing', 'Saturday']
How We Reduced Lambda Functions Costs by Thousands of Dollars
We were serving +80M Lambda invocations per day across multiple AWS regions with an unpleasant surprise in the form of a significant bill. Lambda Invocations in Frankfurt Region It was so easy and cheap to build Lambda-based applications that we forgot to estimate and optimize Lambda costs earlier, during the development phase, so once we started running heavy workloads in production, the cost became significant and we were spending thousands of dollars daily 💸 To keep Lambda costs under control, understanding its behavior was critical. The Lambda pricing model is based on the following factors: Number of executions. Duration, rounded to the nearest 100ms. Memory allocated to the function. Data transfer (out to the internet, inter-region and intra-region). In order to reduce AWS Lambda costs, we monitored Lambda memory usage and execution time based on logs stored in CloudWatch. CloudWatch Reporting Log We've updated our previous centralized logging platform to extract the relevant metrics (Duration, Billed Duration and Memory Size) from the "REPORT" log entries written to CloudWatch and store them in InfluxDB: You can check out the link above for a step-by-step guide on how to set up the following workflow: Next, we created dynamic visualizations on Grafana based on the metrics available in the timeseries database, and we were able to monitor Lambda runtime usage in near real time. A graphical representation of the metrics for Lambda functions is shown below: Grafana Dashboard You can also use CloudWatch Logs Insights to issue ad-hoc queries to analyse statistics from recent invocations of your Lambda functions: CloudWatch Logs Insights We leveraged these metrics to send Slack notifications when memory allocation is either too low (risk of failure) or too high (risk of over-paying), and to identify the billed duration and memory usage of the ten most expensive Lambda functions. By performing this heuristic analysis of Lambda logs, we gained insight into the right sizing of each Lambda function deployed in our AWS account and avoided excessive over-allocation of memory, which significantly reduced Lambda costs. Memory allocation can make a big difference in your Lambda function cost. Too much allocated memory and you'll overpay. Too little and your function will be at risk of failing. Therefore, you want to keep a healthy balance when it comes to memory allocation. To gather more insights and uncover hidden costs, we had to identify the most expensive functions. That's where Lambda tags come into play. We leveraged this metadata to break down the cost per Stack (project): By reducing the invocation frequency (controlling concurrency with SQS), we reduced the cost by up to 99%, along with the CO2 emissions footprint, of our B2C app Cleanfox 🚀💰 At a deeper level, we also broke down the cost by Lambda function name using a secondary tag, the Function tag: Once the target functions were identified, we reviewed the execution flow and applied some optimisations in our code to shorten the running time and reduce the resources needed (memory and CPU). By continuously monitoring increases in spend, we ended up building scalable, secure and resilient Lambda-based solutions while maintaining maximum cost-effectiveness. Also, we are now configuring Lambda runtime parameters appropriately at the sandbox stage, and we're evaluating alternative services like Spot Instances & Batch Jobs to run heavy non-critical workloads, considering the hidden costs of Serverless.
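As an illustration of the kind of extraction described above, here is a minimal sketch (not from the original post) that parses the Duration, Billed Duration, Memory Size, and Max Memory Used fields out of a Lambda "REPORT" log line and flags obvious over-allocation; the example line and the 30% threshold are assumptions of mine, not values from the article.

import re

# A made-up REPORT line in the standard Lambda log format
REPORT_LINE = (
    "REPORT RequestId: example-request-id Duration: 102.25 ms Billed Duration: 200 ms "
    "Memory Size: 1024 MB Max Memory Used: 85 MB"
)

PATTERN = re.compile(
    r"Duration: (?P<duration>[\d.]+) ms\s+"
    r"Billed Duration: (?P<billed>[\d.]+) ms\s+"
    r"Memory Size: (?P<size>\d+) MB\s+"
    r"Max Memory Used: (?P<used>\d+) MB"
)

def parse_report(line: str) -> dict:
    """Extract runtime metrics from a Lambda REPORT log entry."""
    match = PATTERN.search(line)
    return {k: float(v) for k, v in match.groupdict().items()} if match else {}

metrics = parse_report(REPORT_LINE)
print(metrics)

# Illustrative rule of thumb: alert if less than 30% of allocated memory is ever used
if metrics and metrics["used"] / metrics["size"] < 0.3:
    print("Memory looks over-allocated for this function; consider sizing it down.")

In the workflow described above, the same parsed values would be shipped to InfluxDB and visualized in Grafana rather than printed.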
Drop your comments, feedback, or suggestions below — or connect with me directly on Twitter @mlabouardy. We're not sharing this just to make noise. We're sharing this because we're looking for people who want to help us solve some of these problems. There's only so much insight we can fit into a job advert, so we hope this has given a bit more and whetted your appetite. If you're keeping an open mind about a new role or just want a chat — get in touch or apply — we'd love to hear from you!
https://medium.com/foxintelligence-inside/how-we-reduced-lambda-functions-costs-by-thousands-of-dollars-8279b0a69931
['Mohamed Labouardy']
2019-08-06 14:36:07.294000+00:00
['Serverless', 'AWS', 'Tech', 'Monitoring', 'Cloud']
How to Juice Your Website’s Numbers (and Why It’s a Very Bad Idea)
Marketers everywhere are being asked to do more with less these days — fewer resources, greater traffic and engagement; less time, more content; the list goes on. So what are agency and in-house marketers to do when they’ve over-committed and they realize they’re not equipped to deliver? Many choose to do the right thing and own up to it. Others choose to cheat. As someone managing an agency relationship, it’s important to be able to decipher when the numbers look almost too good to be true. In this article, I’ll outline the various ways dubious marketers and agencies inflate their performance and the steps good marketers can take to ensure they’re not being duped. Down on the (Click) Farm For the uninitiated, click farms are entities where individuals get paid to interact with content for a fraction of a cent per engagement. Advanced click farms are run by small groups of individuals with hundreds of phones, computers, and accounts working in an automated fashion to interact with various marketing channels on behalf of their clients. Employing click farms is an effective way to drive traffic — not qualified traffic, mind you, but traffic nonetheless. Many unscrupulous agencies utilize click farms to drive results and show clients quick wins. When reviewing traffic, many marketers focus solely on the end statistics and rarely review where that traffic is coming from. Digging deeper into onsite data to determine where users are coming from and how they’re interacting with your website is the best way to understand whether click farms are driving traffic to your site. From there you’ll just have to figure out how to filter out the shady traffic. Tag Once, Tag Right Proper tagging is the key to proper reporting. On the flip side, improper tagging is the key to inflating your results, whether or not you’re doing it on purpose. I’ve worked with a number of clients with duplicate tracking codes on their sites (for whatever reason) and I always explain to them the consequences of having duplicate reporting scripts, especially if they are pointing to the same property. What ends up happening is the platform will double count pageviews and pages per session, and it can affect overall bounce rate. Some marketers turn a blind eye to bad tagging because it projects success (e.g., “our traffic has doubled vs. last year! We must really be great”), while others intentionally manipulate tracking scripts to generate desired outcomes. As a good marketer, you should be cognizant of ongoing performance, investigate spikes, and question results that seem too good to be true. It’s the only way to defend your data against reporting manipulation. This bounce rate is a lie. I, for One, Do Not Welcome Our Spam Bot Overloards Bots. I hate them, but not everyone shares that hatred. Some marketers use bots to “improve” their metrics, and some use them to harm their competitors. That said, bots are effective at brute forcing metrics, but not much else. They simply serve to unfairly skew data to either the benefit or detriment of the websites they target. Much like click farms, you can protect the integrity of your data by digging deeper to determine where traffic is coming from and how it is engaging with your site. Resist the Temptation As time goes on, we’ll continue to see more and more marketers (agency and in-house) adopting some of these shady, “traffic-generating” and “results-driving” practices in an attempt to prove their ROI to stakeholders. 
And just like all the other blackhat techniques, this will probably work for a while. But it won’t work long-term. The problem with these schemes is that they’re all about propping up metrics that mean nothing by themselves, which is a surefire way to doom yourself in the long run. Once you turn to the dark side and showcase your tremendous “success,” it’s hard to go back to honest, sustainable strategy and you eventually find yourself turning to blackhat strategies just to maintain what you have. The best way to reach your goals is by taking the time to create a great user experience and build long-term relationships with real customers. Jumping from shady practice to shady practice will never get you there. In a world where juicing numbers can be as easy as adding a second tag to a website, you must learn to spot and diagnose when numbers look a little too good. It’s the only way to protect yourself from data manipulators, inaccurate results, and a whole lot of facepalming. Perspective from Andy Stuckey, a Senior Digital Marketing Manager at Element Three
https://medium.com/element-three/how-to-juice-your-websites-numbers-and-why-it-s-a-very-bad-idea-44058a165d50
['Element Three']
2018-03-23 13:38:37.185000+00:00
['Marketing', 'Digital', 'Analytics', 'Digital Marketing', 'Data']
Social media trends in the Middle East and North Africa
First published by IJNet, the International Journalists’ Network, and co-authored with Payton Bruni. It’s now eight years since the Arab Spring rocked many parts of the Middle East. At the time, social media was identified as a major factor in the geo-political upheaval seen across much of the region. With hindsight, that was probably overstated. Nonetheless, social media did play a role in bringing awareness of these protests to a global audience, and it also helped to provoke discussions about the role that social media can play as a driver for change. I’ve been covering this topic since 2012, and earlier this year — with University of Oregon student Payton Bruni — I published my seventh annual report on social media in the Middle East. Here are five ways we found that social media in the Middle East differs from other markets like North America and Europe. 1. Young people are still using Facebook Jonathan Labin, managing director for Facebook Middle East. Image via Arabian Business. In contrast to the #deleteFacebook movement in the United States, as well as wider stagnation in many Western markets, not only is Facebook usage continuing to grow in the Middle East, but Arab youth are using it more than ever. Last year’s Arab Youth survey revealed that almost two thirds (63%) of Arab youth look first to Facebook and Twitter for news. More widely, nearly half of young Arabs (49%) say they get their news on Facebook daily, up from 35% last year; and 61% of Arab youth say they use Facebook more frequently than a year ago. 2. Saudi Arabia continues to see massive social media growth With a population of 32 million, Saudi Arabia (KSA) is the second most populous country in the region (behind Egypt, now home to more than 100 million). Social media use continues to grow rapidly across the Kingdom of Saudi Arabia, a trend of interest to brands, agencies and media companies alike. Data from We Are Social and Hootsuite found that social media users in KSA grew by 32%, compared to a worldwide average of 13%, from January 2017 to January 2018.
https://medium.com/damian-radcliffe/social-media-trends-in-the-middle-east-and-north-africa-2b1c3fcf73e3
['Damian Radcliffe']
2019-08-21 17:32:33.142000+00:00
['Influencers', 'Middle East', 'Facebook', 'Saudi Arabia', 'Social Media']
Master Modern JavaScript — Array includes, Array reduce, Map object and much more
Photo by Julien Pouplard on Unsplash Over the past few years, there have been many updates to the JavaScript language. And these updates are very useful if you want to improve your coding skills. So let's look at some of the things added in JavaScript which you need to be familiar with to improve your skills and get a high-paying job. Note: This is the final short preview of content from the Mastering Modern JavaScript book. There is a lot more covered in the actual book. Check out my previous post to get more preview content if you missed it. So let's get started. Array.prototype.includes ES7 added this method, which checks whether an element is present in an array and returns a boolean value of either true or false. // ES5 Code const numbers = ["one", "two", "three", "four"]; console.log(numbers.indexOf("one") > -1); // true console.log(numbers.indexOf("five") > -1); // false The same code using the Array includes method can be written as shown below: // ES7 Code const numbers = ["one", "two", "three", "four"]; console.log(numbers.includes("one")); // true console.log(numbers.includes("five")); // false So using the Array includes method makes the code shorter and easier to understand. The includes method also comes in handy when comparing against multiple values. Take a look at the below code: const day = "monday"; if(day === "monday" || day === "tuesday" || day === "wednesday") { // do something } The above code using the includes method can be simplified as shown below: const day = "monday"; if(["monday", "tuesday", "wednesday"].includes(day)) { // do something }
https://medium.com/javascript-in-plain-english/master-modern-javascript-array-includes-array-reduce-map-object-and-much-more-7d28a8d4428d
['Yogesh Chavan']
2020-12-11 10:41:56.071000+00:00
['Angular', 'JavaScript', 'React', 'Vue', 'Programming']
Polaroids From Colombia, a Decade Later
No dar papaya is an expression unique to Colombia (it makes no sense to other Spanish speakers, even in neighboring countries) that means show no vulnerabilities, don’t be an easy target, be careful. For years I had a very boring working title, “De Colombia.” Then one day “No Dar Papaya” came to me and I knew it was perfect. The photos are about Colombia, they couldn’t have been created anywhere else. So I wanted a title that was very Colombian. Image courtesy of Matt O’Brien. All rights reserved. Image courtesy of Matt O’Brien. All rights reserved. No dar papaya is not just an expression, it reflects a mentality that speaks to the historic and contemporary reality of Colombia — 51 years of war, a tough economic situation for most, and high crime rates. They say it is the eleventh commandment, and the twelfth commandment is “Papaya puesta es papaya partida,” which means if somebody leaves a papaya you better grab it. I took that expression to heart in Colombia, and I would generally move around very alert, walking differently than I normally do — chest out, tough guy mode — to project no fear and to communicate to would-be assailants “Don’t mess with me. It could go badly for you. Go find another, easier, target.” It worked very well, except for the night I got attacked by a guy with a knife. That night, in downtown Medellín, I was walking with a friend, laughing and talking with her, paying attention to her and not my surroundings, and I felt somebody grabbing my shirt violently. I turn around and this guy’s got my shirt bunched up in one hand, arm outstretched, and in the other hand, cocked back, he has a knife, ready to plunge it into my chest. There were three other guys, all about nineteen. I asked them what they wanted, they said my cell phone. “It’s yours.” And one of them reached into my pocket and got it. That guy was prepared to kill me for a phone that they could sell for twenty bucks. Image courtesy of Matt O’Brien. All rights reserved. Image courtesy of Matt O’Brien. All rights reserved. My concept for this project was always more expansive and diffuse — let’s explore Colombia with no set parameters — and Polaroid seemed to go well with that concept. No Dar Papaya has a sort of abstract and impressionistic quality to it, which I think helps to put more emphasis on the emotional content and less on the descriptive. We are surrounded by digital images. These Polaroid images offer a different experience to the viewer. The camera doesn’t lend itself to action images — there are only a few in the book — because it is hard to compose and it is slow, and with the flash, you lose that wonderful color palette, so I didn’t shoot at night. But I think that the diversity of images does a good job of conveying Colombia, not with any pretense of an objective overview, but more like snippets, glimpses into the realities and possibilities of Colombia. Image courtesy of Matt O’Brien. All rights reserved. Image courtesy of Matt O’Brien. All rights reserved. I’ve been speaking Spanish all of my adult life, and it was key to the work in Colombia, not only teaching, but also the photography itself, because you are interacting with people, creating rapport, and you need to get along and move around in the country. Without Spanish, you couldn’t come to understand the culture so well and make friends, and the work would reflect that.
https://medium.com/pixel-magazine/polaroids-from-colombia-a-decade-later-c9e29ad4dc84
['Pixel Magazine']
2017-12-12 23:06:46.358000+00:00
['Travel', 'Books', 'Art', 'Photography']
Are Movie Theaters Going the Way of the Abacus?
MARKETING | MOVIE THEATERS Are Movie Theaters Going the Way of the Abacus? Netflix teed them up for the kill. Covid is swinging the bat. Photo by Denise Jans on Unsplash I hate paying fifteen bucks for a tub of popcorn and a drink at the movies. “But you get free refills.” If I were to eat the entire (admittedly delicious) wheelbarrow-sized tub of buttered popcorn and go back for a refill, I might as well live directly in a pigsty. Also, I end up drinking the accompanying pony-keg-sized soft drink during the previews to the previews. Which perfectly times my first restroom run twenty minutes into the feature film. I have calculated that each time I have to pee during a movie at the theater it costs me around thirty cents worth of viewing time. Water Smuggling and Ticket Prices To avoid the exorbitant drink prices, I have been known to smuggle a water bottle in my pants and make do with that. Never you mind where I stash it in my pants. And, of course, none of the above even takes into account the ticket prices. The average single-ticket price before Covid-19 was $9.16. That’s right in line with an entire month of Netflix’s basic plan, which offers thousands of movies and television shows from the comfort of your own couch, which hopefully doesn’t have spilled popcorn and smashed Raisinettes on the floor in front of it. That calculation doesn’t count what the typical household pays for internet connectivity, but that is typically a sunk cost, anyway. Now, multiply all that by however many people you go to the theater with. You need to borrow against your 401K just to visit the theater and eat snacks. Another complication is that the big streaming platforms are creating ever-better original content. And lots of it. All in all, that’s an uphill fight for theaters. The Hammer Before the Final Fall Enter Covid-19. I drive past the main theater in our town every few days and, frankly, it’s a pitiful sight most of the week. The lights are on, but no one is home. Occasionally, tractor-trailers use the parking lot as an overnight place to stay. It looks like a picture of a movie theater where all signs of normal life were photoshopped out. I’m waiting to see tumbleweeds blowing through the parking lot. With AMC being in danger of running out of cash in early 2021 and Regal shutting down all 536 U.S. theaters this past October, theater chains are under serious financial pressure. Adding more fuel to the dumpster fire, movie studios are experimenting with streamed movie openings (Hello, “Mulan” on Disney+). How Disney rolled out Mulan is a story for another time, but the fact is that movies have successfully opened direct-to-streaming. Or successful enough, anyway. Universal’s “Troll World Tour” bypassed closed theaters in April and earned $100 million in the first three weeks. If Covid continues to be the bane of human existence, movie studios will open more and more big-budget movies to non-theater audiences. Movies are Loss Leaders But given that theater owners think of movies as a loss leader, the days of theaters have to be all but numbered. It’s an expensive way to basically sell popcorn, candy, and drinks. Even though the chalk outline hasn’t been drawn around the body yet, theaters are dying in their present form. Showing movies to essentially sell snacks at prices that piss most people off is no longer a sustainable long-term business concept when virtually every movie ever made can be streamed right to a consumer’s television. 
Theaters and Culture The big “but” to the bad news is that the movie theater, when it’s open, has an enormous screen and an awesome sound system. Great for the Avatar- and Star Wars-type films. Regardless, given all of the challenges I mentioned above, I really have to want to see a movie before I will show my face (and my wallet) in the theater. But when I go, I generally enjoy the experience. I’m not a cinephile personally. Honestly, that sounds like an extremely naughty word. But movies are a deep part of our culture. And going to a movie theater is a big part of our culture as well. Theater operators, just a quick appeal to you. Make the prices more affordable and make it up on the volume of new movie-goers you would attract. For cinephiles (there’s that word again), AMC Stubs and Regal’s Unlimited subscription seem like a step in the right direction. I truly don’t want to see this historic section of culture wiped away. It will never have the power it once held. That is gone. But I would like to visit a comfortable, clean theater every once in a while, and really absorb a good movie with all my senses. I’m hoping that movie theaters don’t go the way of the dodo bird, the abacus, and Blockbuster Video, but I’m convinced they will. If Netflix and other streaming services kill them in a competitive marketplace, so be it. But please don’t let Covid do it. I’ll contribute my part and go see Wonder Woman 1984 and Black Widow and other theater-worthy movies in my local cinema. I might even buy some popcorn to go with my smuggled water. But movie theaters are treading water in choppy seas. Sharks and hurricanes are coming fast. And popcorn sales won’t save them.
https://medium.com/illumination/are-movie-theaters-going-the-way-of-the-abacus-f168994edb80
['Walker Sweet']
2020-12-14 21:56:00.126000+00:00
['Movies', 'Business', 'Netflix', 'Marketing', 'Disney']
Dream WiFi
Dream WiFi What could WiFi look like in 10 years? Photo by Fabian Horst / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0) Maybe it wouldn’t look like anything we’d see now. Instead, WiFi might actually be a repeater for 5G or other technologies that are on the market. In such a scenario, ISPs might position themselves more as an in-home 5G repeater service than as companies hauling cable. Perhaps a fibre-to-5G connection could allow for better streaming in the home. Devices themselves would be 5G and connect directly to the Internet with no setup. Associating a device with you would start from a notification on your phone. After that, you could associate or disassociate devices through an app or portal. Locating devices would be accurate down to the centimetre. The connectivity protocol would allow for a “Find My Phone” type of service that could make any device ring. Because setup would be so easy, IoT devices would finally proliferate. Lightbulbs and outlets could be controlled and send power consumption information. Faucets and plumbing could truly become smart without scaring a plumber. The dream of Internet-connected devices could finally be realized.
https://medium.com/predict/dream-wifi-f27b80536f67
['Leor Grebler']
2020-08-10 00:54:51.688000+00:00
['5g', 'Wifi', 'IoT', 'Ideas', 'Future']
Anger is a Mystical Warning
Something is Awry Family Archives I’m with a married couple. They sit on the plaid couch in my office. I am their pastor. “Would two weeks from Sunday work?” The husband and wife are planning a party to celebrate their 40th anniversary. They want it at the church. He says, “We’d like to make it during fellowship hour after worship. Most of our friends are from church anyway.” “And if they aren’t, then you will have extra people in worship. How about that?” She was born in Madison, Wisconsin. They were married in Milwaukee. His job at Wright-Patterson Air Force Base brought them to Xenia, Ohio, a thirty-minute drive to work. “I used to listen to cassettes in my car. The drive went faster,” he said. “My new car doesn’t even have a CD player.” I say, “Tell me a highlight or two of your marriage.” “Of course our two sons,” she says. “We couldn’t be prouder of them.” He says, “You know, in all these years we have never argued. Never. Not once.” For a moment, the three of us sit still, me in the wood chair (did it come from a yard sale?) and them together on the couch. I think I noticed, or imagined, a slight twitch in her face. She crossed her arms. Five more minutes of conversation and the party is planned. There will be cake and balloons. Their friend Nancy will make bean soup. (The soup will simmer all Saturday and during worship too). The husband asks, “Will you offer a prayer? I suppose before the soup is served.”
https://medium.com/the-neurons-of-heaven/anger-is-a-mystical-warning-db96fd80d79c
['Tim Shapiro']
2020-12-20 18:11:06.084000+00:00
['Neuroscience', 'Religion', 'Family', 'God', 'Anger']
Running Your Business By The Numbers: What Works For Tara McMullin
When I signed off on my taxes last month, it was the first time in 10 years that I didn’t owe any money to the IRS. In fact, I got a refund. Now, I’d love to tell you that’s because I was much more diligent with my financial planning. And, that is partially true. But the main reason I’m getting a refund is that I personally made a lot less money last year. Not gonna lie: making less money was a big hit to my ego. Worse, I realized how much my personal identity as a provider, a businesswoman, and a leader was tied up in the dollar dollar bills. Let me clarify: I don’t define myself by how much money I make. I don’t think I’m worthless if I’m not rich… What happened is that I had been using money as validation. I equated my ability to do my job with my ability to continue to grow the revenue my company generates. So it wasn’t so much the money itself — but continuing to push the needle on that money that felt tied to my value as an entrepreneur. Taking a deliberate step back to pivot, as well as develop a new product and marketing strategy, as I have over the last 2 years, just didn’t allow me to grow at the same rate. But, instead of seeing that objectively, I responded emotionally. I’ve recently learned something fairly obvious but nevertheless profound about myself: I define myself by my accomplishments. Not just because my accomplishments tell others something about who I am but because I worry, deep down, that I don’t have much to offer. The more I accomplish, the more value I can believe I have. Accomplishing that year-after-year revenue growth was a sign that I had created something valuable… that I was valuable. In that way, money has been an easy metric for me to use to measure my worth and to calculate the exact value I’m creating in the world. That means that when my paycheck took a hit, it felt like my credibility took a hit. Of course, revenue is just one very small way to measure success or value. Thankfully, I can use it to pay my mortgage but otherwise, it’s about as useful as a Facebook like or an Instagram follow when it comes to measuring my personal value. While I’m personally working on not defining my identity or credibility solely by what I’ve accomplished, it has been helpful for me to look at what we’ve accomplished as a company outside of my self-imposed numbers game. I’m choosing to take pride in the process and enjoy the journey of refining my approach. Today, my company produces this exceptional podcast that gives you behind-the-scenes access to how businesses actually run (no gurus, hype, or magic formulas). My company hosts an exceptional network of small business owners having candid conversations about what’s working and not working in their businesses. We’ve dialed in operations, honed our approach, and nurtured a community culture of constructive optimism. My company facilitates small group masterminds that bring business owners together around a common goal. I’ve personally had the chance to level up my facilitation skills and learn how much I love this role. Today, my company operates better than it ever has. Our customers are happier than they’ve ever been. Our products are being used by more people than they ever have. We don’t have to have hockey stick revenue growth, a shiny medal, or an award for best small business owner community to know what we create is insanely valuable. 
And the real upside is that, because we’ve taken the time to get systems right and found the energy to do things exceptionally well, we’re poised to generate more revenue than ever before. The company that we’ve built is capable of 10x-ing our best ever year of revenue. Of course, coming to this understanding was incredibly difficult. After we pivoted the business and revenue declined, I wanted to hide from the numbers. All those numbers told me at first was how much I was failing at a mission I believed in wholeheartedly. All I could see was the gap between our potential revenue and our actual revenue. But the more I looked… the more I allowed myself to explore our revenue numbers, the more I could see the real opportunities to reshape our company, our product, our brand — and my own personal identity. The numbers told me a great story about what was possible if I was willing to stick it out. So I’ll take that refund this year and remind myself that it’s a symbol of a much bigger investment: doing great work, creating things of immense value, and aiming to be exceptional. Numbers give us a lot of information about what’s working and what’s not in a business. They can tell us a pretty interesting story… if we’re willing to listen to it. Numbers might not tell us everything we need to know but they certainly help us ask better questions and point to new possibilities. Throughout May, we’ll be featuring candid conversations with small business owners who have changed course because they paid attention to the numbers — everything from profit margin to time management to website traffic from Pinterest. You’ll hear from Jennifer Johansson who found a new opportunity to sell her art after one of her Pins went viral. You’ll hear from Grace & Vine founder Madison Wetherill who made a big decision about which of her two businesses she should put her focus on after running the numbers. You’ll also hear from Systems Saved Me founder Jordan Gill about how she ran the numbers to decide both on her pricing and the unique way she delivers her service. And, you’ll hear from Do Less author and Origin Collective founder Kate Northrup about how she discovered doing less actually allowed her to accomplish more as a mother, wife, and entrepreneur during a special LIVE episode. Plus, I spoke with Rita Barry about the ways she looks at traffic and conversion rate numbers with her clients. And, you’ll hear from a member of the Bench bookkeeping team about ways you can dig into your business finances. As you listen, I challenge you to get curious about what your business’s numbers might be revealing about your own next steps as a business owner. Give yourself the opportunity to peer into numbers you might have been avoiding (like your business expenses or your sales conversion rate). And challenge yourself to take a fresh look at numbers you thought you had a handle on. Spend plenty of time just noticing these numbers. You don’t need to make decisions yet. Give yourself permission to just look — no action necessary — so you can form a full understanding of what’s going on. *** Have you changed course in your business because you got real with the numbers? Have you discovered a new opportunity right under your nose when you examined your traffic, profit margin, or conversion rate? We want to hear about it! Share your story on Instagram and tag me, @tara_mcmullin and use the hashtag #explorewhatworks.
https://medium.com/help-yourself/running-your-business-by-the-numbers-what-works-for-tara-mcmullin-2fcce950dfc2
['Tara Mcmullin']
2019-05-02 16:41:53.015000+00:00
['Sales', 'Entrepreneurship', 'Business', 'Small Business', 'Podcast']
Building a simple lane detection iOS app using OpenCV
Computer Vision Have you ever wanted to build an app that adds stickers to a face? Or maybe an app that can read text on boards for visually impaired users? Apps with features such as those mentioned above use some form of computer vision algorithm; a piece of code that tries to make sense of what the iOS device is able to see. There are some frameworks and libraries out there that are able to achieve face detection or text extraction in a few lines of code without needing to go into the details of how they achieve it. However in some cases the features offered by those frameworks and libraries might not satisfy your needs. In cases where you need to implement your own computer vision algorithm, the most popular tool to help you achieve your goal is OpenCV. OpenCV is an open source library that contains functions aimed at real-time computer vision. In this post I will show you how to use OpenCV in an iOS app. We will create an iOS app that will detect the road lane in which the user is driving. Computer vision techniques themselves are out of scope for this post. We will learn how to consume OpenCV, which is a C++ library, from within our Swift code inside an iOS app. The computer vision algorithm we will use is based on Kemal Ficici’s Hackster.io project. I have ported the Python computer vision algorithm from Kemal’s post to C++ and will be providing it to you in this post. Getting started In this section we will cover the steps to build an iOS app that contains a view controller which will display the back camera feed of the iOS device and overlay any road lane on top of the camera feed on the screen. To achieve that we will: Create SimpleLaneDetection app project Process frames from the back camera Import OpenCV Insert lane detector algorithm into the project Consume lane detector algorithm from Swift Display lane detection results Create SimpleLaneDetection project Let’s start by creating a new Xcode project. Open Xcode and then from the menu select File > New > Project… Next, select the Single View App template and then click on Next. Select Single View App template Name the project SimpleLaneDetection and then click Next. Finally store the project wherever convenient for you and then click Finish. Name the project SimpleLaneDetection The Single View App template creates an app with a single blank screen ready to run. Process frames from the back camera In this section we will show the feed from the back camera of our iOS device on the screen. In the previous step we created the project from a template. The template included a single blank screen named ViewController . Inside the ViewController we will process the camera feed. Let’s open ViewController.swift . We will first need access to the code that will allow us to talk to the camera. We will make use of the AVFoundation framework to do so. Add the following line in ViewController after import UIKit : import AVFoundation AVFoundation is a framework by Apple already included within iOS that will allow us to communicate with the device’s camera. The following steps will leverage code included within the AVFoundation framework. These are classes usually prefixed with AV . Next we will need to create an instance of AVCaptureSession which will coordinate inputs, such as the camera and/or microphone, into outputs such as video, frames or still image capture. 
Let’s create a property to hold an instance of AVCaptureSession in our ViewController : import UIKit import AVFoundation class ViewController: UIViewController { private var captureSession: AVCaptureSession = AVCaptureSession() ... Next let’s add the back camera of our iOS device as an input of our capture session. Add the following function to our ViewController : private func addCameraInput() { guard let device = AVCaptureDevice.DiscoverySession( deviceTypes: [.builtInWideAngleCamera, .builtInDualCamera, .builtInTrueDepthCamera], mediaType: .video, position: .back).devices.first else { fatalError("No back camera device found, please make sure to run SimpleLaneDetection in an iOS device and not a simulator") } let cameraInput = try! AVCaptureDeviceInput(device: device) self.captureSession.addInput(cameraInput) } Note: we won’t be able to run our app on iOS simulators; they don’t have access to cameras. Let’s call our addCameraInput() function from the viewDidLoad() function. override func viewDidLoad() { super.viewDidLoad() self.addCameraInput() // add this line } Access to the camera requires user permission. I won’t delve into managing permissions. In this tutorial we assume that access to the camera will always be granted by the user. However we still need to let the operating system know that we need access to the camera. Open Info.plist and add a new key NSCameraUsageDescription with String value Required for detecting road lanes . As you finish entering the key, Xcode will automatically replace NSCameraUsageDescription with Privacy — Camera Usage Description . Your Info.plist should look like the following: Info.plist with camera usage description We now have access to the camera. Next let’s access each image frame from the camera stream. To access frames in real time we have to create an instance of the AVCaptureVideoDataOutput class. Furthermore we have to tell it to delegate the camera frames to our ViewController , where we will process them. But before we can do that our ViewController must be able to receive those frames. Let’s make our ViewController conform to the AVCaptureVideoDataOutputSampleBufferDelegate protocol: class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate { ... } Next let’s add the function that will receive the frames in our ViewController . func captureOutput( _ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) { // here we can process the frame print("did receive frame") } Now our ViewController is ready to receive and process frames. Let’s create an instance of AVCaptureVideoDataOutput which will output the video frames from the capture session to wherever we want to process them. At the top of the ViewController declare the following property: private let videoDataOutput = AVCaptureVideoDataOutput() Let’s create a function where we will configure the videoDataOutput . We will tell it where to send the frames from the camera and where to get the frames from: the capture session. Add the following function to the ViewController . 
private func getFrames() { videoDataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_32BGRA)] as [String : Any] videoDataOutput.alwaysDiscardsLateVideoFrames = true videoDataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frame.processing.queue")) self.captureSession.addOutput(videoDataOutput) guard let connection = self.videoDataOutput.connection(with: AVMediaType.video), connection.isVideoOrientationSupported else { return } connection.videoOrientation = .portrait } On the third line we tell the video output whom to deliver the frames to by setting the sampleBufferDelegate . In this case self is the instance of the ViewController . Additionally we tell the videoDataOutput that we want to process the frames on a new queue. If you aren’t familiar with DispatchQueue s, just think of them as workers. The main worker is responsible for managing the user interface; any additional intensive task on the main queue can lead to a slow app or, worse, a crash. So it’s a good idea to process frames on another queue. Let’s now wire this up by calling our getFrames() function at the end of the viewDidLoad() function. Lastly let’s verify that we do receive frames. At the end of viewDidLoad insert self.captureSession.startRunning() to start coordinating the inputs and outputs that we previously configured. viewDidLoad should look like the code below: override func viewDidLoad() { super.viewDidLoad() self.addCameraInput() self.getFrames() self.captureSession.startRunning() } Run the app on a device. Watch the console (View > Debug Area > Show Debug Area): you should be able to see “did receive frame” printed out continuously whilst the app is running. Console output when processing frames Now we are able to receive and process frames from the camera feed. Import OpenCV to the project In the previous section we enabled our app to receive and process frames from the back camera of an iOS device. Next we need to detect the road lane in the frame. However the computer vision algorithm that does lane detection requires OpenCV. Therefore in this section we will first fetch and install OpenCV in our iOS app. Let’s download OpenCV 3.4.5. Once downloaded let’s import it into our SimpleLaneDetection app target. Drag and drop opencv2.framework into the project. Drag and drop opencv2.framework Once opencv2.framework is dropped into the project, Xcode will show a window with the options for adding opencv2.framework . For Destination check Copy items if needed. For the Added folders option select the Create groups option. For the Add to targets option check the SimpleLaneDetection target. Click on Finish. Adding opencv2.framework to project options These options will copy opencv2.framework into our project and link the framework to our app. You should find opencv2.framework in Linked Frameworks and Libraries under the General tab of the SimpleLaneDetection app target configuration. Insert lane detection algorithm Let’s add the code to detect where the lane is in the image frame. Let’s add C++ header and implementation files to our app. Don’t worry, you don’t need any C++ knowledge. The C++ computer vision algorithm will be provided. From the menu click on File > New > File... Next search for and select the C++ File template. Add new file to app using C++ File template Click Next, name it LaneDetector and check Also create header file. 
Name the file LaneDetector, check the header file creation checkbox Finally click Next and then Create. Xcode will then prompt you with some options to configure the app to use multiple languages. Click on the Create Bridging Header option. The bridging header file is important as it will allow us to consume our lane detector algorithm by allowing different languages to talk to each other. For now know that it will be needed later on. We will revisit the bridging header later in this post. Let’s open LaneDetector.hpp and copy and paste the code below: LaneDetector.hpp Next open LaneDetector.cpp and copy and paste the code below: LaneDetector.cpp Consume lane detection algorithm using Swift In the previous section we added the lane detector algorithm. The lane detector algorithm overlays the road lane on top of the camera feed and then returns the combined image. However we haven’t yet consumed that code. So let’s do just that in this section. Our Swift code is not able to consume C++ code (at least not at the time of writing). However Objective-C is. Furthermore we can consume Objective-C code from Swift. So let’s create Objective-C code to bridge between Swift and C++. Start by adding a new header file to the project. Select File > New > File… and then select Header File from the iOS templates. Create header file Next name it LaneDetectorBridge. Copy and paste the code below in LaneDetectorBridge.h : Here we are basically declaring a single method in our LaneDetectorBridge class which will take a UIImage instance and return a UIImage instance with the lane overlayed. Next create an Objective-C file that will implement the LaneDetectorBridge interface. Select File > New > File… and then select Objective-C File from the iOS templates. Name it LaneDetectorBridge . Create Objective-C file Once created, edit the file name of the recently created LaneDetectorBridge.m and add an extra m . Your file should be named LaneDetectorBridge.mm . Add an extra “m” to the file extension of LaneDetectorBridge.m The extra m will tell Xcode that this is an Objective-C++ file. LaneDetectorBridge is now allowed to use C++ from within. Next let’s add the code to bridge Swift to our C++ algorithm and back. Copy and paste the code below into LaneDetectorBridge.mm : LaneDetectorBridge.mm LaneDetectorBridge converts UIImage s into the OpenCV image representation. Then it runs lane detection, which returns an image with the lane overlayed on top of it. Finally it converts the OpenCV image representation back to UIImage . One more step before we can consume LaneDetectorBridge from our Swift code is to tell Xcode to make that class accessible to Swift. We do so by declaring the header files to be accessible in our bridging file. Open SimpleLaneDetection-Bridging-Header.h and add the following line: #import "LaneDetectorBridge.h" And lastly we have to convert frames coming from the camera stream into UIImage s and then call our LaneDetectorBridge . 
Replace the contents of the captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) function in ViewController with the following code: guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return } CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly) let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer) let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer) let width = CVPixelBufferGetWidth(imageBuffer) let height = CVPixelBufferGetHeight(imageBuffer) let colorSpace = CGColorSpaceCreateDeviceRGB() var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) guard let quartzImage = context?.makeImage() else { return } CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly) let image = UIImage(cgImage: quartzImage) The code above will convert the camera frame bitmap into a UIImage . We are finally ready to call our LaneDetectorBridge . Add the following line at the end of the captureOutput function: let imageWithLaneOverlay = LaneDetectorBridge().detectLane(in: image) Display lane detection results In the previous section we started processing the images coming from the back camera of an iOS device. The next step is to display those processed images with lanes overlayed. For that let’s add a UIImageView to our ViewController where we will display such images on the screen for the user to view. Open Main.storyboard . Click on the library button located on the toolbar. Open object library Once the object library is open search for UIImageView . Search for UIImageView Next drag and drop Image View into the blank canvas in Main.storyboard . Drag and drop Image View Once the UIImageView is placed on the canvas, hold the control ⌃ key and then drag the UIImageView to a blank area of the canvas. Hold control key and drag the UIImageView onto the blank canvas Notice the UIImageView itself will not move. However once you let go of the mouse a layout pop up menu will appear. Layout options pop up menu On the layout pop up menu we are able to set our layout constraints on the UIImageView relative to the canvas holding this view. Using the command ⌘ key select Center Horizontally in Safe Area , Center Vertically in Safe Area and Equal Heights . This will make the UIImageView cover the height of the screen whilst being centred in it. As for the width we will make the UIImageView automatically resize respecting the aspect ratio of the image contained within it. Select the UIImageView and then open the attributes inspector (View > Inspectors > Show Attributes Inspector). In the attributes inspector set Aspect Fit for the Content Mode option. Let’s create a reference to the UIImageView so we can set the image of the UIImageView from our ViewController programmatically. Open the assistant editor (View > Assistant Editor > Show Assistant Editor). Next, holding the control ⌃ key, drag and drop the UIImageView from Main.storyboard to the inside of the ViewController class. Control + drag and drop UIImageView into ViewController class Once you let go a new pop up will appear with options to configure the reference you are creating in your ViewController class for the UIImageView . 
Reference configuration pop up Name the reference imageView and then click on Connect . The last step is to set the image of the UIImageView to the one outputted by the lane detector algorithm. At the end of the captureOutput method in ViewController add: DispatchQueue.main.async { self.imageView.image = imageWithLaneOverlay } If you recall, in the Process frames from the back camera section we told the video output that we wanted to process the frames on a queue which was not the main queue, the one in charge of handling the user interface. By setting the image on the image view displayed to the user we are updating the user interface. Therefore we have to tell the main worker to do so. And that’s all 🎉! Run the app, point the camera at a road lane and see it in action! Summary In this post we have learnt how to use OpenCV to process images and then display the results back. We learnt that consuming C++ code from Swift is not so straightforward. Swift can’t talk to C++. However Swift can talk to Objective-C. Objective-C can talk to C++ using a special linking language between them called Objective-C++. On the outside Objective-C++ looks like regular Objective-C. On the inside however Objective-C++ is able to call C++ code. Final notes The chosen computer vision algorithm for this post is untested. Furthermore Kemal Ficici also offers a curved lane detection algorithm which I will attempt to convert to C++ in a future post. You can find the full source code for this post here. If you liked this post please don’t forget to clap. Stay tuned for more posts on iOS development! Follow me on Twitter or Medium!
https://medium.com/onfido-tech/building-a-simple-lane-detection-ios-app-using-opencv-4f70d8a6e6bc
['Anurag Ajwani']
2020-06-16 14:01:26.727000+00:00
['Computer Vision', 'Opencv', 'Swift', 'Ios Development', 'iOS']
Why we invested in Access Fintech
The Access Fintech team I remember the first time I visited a bank’s trading floor. It was both a huge relief and mild disappointment to discover that it is nothing like the Wolf of Wall Street-style high octane drama that Hollywood would have us believe. The overwhelming sound is one of humming computers and screens and people tapping out Bloomberg or Symphony chat messages and emails, thousands and thousands of emails. This chatter and hum is how capital markets operate — and how billions of dollars get traded in millions of individual transactions every day. If I told you that between 3 and 6 percent of all capital markets trades fail or are completed late, maybe upon first hearing, that wouldn’t seem so bad. But think about it: as a consumer, would you accept that roughly every twentieth payment you make with your bank card doesn’t go through? I did the rough math and, for me, that would mean that I would have to chase failed payments at least once per week. Or would you be happy if your video streaming service went out for roughly an hour randomly every day? We are unlikely to tolerate this for long as consumers, even if the main consequence is that we feel annoyed, and time has been wasted. For capital market participants, the consequences are much more far-reaching — from employing thousands of people in middle and back offices, whose job it is to chase failed trades (over phone and email), to having to reserve precious capital on their balance sheets against incomplete trades, to facing higher risk from open positions sitting on their books for days. Not to mention the potentially unhappy clients, lost profits, and even fines. So, like so many of the problems B2B software solves, this one is as unglamorous as it is big, persistent, and tough to crack. The root cause of this problem is that banks, asset managers, and other capital market participants don’t have a shared format or “ontology” in which to share trade data and, historically, have not been willing to share their trading book data on a network basis. This is the problem Access Fintech, our latest investment, is solving. Founded in 2016 by fintech veterans Roy Saadon and Steve Fazio, Access Fintech’s platform allows capital market participants to share data about their trades in whichever format best suits them, from Excel spreadsheets to two-way APIs. That data is then cleansed, normalised and shared securely with counterparties, so that trade exceptions can be handled more efficiently and with less risk. Access Fintech’s product team works relentlessly with market participants to agree workflows for handling exceptions, which then can become the “de facto” market standard. And to be clear, this is not an improvement, this is a genuine transformation. This is moving the market from millions of disparate phone calls and emails every day to a complete shared workflow and single source of truth. The Covid-19 crisis has proven a breakthrough moment for CEO Roy, who previously founded the post-trade processor Traiana, which London-based inter-dealer broker ICAP paid $238m to acquire in 2007, and for the Access Fintech team. The sharp rise in market volatility at the peak of the crisis in March triggered a similarly sharp uptick in failed and delayed trades. With a growing backlog of trade exceptions to clean up, customers flocked to Access Fintech. The last few weeks — and weekends! — have seen Roy and the team working tirelessly to onboard this wave of new customers. 
Their efforts were evident in the customer reference calls I took during our due diligence. I’ve always found these calls to be one of the most interesting parts of the investment process, but talking to Access Fintech customers was particularly engaging. Numerous senior software buyers at leading banks and asset managers told me the platform was “game changing”, and suggested it is “only the start” of its potential applications. And they’re right. In raising their Series B funding, Access Fintech will forge ahead with something that has eluded capital markets for decades: a shared ontology and workflow for resolving commonplace operational challenges. The platform has the potential to bring greater efficiency to a wide range of verticals, from derivatives payment affirmation, to loan settlements, and beyond. Access Fintech has already won the backing of major capital markets players — we are delighted to be leading this $20m round alongside existing investors Goldman Sachs, JPMorgan and Citi, and with Deutsche Bank joining as a new investor. In the coming months, we’re going to be helping the business build out its team, and will be on the hunt for senior executives who can help it scale. We’ll support the team as it doubles down on onboarding the backlog of customers waiting to get onto the platform, and as they invest in sales and customer success teams. The Access Fintech team will also use this funding to build out the product further, engaging with the finance industry to define new and shared ways to resolve their most critical operational issues. For the Dawn team, Access Fintech is yet another fantastic example of how B2B software has not only remained resilient through 2020, but continues to transform the working lives of thousands of people. As we continue to navigate the uncharted territory of a uniquely distributed workforce and ever increasing market volatility, Access Fintech is uniquely poised to solve some of the most urgent pain points of capital markets operations and beyond.
https://medium.com/dawn-capital/why-we-invested-in-access-fintech-c6349738bb14
['Mina Mutafchieva']
2020-10-06 10:53:58.425000+00:00
['Finance', 'Software Development', 'Startup', 'Venture Capital', 'Fintech']
How to Make a More Effective Software Team?
Photo by Dylan Gillis on Unsplash Writing software is often a team effort. It takes effort to make a team effective. In this article, we’ll look at what makes a team effective. No Broken Windows The whole team has to focus on quality. If something’s broken, everyone has to be proactive in fixing issues. Even if they’re small, they add up fast. Some teams have a quality officer to look for issues and delegate people to fix them. The whole team needs to focus on quality. Boiled Frogs The boiled frog analogy is where a frog slowly boils in the pot as the water gets hot gradually. After a long time, the pot’s water is so hot that it has boiled the frog. In the same vein, we don’t want our software to get so gradually worse in quality that the issues become a big problem later on. Everyone assumes that everyone else is handling those issues while in fact nothing has changed. We should actively monitor the environment for changes. Also, we shouldn’t let scope creep slide if we ever want to finish a project. Even if some change has been approved by the team, we still need to review whether we can do all the changes required. We should look for things that aren’t in the original requirement and think about whether we can do them at all. Then we have to take them out if they aren’t. Communicate Communication is always important. Without it, we’ll step on each other’s toes. The team also needs to communicate clearly with people outside the team. People look forward to meeting with teams with a distinct personality because they’ll be prepared to make everyone feel good. They produce easy-to-read and clear documentation. They speak with one voice and may even have a sense of humor. We have to generate a brand so that we’re seen as one outside the team. We may come up with a logo or a name to make our team memorable. Don’t Repeat Yourself Duplication is always bad. Duplicated work between different members doesn’t help us and leads to wasted effort and a maintenance nightmare. We need communication and a project librarian. The project librarian tracks the work that is done by people so that we won’t delegate work twice to different people. They’ll spot impending duplication by reading the material they’re handling. The librarian is the go-to person for tracking the work. If the task is too big for one librarian, then different people should be appointed to focus on different parts of the work. Orthogonality Different functions of a software team like analysts, architects, designers, programmers, testers, etc. can’t work in isolation. It’s a mistake to think that they can all work without talking to each other. Therefore, it’s better to organize around functionality rather than job functions. We should split the team functionally so that each small team is responsible for a particular functional aspect of the final system. Then the teams can organize internally as they see fit to get things done. Functionality doesn’t have to mean use cases. It can be technical functionality like data manipulation, the user interface and things like that. These teams can do their own thing, and they won’t be too coupled to each other if we make our software systems orthogonal and decouple modules from each other. Then if we decide to change anything owned by those small teams, the changes are done by those small teams. This is because the change won’t affect other teams, thanks to the orthogonality. 
We can reduce the number of interactions between individuals’ work, reducing time to delivery, increasing quality, and reducing the number of defects. Everyone in the team is responsible for a given functionality, so they feel more ownership. This works well with responsible developers and strong project management. Then the project would need one technical and one administrative lead. The technical lead would look at the big picture and eliminate duplicate effort. The administrative head would schedule resources for teams' needs, monitor progress and decide on priorities in terms of business needs. Photo by You X Ventures on Unsplash Conclusion We can create effective teams by eliminating duplicate effort and communicating effectively inside and outside of the team. Also, splitting teams by functionality makes more sense since software modules are supposed to be designed orthogonally so that one part doesn’t affect another. Therefore, they can be built without much coupling between each other.
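To make the orthogonality point concrete, here is a minimal sketch in Python with hypothetical module and class names (not taken from the article or any specific project): two functional areas meet only at a narrow interface, so the team owning the data layer can change its internals without touching the team owning the user interface.

```python
# Hypothetical example: the "data manipulation" team owns everything behind
# the DataStore interface; the "user interface" team depends only on the
# interface, never on a concrete implementation.
from typing import Protocol


class DataStore(Protocol):
    def find_user(self, user_id: int) -> str:
        """Return the display name for a user."""
        ...


class SqlDataStore:
    """Owned by the data team; its internals can change freely."""

    def __init__(self) -> None:
        self._rows = {1: "Ada", 2: "Grace"}  # stand-in for a real database

    def find_user(self, user_id: int) -> str:
        return self._rows.get(user_id, "unknown")


class ProfileScreen:
    """Owned by the UI team; it only knows about the DataStore interface."""

    def __init__(self, store: DataStore) -> None:
        self._store = store

    def render(self, user_id: int) -> str:
        return f"Profile: {self._store.find_user(user_id)}"


if __name__ == "__main__":
    screen = ProfileScreen(SqlDataStore())
    print(screen.render(1))  # Profile: Ada
```

Swapping SqlDataStore for any other implementation of the interface changes nothing for the UI team, which is exactly the decoupling the article argues teams should be organized around.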
https://medium.com/dev-genius/how-to-make-a-more-effective-software-team-2c9f16dd75c9
['John Au-Yeung']
2020-06-25 15:28:35.776000+00:00
['Product Management', 'Programming', 'Software Development', 'Productivity', 'Web Development']
Data representations for neural networks — Tensor, Vector and Scalar Basics
Key attributes of Tensors A tensor is defined by three key attributes: Number of axes (rank) — For instance, a 3D tensor has three axes, and a matrix has two axes. This is also called the tensor’s ndim in Python libraries such as Numpy. Shape — This is a tuple of integers that describes how many dimensions the tensor has along each axis. For instance, the previous matrix example has shape (3, 5), and the 3D tensor example has shape (3, 3, 5). A vector has a shape with a single element, such as (5,), whereas a scalar has an empty shape, (). Data type (usually called dtype in Python libraries) — This is the type of the data contained in the tensor; for instance, a tensor’s type could be float32, uint8, float64, and so on. On rare occasions, you may see a char tensor. Note that string tensors don’t exist in Numpy (or in most other libraries), because tensors live in preallocated, contiguous memory segments: and strings, being variable length, would preclude the use of this implementation. You can see all supported dtypes at tf.dtypes.DType. To make this more concrete, let’s look back at the data we processed in the MNIST example. First, we load the MNIST dataset: This is a dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. from keras.datasets import mnist (X_train_images, Y_train), (X_test_images, Y_test) = mnist.load_data() # To display the number of axes of the tensor X_train_images , the ndim attribute: print(X_train_images.ndim) # Here’s its shape: print('shape of X_train_images', X_train_images.shape) # And this is its data type, the dtype attribute: print('dtypes ', X_train_images.dtype) The ndim printout gives 3. The MNIST dataset will be loaded as a set of training and test inputs (X) and outputs (Y). The inputs are samples of digit images while the outputs contain the numerical value each input represents. So what we have here is a 3D tensor of 8-bit integers. More precisely, it’s an array of 60,000 matrices of 28 × 28 integers. Each such matrix is a grayscale image, with coefficients between 0 and 255. Let’s display the fourth digit in this 3D tensor, using Matplotlib’s imshow function. Below are some details on this function from its official docs. The input may either be actual RGB(A) data, or 2D scalar data, which will be rendered as a pseudocolor image. For displaying a grayscale image set up the color mapping using the parameters cmap=’gray’, vmin=0, vmax=255. The number of pixels used to render an image is set by the axes size and the dpi of the figure. A note on image size, pixels and numpy arrays Images are made up of pixels and each pixel is a dot of color. Let’s say I have a cat image of 1200×800 pixels. When an image is loaded into a computer, it is saved as an array of numbers. Each pixel in a color image is made up of a Red, Green and Blue (RGB) part. Each part can take any value between 0 and 255, with 0 being the darkest and 255 being the brightest. In a grayscale image, each pixel is represented by just a single number, for example between 0 and 1 after normalization. If a pixel is 0, it is completely black; if it is 1 it is completely white. Everything in between is a shade of gray. So, if the cat image was black and white, it would be a 2D numpy array with shape (800, 1200). As it is a color image, it is in fact a 3D numpy array (to represent the three different color channels) with shape (800, 1200, 3). Now let’s display our MNIST image data as an image, i.e., on a 2D regular raster. 
digit = X_train_images[4] plt.imshow(digit, cmap=plt.cm.binary) # plt.imshow only draws the picture; it doesn't display it. To display the picture, you need to call plt.show(). plt.show() Let’s inspect a few examples of this MNIST dataset, containing only grayscale images. I will use the subplot() function. The Matplotlib subplot() function can be called to plot two or more plots in one figure. Matplotlib supports all kinds of subplots including a 2x1 vertical, 2x1 horizontal or 2x2 grid. Here are the arguments for matplotlib.pyplot.subplot: *args, (int, int, index). Three integers (nrows, ncols, index). The subplot will take the index position on a grid with nrows rows and ncols columns. index starts at 1 in the upper left corner and increases to the right. There is also an important difference between plt.subplots() and plt.subplot() , notice the missing 's' at the end. We can use plt.subplots() to make all the subplots at once; it returns the figure and axes (plural of axis) of the subplots as a tuple. A figure can be understood as a canvas where you paint your sketch. # create a subplot grid with 2 rows and 1 column fig, ax = plt.subplots(2,1) Whereas, you can use plt.subplot() if you want to add the subplots separately. It returns only the axis of one subplot. fig = plt.figure() # create the canvas for plotting ax1 = plt.subplot(2,1,1) # (2,1,1) indicates total number of rows, columns, and figure number respectively ax2 = plt.subplot(2,1,2) In many cases, plt.subplots() is preferred because it gives you easier options to directly customize your whole figure # for example, sharing x-axis, y-axis for all subplots can be specified at once fig, ax = plt.subplots(2,2, sharex=True, sharey=True) whereas, with plt.subplot() , one will have to specify individually for each axis which can become cumbersome. fig = plt.figure() for i in range(9): plt.subplot(3, 3, i+1) plt.tight_layout() plt.imshow(X_test_images[i], cmap='gray', interpolation='none') plt.title("Digits: {}".format(Y_test[i])) plt.xticks([]) plt.yticks([]) The tight_layout call fits plots within your figure cleanly. tight_layout automatically adjusts subplot params so that the subplot(s) fits into the figure area. This is an experimental feature and may not work for some cases. It only checks the extents of ticklabels, axis labels, and titles. An alternative to tight_layout is constrained_layout. Manipulating tensors in Numpy In the previous example, we selected a specific digit along the first axis using the syntax X_train_images[i] . Selecting specific elements in a tensor is called tensor slicing. Let’s look at the tensor-slicing operations you can do on Numpy arrays. The following example selects digits #10 to #100 (#100 isn’t included) and puts them in an array of shape (90, 28, 28): my_slice = X_train_images[10:100] print(my_slice.shape) It’s equivalent to this more detailed notation, which specifies a start index and stop index for the slice along each tensor axis. Note that : is equivalent to selecting the entire axis: # The below slice notation with colon (:) is equivalent to the previous implementation my_slice = X_train_images[10:100, :, :] my_slice.shape # Even the below slice notation with colon (:) is also equivalent to the previous implementation my_slice = X_train_images[10:100, 0:28, 0:28] my_slice.shape A note on Python slice notation (:) So what does the 3 mean in somesequence[::3] ?
: is the delimiter of the slice syntax to 'slice out' sub-parts in sequences , [start:end] [1:5] is equivalent to "from 1 to 5" (5 not included) [1:] is equivalent to "1 to end" [len(a):] is equivalent to "from length of a to end" Remember that [1:5] starts with the object at index 1, and the object at index 5 is not included. The third parameter is the step. So, it means ‘nothing for the first argument, nothing for the second, and jump by three’. It gets every third item of the sequence sliced. [::3] just means that you have not specified any start or end indices for your slice. Since you have specified a step, 3, this will take every third entry of something starting at the first index. For example: '123123123'[::3] # '111' source a[start:stop] # items start through stop-1 a[start:] # items start through the rest of the array a[:stop] # items from the beginning through stop-1 a[:] # a copy of the whole array There is also the step value, which can be used with any of the above: a[start:stop:step] # start through not past stop, by step The key point to remember is that the :stop value represents the first value that is not in the selected slice. So, the difference between stop and start is the number of elements selected (if step is 1, the default). The other feature is that start or stop may be a negative number, which means it counts from the end of the array instead of the beginning. So: a[-1] # last item in the array a[-2:] # last two items in the array a[:-2] # everything except the last two items Similarly, step may be a negative number: a[::-1] # all items in the array, reversed a[1::-1] # the first two items, reversed a[:-3:-1] # the last two items, reversed a[-3::-1] # everything except the last two items, reversed Python is kind to the programmer if there are fewer items than you ask for. For example, if you ask for a[:-2] and a only contains one element, you get an empty list instead of an error. Sometimes you would prefer the error, so you have to be aware that this may happen. Relation to slice() object The slicing operator [] is actually being used in the above code with a slice() object using the : notation (which is only valid within [] ), i.e.: a[start:stop:step] is equivalent to: a[slice(start, stop, step)] Slice objects also behave slightly differently depending on the number of arguments, similarly to range() , i.e. both slice(stop) and slice(start, stop[, step]) are supported. To skip specifying a given argument, one might use None , so that e.g. a[start:] is equivalent to a[slice(start, None)] or a[::-1] is equivalent to a[slice(None, None, -1)] .
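As a quick recap of the slicing rules above applied to tensors, here is a small, self-contained sketch. It uses a random stand-in array with the same shape and dtype as X_train_images so that it runs without downloading MNIST; with the Keras data loaded, the real array behaves the same way.

```python
# Stand-in for the MNIST training tensor: same shape and dtype, random values.
import numpy as np

X_train_images = np.random.randint(0, 256, size=(60000, 28, 28), dtype=np.uint8)

my_slice = X_train_images[10:100]      # digits #10 to #99
print(my_slice.shape)                  # (90, 28, 28)

# Crop the bottom-right 14x14 corner of every image
corner = X_train_images[:, 14:, 14:]
print(corner.shape)                    # (60000, 14, 14)

# Every third image, using the step argument
every_third = X_train_images[::3]
print(every_third.shape)               # (20000, 28, 28)

# The colon notation is sugar for slice objects
assert np.array_equal(X_train_images[10:100], X_train_images[slice(10, 100)])
assert np.array_equal(X_train_images[::-1], X_train_images[slice(None, None, -1)])
```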
https://medium.com/analytics-vidhya/data-representations-for-neural-networks-tensor-vector-scaler-basics-4beae5910398
['Rohan Paul']
2020-12-08 05:52:43.467000+00:00
['Machine Learning', 'Python', 'TensorFlow', 'Python3', 'Data Science']
Who defines your startup’s success?
By Matt Carroll <@MattatMIT> July 19, 2016: A trio of news media stories, videos, and data viz compiled weekly. Get notified via email: send a note to 3toread (at) gmail.com. Matt Carroll runs the Future of News initiative at the MIT Media Lab.
https://medium.com/3-to-read/3-to-read-pokemon-go-newsrooms-who-defines-your-success-why-stats-stink-with-bad-art-7f2c0cbc059b
['Matt Carroll']
2016-07-19 12:25:44.208000+00:00
['Journalism', 'Stats', 'Media', 'Pokemon Go', 'Data']
Introducing FastBert — A simple Deep Learning library for BERT Models
Usage Import the required packages. Please note that I have not included the usual suspects such as os, pandas, etc. Define general parameters and path locations for data, labels and pretrained models (some good engineering practices). Tokenizer Create a tokenizer object. This is the BPE-based WordPiece tokenizer and is available from the magnificent Hugging Face BERT PyTorch library. The do_lower_case parameter depends on the version of the BERT pretrained model you have used. In case you use uncased models, set this value to true, else set it to false. For this example we have used the BERT base uncased model and hence the do_lower_case parameter is set to true. GPU & Device Training a BERT model does require a GPU, or preferably multiple GPUs. In this step we can set up the GPU parameters for our training. Note that in future releases, this step will be abstracted from the user and the library will automatically determine the correct device profile. BertDataBunch This is an excellent idea borrowed from the fast.ai library. The databunch object takes training, validation and test csv files and converts the data into the internal representation for BERT. The object also instantiates the correct data-loaders based on the device profile, batch_size and max_sequence_length. The DataBunch object is given the location of the data files and the label.csv file. For each of the data files, i.e. train.csv, val.csv and/or test.csv, the databunch creates a dataloader object by converting the csv data into BERT-specific input objects. I would encourage you to explore the structure of the databunch object using a Jupyter notebook. BertLearner Another concept in line with the fast.ai library, BertLearner is the ‘learner’ object that holds everything together. It encapsulates the key logic for the lifecycle of the model such as training, validation and inference. The learner object takes the databunch created earlier as input, along with some other parameters such as the location of one of the pretrained BERT models, FP16 training, and the multi_gpu and multi_label options. The learner class contains the logic for the training loop, validation loop, optimiser strategies and key metrics calculation. This helps developers focus on their custom use cases without worrying about these repetitive activities. At the same time the learner object is flexible enough to be customised either by using its flexible parameters or by creating a subclass of BertLearner and redefining the relevant methods. The learner object does the following upon initialisation: Creates a PyTorch BERT model and initialises it with the provided pre-trained weights. Based on the multi_label parameter, the model class will be BertForSequenceClassification or BertForMultiLabelSequenceClassification. Assigns the model to the right device, i.e. a CUDA-based GPU or the CPU. If Nvidia Apex is available, the distributed processing functions of Apex will be utilised. fast-bert provides a bunch of metrics. For multi-class classification, you will generally use accuracy, whereas for multi-label classification you should consider using accuracy_thresh and/or roc_auc. Train the model Start the model training by calling the fit method on the learner object. The method takes the number of epochs, the learning rate and the optimiser schedule_type as input. The following schedule types are supported (again courtesy of the Hugging Face BERT library): none : always returns learning rate 1. 
warmup_constant : Linearly increases learning rate from 0 to 1 over warmup fraction of training steps. Keeps learning rate equal to 1. after warmup. warmup_linear : Linearly increases learning rate from 0 to 1 over warmup fraction of training steps. Linearly decreases learning rate from 1. to 0. over remaining 1 - warmup steps. warmup_cosine : Linearly increases learning rate from 0 to 1 over warmup fraction of training steps. Decreases learning rate from 1. to 0. over remaining 1 - warmup steps following a cosine curve. If cycles (default=0.5) is different from default, learning rate follows cosine function after warmup. warmup_cosine_hard_restarts : Linearly increases learning rate from 0 to 1 over warmup fraction of training steps. If cycles (default=1.) is different from default, learning rate follows cycles times a cosine decaying learning rate (with hard restarts). warmup_cosine_warmup_restarts : All training progress is divided in cycles (default=1.) parts of equal length. Every part follows a schedule with the first warmup fraction of the training steps linearly increasing from 0. to 1., followed by a learning rate decreasing from 1. to 0. following a cosine curve. Note that the total number of all warmup steps over all cycles together is equal to warmup * cycles On calling the fit method, the library will start printing the progress information on the logger object. It will print training and validation losses, and the metric that you have requested. In order to repeat the experiment with different parameters, just create a new learner object and call fit method on the same. If you have tons of GPU compute, then you can possibly run multiple experiments in parallel by instantiating multiple databunch and learner objects at the same time. Once you are happy with your experiments, call the save_and_reload method on learner object to persist the model on the file structure.
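The code cells referenced in this walkthrough are not reproduced inline above, so here is a minimal sketch of the flow it describes (tokenizer, databunch, learner, fit, save). It is based on the fast-bert README from around the time of writing; the path variables are placeholders, and the exact parameter names and module paths are assumptions that may differ between fast-bert versions, so check the version you have installed.

```python
# Sketch only: parameter names follow the early fast-bert API described above
# and may need adjusting for the fast-bert version you have installed.
# DATA_PATH, LABEL_PATH, BERT_PRETRAINED_PATH and MODEL_PATH are placeholder
# paths you define yourself (the "general parameters" step above).
import logging
import torch
from pytorch_pretrained_bert.tokenization import BertTokenizer
from fast_bert.data import BertDataBunch
from fast_bert.learner import BertLearner
from fast_bert.metrics import accuracy

logger = logging.getLogger()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tokenizer: uncased pretrained model, hence do_lower_case=True
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)

# DataBunch: converts train.csv / val.csv plus labels.csv into BERT input objects
databunch = BertDataBunch(
    DATA_PATH, LABEL_PATH, tokenizer,
    train_file="train.csv", val_file="val.csv", label_file="labels.csv",
    text_col="text", label_col="label",
    bs=32, maxlen=512,
    multi_gpu=False, multi_label=False)

# Learner: wraps the model, training loop, validation loop and metrics
metrics = [{"name": "accuracy", "function": accuracy}]
learner = BertLearner.from_pretrained_model(
    databunch, BERT_PRETRAINED_PATH, metrics, device, logger,
    is_fp16=False, multi_gpu=False, multi_label=False)

# Train with a warmup schedule, then persist the fine-tuned model
learner.fit(4, lr=3e-5, schedule_type="warmup_cosine")
learner.save_and_reload(MODEL_PATH, "simple_text_classifier")
```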
https://medium.com/huggingface/introducing-fastbert-a-simple-deep-learning-library-for-bert-models-89ff763ad384
['Kaushal Trivedi']
2019-09-14 00:24:58.232000+00:00
['Machine Learning', 'Artificial Intelligence', 'NLP', 'Naturallanguageprocessing', 'Bert']
What is the difference between Git and GitHub?
What is the difference between Git and GitHub? Often new programmers confuse Git and GitHub. Both are used by software developers on a daily basis. The two are complementary, but they are not the same. In this article I’ll explain the main difference between them, so you’ll never confuse them again. Photo by Luke Chesser on Unsplash Git Git is software for managing different versions of your code. This functionality is also called version control or source code management. It runs locally on your computer as a command line tool, i.e. you typically interact with it via the terminal, even though there are plenty of Git clients that make working with Git more convenient. Git is not the only tool for version control or source code management. Besides Git you have Subversion (SVN), Mercurial, CVS, etc. Here is a list of different version control software. GitHub GitHub is a web service where you can upload your code repository. As a web application it provides a web interface with buttons, textboxes, etc. that you can use to interact with it. It provides the functionality of Git and also some features of its own. Besides GitHub there are other similar web services, for example GitLab, BitBucket, SourceForge, etc. Here is a list of GitHub alternatives. Summary Below I list the main differences between GitHub and Git. Do you have any questions? Did I miss anything? Share your thoughts and comment below
https://medium.com/javascript-in-plain-english/what-is-the-difference-between-git-and-github-23fc6ac62b13
[]
2020-10-04 19:14:30.839000+00:00
['Version Control', 'Git', 'Software Engineering', 'Software Development', 'Github']
Reinforcement learning and reasoning
Reinforcement learning has seen a lot of progress in recent years. From DeepMind success with teaching machines how to play Atari games, then AlphaGo beating world champions in Go to recent OpenAI’s progress on Dota 2, a multiplayer game where players divided into two teams compete with each other. The common thread is an artificial agent operating in a virtual world, where the prize is clear (e.g. win the game), but strategies to attain this prize are complex and evolving with responses of other players. On the other hand people are experimenting with AI agents operating in real-world. Each clip of Boston Dynamics gets a lot of press, showing robots performing amazing stunts, as you can see yourself here or here. OpenAI has a structured approach to teaching robots particular skills which demand dexterity, like manipulating objects in a hand. @OpenAI 1. Machine reasoning Natural question is what will be next in the development of AI and reinforcement learning in particular. Classical machine learning feeds on large datasets. With deep learning, smart architecture or both (cf. one-shot learning) you can sometimes reduce the amount and quality of data needed to make a machine learn a particular task, however the learning rate is still sub-optimal compared to humans. If one were to create an Artificial General Intelligence (AGI), a machine able to perform any task at least as well as an average human, then in particular one would need to overcome learning difficulties. Those difficulties arise in computing power, transfer learning and reinforcement learning. The reason why humans are able to learn on small datasets is two-fold: They are able to accumulate knowledge across various fields and observe similarities. They can reason upon given data, in particular if they know how to do A, and a task B is somewhat similar, they might be able to do the task B. The equivalent of the first point in AI is called transfer learning (or meta-learning), and currently is still largely underdeveloped. The same goes for reasoning with machine reasoning being in its infancy. Eventually all comes down to reasoning, with transfer learning being a relatively easier task — if you can reason, you can in particular classify and compare, and thus you are able to find similarities in tasks you want to approach, and then act upon it, by training for a new skill (of course, it’s much harder than that, but that’s another post). Machine reasoning is a true bottleneck in the current development of AI. We can’t even understand how context works — just talk with any bot to verify it yourself — not to mention any harder tasks. So far logical reasoning was outside of scope of machine learning. 2. Mathematics A perfect example of pure reasoning to test any machine reasoning capabilities is mathematics. Each proof of a theorem consists of many steps, logically building upon each other, often dependent on already proven facts. Mathematicians write their proofs in natural language, which is to some extent formal, but far from formalized (in the sense of formal languages). Validation of a proof is done by other mathematicians, fellow experts in a given field, after sending a paper to a scientific journal. This procedure has many drawbacks: not only it is slow, but it depends on other experts. Moreover proofs and arguments themselves have often many omissions, after all one doesn’t have to write down what’s widely accepted in a given community (branch of mathematics). 
Publishing incorrect proofs in prestigious journals is not unheard of. My road to artificial intelligence started by asking whether this procedure can be made more reliable and much faster, especially as my domain of research, Langlands program, was extremely complicated. The first natural thing that comes to mind is that you should made all those proofs formal. That was already an idea of Voevodsky couple of years ago, but more or better formalization is not an answer. Nobody is going to force mathematicians to write their proofs more formally (as it would destroy the whole pleasure of doing research and is widely considered a waste of time). Actually formalizing mathematics is a whole field of mathematics with its own dogma and problems. People rewrite a proof in one of formal languages (Coq, Mizar, Isabelle to name a few) and then can validate the proof automatically. However the labour needed to formalize a proof in one of those languages is huge — for example formal verification of Feit-Thompson theorem took 6 years of collaborative effort. This is what caused me to think that if we are to succeed at formalization and verification of mathematics (a baby step towards reasoning), we need to automate the whole process. I’ve written my ideas down as DeepAlgebra program and kept on looking for how to make it out, with no real success so far (I should explain it in another post). That was before I learned more about reinforcement learning. 3. Reinforcement learning and mathematics I started with reinforcement learning for a reason. I view mathematics as a game with a clear prize (proof of theorem) and a complex strategy (steps of the proof). The way mathematicians do mathematics is similar to how reinforcement learning works: they search for potential proofs, by going deeper and deeper into a tree of the game (all possible proofs), backtracking whenever their intuition (learned by arriving at a contradiction) is telling them to do so. They search for a right path, right strategy to arrive at the proof, by manipulating objects in the formulation of the theorem, introducing new entities, recalling already proven theorems and measuring their progress by evaluating how many more steps they would presumably need (i.e. how many lemmas/facts) to prove their theorem. Mathematicians generally have an intuition how far away they are from a solution. It comes once they worked on a problem for some time and explored a couple of paths. Intuition of mathematicians is similar to an intuition that Go players have. They too explore a tree of the game (in their heads playing moves ahead and thinking about what an opponent might respond and where it leads), they have an intuition about ‘shapes’ (formation of stones on a board) just as mathematicians have an intuition about mathematical entities they introduce. Their goal is to win the game but the game is too complex to determine it before-hand, thus they need to evaluate their position on a board with each move to see whether they are winning or losing, and how that should influence their moves. Go is complex because it allows for local vs global phenomena, you can be losing on one side of the board, but you still win overall. You have the same phenomena in mathematics: ‘local’ are lemmas and already proven theorems, ‘global’ is the theorem you want to prove right now. Strategies play a crucial part in both games as they allow you to have a general direction for approaching the game. 
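The tree-search-with-backtracking analogy can be made concrete with a toy sketch (mine, not the author's): a depth-first search over a few hand-written inference rules that backtracks whenever a branch of the proof tree cannot be completed. Real proof search and the RL approaches discussed here are vastly more sophisticated; this only illustrates the shape of the search.

```python
# Toy backward-chaining proof search with backtracking, illustrating the
# "tree of possible proofs" analogy from the text. Purely illustrative.

RULES = {
    # conclusion: list of alternative premise sets (all premises in a set must hold)
    "theorem": [["lemma_a", "lemma_b"], ["lemma_c"]],
    "lemma_a": [["axiom_1"]],
    "lemma_b": [["axiom_2"], ["axiom_3"]],
    "lemma_c": [["axiom_4"]],  # axiom_4 is not available, so this branch fails
}
AXIOMS = {"axiom_1", "axiom_2"}

def prove(goal, seen=None):
    """Depth-first search over the proof tree; backtracks on dead ends."""
    seen = seen or set()
    if goal in AXIOMS:
        return [goal]
    if goal in seen or goal not in RULES:
        return None                          # dead end: backtrack
    for premises in RULES[goal]:             # try each alternative proof step
        steps = []
        for p in premises:
            sub = prove(p, seen | {goal})
            if sub is None:
                break                        # this branch fails, try the next one
            steps.extend(sub)
        else:
            return steps + [goal]            # all premises proved
    return None

print(prove("theorem"))  # ['axiom_1', 'lemma_a', 'axiom_2', 'lemma_b', 'theorem']
```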
Overall it seems that reinforcement learning methods similar to those used in AlphaGo can help in approaching mathematical proofs. It also seems like a natural next step for machine learning research in terms of the difficulties involved. 4. Conclusion In order to develop AGI, or at least come closer to building real intelligence, we must teach machines how to reason. Mathematics is a perfect test ground for this task, with huge online repositories of proofs and a formal way of reasoning. Moreover, its similarities to playing a game make it a good target for trying out reinforcement learning techniques. It’s the right time to start thinking about how to incorporate reasoning into machine learning. If it succeeds, we will see a truly next-level AI revolution.
https://pchojecki.medium.com/reinforcement-learning-and-reasoning-1dad5e440690
['Przemek Chojecki']
2018-11-12 17:30:02.379000+00:00
['Deep Learning', 'Artificial Intelligence', 'Mathematics', 'Reasoning', 'Reinforcement Learning']
The Holy Trinity of Anal Sex
I have been thinking about anal sex a lot lately — I mean, who hasn’t? After careful consideration based on my own personal experience, I’ve come to realize that a successful anal encounter requires three things: communication, lubrication, and consent. Subtract any one of those three factors from the equation and you have a recipe for failure. Trust me. I learned the hard way. I’ve had anal sex with exactly two men. Although I was married to the first, our anal experiment took place long before I had a ring on my finger. While we were still dating, I convinced my ex to try anal in the back seat of my secondhand sports car. He consented. Consent works both ways. With my lower body in the back seat and my upper body dangling into the front, I thought I was ready for anal penetration. We were two consenting young adults who didn’t know the first thing about communication or lubrication. We knew even less about anal sex. Thus, my only communication was saying, “Push harder,” which I realize now needed to be said both because I was a back door virgin and because we didn’t use any lube — not even saliva, which would have been insufficient anyhow. Although we successfully achieved anal penetration, it lasted a split second before I tapped out, wailing like a banshee. It took weeks for my body to recover fully. We never tried it again, despite a nearly five-year marriage and ample opportunity. That fledgling experience was when I learned the importance of lubrication while attempting anal sex. Not using lube was a mistake, but it was my mistake. As such, I learned from it, and who doesn’t love a good learning experience?
https://medium.com/traceys-folly/the-holy-trinity-of-anal-sex-2f35df1224e8
['Tracey Folly']
2020-03-11 17:00:47.598000+00:00
['Health', 'Relationships', 'Sex', 'Sexuality', 'This Happened To Me']
What Netflix’s “The Social Dilemma” Got Wrong
Photo by Sara Kurfeß on Unsplash What Netflix’s “The Social Dilemma” Got Wrong Capitalists don’t care about the social effects of machines — and never have. The Social Dilemma is one in a long line of shows Netflix has put out in 2020 that make me wonder, only half sarcastically, if AI is trying urgently to deliver us a message. Netflix’s latest doc shows, with the use of classic reality TV-style confessional booths, the leaders of Silicon Valley looking back in horror upon their Frankensteinian monsters. Just in time for Halloween. These confessionals make for a frustrating viewing experience, as a parade of tech big-wigs address the symptoms of the real problem — the inherent nature of the profit motive — without ever naming it. Tristan Harris, a former Google employee nicknamed the “closest thing Silicon Valley has to a conscious” fails to articulate what this “social dilemma” is. He lists some issues technology causes — interference with elections, selling data, political polarization, but never arrives at the root of these problems. As he says in his interview: “But is there something beneath all these problems that’s causing all these things to happen at once?… There’s a problem happening in the tech industry and it doesn’t have a name…” He trails off and we cut to the next scene. Let’s be clear: there is a name for this problem. It’s late stage capitalism. Former Facebook and Pinterest exec Tim Kendall says: “Everyone in 2006, including all of us at Facebook, just had total admiration for Google and what Google has built, which was this incredibly useful service that did, far as we could tell, lots of goodness for the world. And they built this parallel money machine. We had such envy for that, and it seemed so elegant to us. And so perfect…” Kendall emphasizes the positive aspects of technology, but this capacity-for-good is always in direct relationship to technology’s capacity-to-profit. The unnamed problem in the social dilemma is that we are incapable, under late capitalism, of thinking the capacity-for-good independent of the capacity-to-profit. This problem did not come from nowhere. The longest chapter in Marx’s Das Kapital already tells us that profit, not capacity-for-good, is the reason machines exist. Marx on “Machinery and Large-Scale Industry” Photo by Peter Gonzalez on Unsplash The first thing Marx does in chapter 15 of Das Kapital is dispel naive technological optimism with a JS Mill quote: “It is questionable if all the mechanical invention yet made have lightened the days toil of any human being” (492). Machines don’t make our work easier because that is not the job they were meant to perform; they were meant to make it easier to produce commodities. And capitalists don’t care about the social effects of machines — they never have. When workers used simple tools, they were the primary agents that produced goods. Then machines replaced them. A human being can only knit one line at once, and they need to acquire the skill to do so. Machines can easily knit multiple lines at once, producing products exponentially faster than we can. Under capitalism, skilled laborers gradually become unskilled automatons, pulling levers, pushing buttons. Marx’s theoretical contribution is less about political economy itself and more about how changing material conditions affect social conditions. Marx on the social conditions of capitalism There are two antithetical qualities that govern our social and economic relationships: conflict and cooperation. 
Conflict emerges in the form of class struggle. Capitalists, in order to maintain their status, need to constantly accumulate capital and expand their business. They are constantly in competition with other capitalists, trying to force them out of the market. Laborers, too, are in constant competition with each other. Material conditions affect how these class conflicts emerge. Material conditions include how a good is produced, what goods are produced, the machines used to produce it, the natural conditions enabling its production, and socially necessary labor-time. We’re forced to compete in the capitalist economy; we’re also compelled to cooperate. In the Industrial Revolution, multiple people needed to work together to operate heavy machinery. Today, wage earners come together and collaborate on projects in the workplace. These two forces, competition and cooperation, cause the cycle of production to continue ceaselessly. But Marx can’t give us a full picture of this ceaselessness. Our economic and material conditions have significantly changed since his time. Now, the worsening conditions of workers worldwide is not a reflection of our material conditions at all. Is Marx’s analysis of machinery still relevant in the 21st century? Photo by Oleg Magni on Unsplash Yes. Marx already warned us about The Social Dilemma in 1867. He already told us that more machines, better AI, and more ethical technologists will not solve the problem. Nor will less screen time and social media usage! Placing the burden of responsibility on consumers rather than corporations is as criminally negligent as pretending we can solve the climate crisis with metal straws rather than large-scale economic reform. Marx critiques the myth of the single inventor. He says that machines are products of a historical moment in which countless people make miniscule improvements upon a machine. This important idea gets lost in The Social Dilemma, as names and faces are associated with by-lines such as “inventor of the Facebook like button,” or “gmail chat function.” To view these Silicon Valley technologists as evil scientists, or naive geniuses would be a mistake. Don’t make the technology problem the responsibility of individuals. It is neither the technologists’ fault nor the Facebook users’ fault that technology-use is out of hand. That’s a feature of these devices, not a bug. Developers have been pressured by competition and cooperation to make these things as addictive as possible or suffer the consequences. If the main message you get from this documentary is that you need to reduce your consumption of social media, you have missed the forest for the trees. Sadly, that’s how Netflix frames the ending of this documentary. Placing the burden of responsibility on consumers rather than corporations is as criminally negligent as pretending we can solve the climate crisis with metal straws rather than large-scale economic reform. The solution is neither demonizing Silicon Valley technologists nor shaming people into deleting Facebook. The solution must be systemic economic and political reform. We need to hold the Zuckerbergs and Bannons of the world accountable for their actions. We need a more robust notion of the right-to-privacy. We need to combat surveillance capitalism and disaster capitalism. We need to destroy an economic system built on oscillation between competing for our livelihood and cooperation for survival’s sake. We need to question a system which holds “growth” as the highest good. 
More than anything, we need to value people over profits. Sources Marx, Karl. 1990. Capital Volume I. Penguin Classics Edition. Orlowski, Jeff. 2020. The Social Dilemma. Netflix.com.
https://medium.com/the-anticapital/what-netflixs-the-social-dilemma-gets-wrong-2d3a48f9e488
['Valerie King']
2020-10-29 04:51:43.318000+00:00
['Politics', 'Netflix', 'Facebook', 'Technology', 'Social Media']
Bitcoin: to be or not to be — the future of cryptocurrency!
Bitcoin or Dream of all Pirates Let’s listen to the experts: Valentin Katasonov, the economist and professor at the Moscow State Institute of International Relations, believes that the emergence of Bitcoin is not by accident. It is a sort of test step that will be followed by bigger deals. Now, he is comparing cryptocurrency with gold, building his forecast upon the market price of precious metal. To confirm his words, the economist offers to examine a quite impressive chart. Exchange fluctuations of Bitcoin and gold are almost similar. Coincidence? I think not! What about foreign experts? Some time ago, Paul Krugman, a well-known American economist, said that the usefulness of Bitcoin was more obvious than the usefulness of precious metal. Besides, he stated that Bitcoin could prove to be more useful than “dead gold” for people. At the same time, an entrepreneur and investor John Feffer believes that Bitcoin is the first viable means able to replace gold. Thus, do not consider the coin obsolete for the hundredth time, because the drop in the currency rate can be nothing but a wise step of investors or a lull before the rapid jump. Don’t forget about the boom-and-bust economy. Despite the significant slump of exchange and the absence of coin hype, the number of crypto optimists is not reduced. Success stories just encourage Bitcoin fever. A certain Mr. Smith, a programmer from Silicon Valley, invested in bitcoins in 2010. He spent only 3 thousand dollars. As of today, he is a millionaire who has quitted, converted cryptocurrency into cash, and started traveling all over the world. However, there were opposite examples: the American Laszlo Hanyecz bought two pizzas for 10 thousand bitcoins in 2010. The Briton James Howells is paying lots of money to garbage workers so that they investigate the landfill site. In 2013, he threw away his laptop with 7.5 thousand bitcoins on the hard drive. Prediction of happy future How can the price of Bitcoin grow? Saxo Bank predicts that next year the value of cryptocurrency can exceed 60 thousand dollars and the capitalization can reach 1 trillion dollars. Nevertheless, the bank supposes that, after the dramatic bounce in 2018, Bitcoin will face a rapid fall in 2019 up to the cost of production of 1000 dollars. According to Vadim Merkulov, a Senior Analyst at Freedom Finance, Bitcoin can reach 40 thousand dollars, but its actual value will be several times lesser in the future. “40 thousand is not the limit, but now, the majority of people who just start investing in Bitcoin do not realize its concept,” the expert stresses. Outcome: If you have invested in bitcoins at the proper time, like me, do not hurry up to sell them before it’s too late. The main point here is calm and cold blood. The majority of serious analysts and specialists think that the Bitcoin rate will increase on the New Year’s Eve and will range from 12 000 to 19 000 by March. Moreover, this is just the beginning of the vigorous and continuous rise of cryptocurrency.
https://medium.com/smile-expo/bitcoin-to-be-or-not-to-be-the-future-of-cryptocurrency-41842096620c
[]
2018-10-26 08:17:42.258000+00:00
['Cryptocurrency News', 'Future', 'Cryptocurrency', 'Crypto', 'Bitcoin']
Tips & Features to Enhance Your Google Slides Presentation
1 | Add Icons Icons add a simple visual dimension to help convey meaning and understanding, and they leave an impression with your audience. They’re a subtle and underused detail. Slides Carnival has a post that links to icons you can use in your presentation. There are a few different styles available. These icon sets are incredibly flexible — you can resize and recolour them using the tools available within Google Slides. To resize, simply click and use the squares in the normal way. Some of them will scale automatically, others may need you to also change the line thickness. To recolour, use the fill or line colour tool. The icon sets linked above are free to use but require attribution. Make sure you credit the creators! 💡 Tip: Don’t mix different sets of icons. Find a set that matches your theme/style for a more consistent and professional feel. SlideCarnival pulled together a number of Icon Sets to use. The example above is from the amazing FontAwesome team. 2 | Crop and Mask Images The Crop and Mask tools in Google Slides offer a great way to tweak images quickly to fit the presentation better. Crop allows you to change where the proportions of an image and hide anything outside of that space. For example, imagine you had a landscape image — crop allows you to make that square (or even portrait). Mask applies a shape over the top of the image and hides the content of the image outside of that shape. In the example below, I add a circle mask to the image to soften the edge of the photo. Masks can be customised like any normal shape (e.g adjusting boarder radius or arrow points.) As with any tools like this — use wisely. I find it’s beneficial to keep the shapes simple (circles, part/rounded rectangles) than to get too adventurous with the different options. Masking a photo to soften the edges To find the tools, select the image you want to edit. In the toolbar, around the middle, you’ll find the crop icon. The crop and mask buttons are almost in the same button in the toolbar. Crop is the main button and mask is in the dropdown immediately to the right. After you’ve cropped or masked, if you want to reposition the image inside the cropping, simply double click. This lets you move the image while the crop and mask remain in place. ⚠️ Warning: Something I have discovered in the past, if I have an extremely large filesize image (e.g. 40Mb+) and I crop it, Google does some optimisation of the image in the background that reduces the quality. This primarily affects animated gifs but something to consider. 3 | Create Flat Illustrations with Shapes It may not be as powerful as Sketch, but the number of shapes and formatting options allow you to be really creative. Attributes like the fill and border colours, border radius and shadows layered in the correct way create a world of options. This is a screenshot form one of my Slide decks. The smartphone frame was made from the basic shapes available in Google Slides. I most commonly use this to frame screenshots or mockups of apps so they look like they’re in a phone. It gives a feeling of refinement to your slidedeck. The image to the left was built using the basic google shapes. The phone itself is a rounded rectangle with a transparent fill and a 12px boarder. The notch is a rounded rectangle overlapping the top with a lighter grey circle and rectangle over the top. The buttons on the left, again are rounded rectangles rotated and overlapping the first. Group them together and put your screenshot behind and you’re good to go. 
⬇ Free Template: I made this template available for free here. Simply make a copy of the file and copy the shapes. Note, I recommend grouping them together first! (Select all the shapes, then click “Arrange” and “Group”) While there are transparent images of phones you can use, I find this provides much more flexibility. Screenshots and mockups often come from a variety of device sizes and you can accommodate them simply by resizing the frame. No special design tools needed! P.S. Bonus points for using an animated gif behind it! (See #6 in this list) I’ve also used similar methods for website screenshots/mockups in browsers and even post-its for remote workshops! 4 | Copy Format A slidedeck is not much different to any other digital product. Content is always king — however, visual consistency provides an overall more polished and pleasant experience. Copy format is an excellent way to build this consistency. It will copy all the set formatting of an element (essentially, all the things that appear in the toolbar and in the advanced formatting) such as fill, borders, fonts, shadows etc. The Paint Format button appears on the left in the toolbar near the Undo, Redo and Print icons. To paint the format, click the source item (the one you want to copy from). Then select the Paint Format button in the toolbar. It appears near the printer icon and is a paint roller. Click this and then click the item to paste the format to. Paint Format is best used when you’ve already completed work but have subsequently changed the formatting. 💡 Tip: Quickly duplicate an element or group by holding [ALT] while clicking and dragging. It will make a duplicate under your cursor and place it wherever you release your click. 5 | Replace Image (from URL/Upload) This is a great way to make changes to images without dealing with cropping and formatting from scratch — again, giving you consistency! To do this, select the image you want to update and click “Replace image” in the toolbar (it will be near the centre). This will give you a few options but I most commonly use “Upload” or “From URL.” I use this primarily for creating team slides and updating screenshots and mockups. In the example above, for a “team slide”, you can quickly grab an image from Slack. Always ask permission before using someone’s images though! (Thanks to Nikki Anderson in this example) 6 | Creating Animated Images This is something I’ve been using more often to better convey the experience or journey customers go through. It is always helpful to have the individual screens, but playing them through really helps your audience visualise what you’re explaining. There are two parts to creating animations in slidedecks: capturing the content, then converting it to a gif (fyi, I say gif not jif). The main way I create the content is simply by using the screen recording function on my phone. For iOS, Apple outline the instructions here and the same functionality is available on Android. Once you’ve captured the recording, send the file to your computer. There are some applications that will directly convert the video file to a gif format. However, I actually make a recording of the video using GIPHY Capture. This works for me because of the flexibility in setting loops, export settings and trimming the length of the video. Making a recording with GIPHY Capture Open the screen recording file in Quicktime.
Open GIPHY Capture Resize the capture area (the green area) over the part of the video I want to record Start recording in GIPHY Capture Play the video in Quicktime (press the Spacebar to play) Stop the recording in GIPHY Capture when I’m done. Trim the clip in GIPHY Capture. Export as Gif 💡 Tip: If the Gif file is more than 40Mb, I’d recommend exporting it again with a smaller pixel size. I avoid changing the frame rate as most times I find it too jittery. Now add your new Gif to your slidedeck and wow your audience! I also used the above method for creating the animations used in this article. 7 | Sharing Internally and Making it Comment-able I mentioned at the start that one of the most powerful parts of Google’s Office Suite is its accessibility. Unless there is a strong reason against it, I’d recommend making your deck public inside the organisation with Commenter rights. This information could potentially be valuable to anyone in the company, and part of our role is to break down silos. However, even if you want to take a more cynical view, you simply spend less time granting individual humans access — which is not an effective use of your time — and you avoid the cost-of-delay associated with them waiting for you to share. The Commenter rights are another manifestation of my Default-to-Open approach, encouraging people to ask questions and challenge publicly to allow a broader debate (something I’ve touched on before in my Product Manager Bottleneck article). When I build slidedecks, I build them with self-service in mind. This means I assume the person reading it won’t be listening to me present it at the same time. Creating decks like this removes me as a bottleneck in information distribution, which is one of the biggest challenges faced by growing companies and stifles how effective they can be. You can change sharing settings by clicking the “Share” button in the top right of the screen. Inside share, there are two sections, “People & Groups” and “Get Link.” The second lets you set the link to be accessible to anyone in your organisation as a Viewer, Commenter or Editor. It’s important to note this isn’t either/or. My usual approach is to give the organisation comment rights (under Get Link) and my immediate team edit rights (under People & Groups).
https://medium.com/swlh/tips-features-to-enhance-your-google-slides-presentation-2e5a4c858b85
['Curtis Stanier']
2020-11-01 19:53:35.150000+00:00
['Presentations', 'Communication', 'Product Management', 'Business', 'Startup']
Introduction to Weight & Biases: Track and Visualize your Machine Learning Experiments in 3 Lines of Code
Introduction to Weight & Biases: Track and Visualize your Machine Learning Experiments in 3 Lines of Code Seamlessly Compare Different Experiments and Reproduce your Machine Learning Experiments using Python Photo by Solé Bicycles on Unsplash Motivation If you have applied machine learning or deep learning for your data science projects, you probably know how overwhelming it is to keep track and compare different experiments. In each experiment, you might want to track the outputs when you change the model, hyperparameters, feature engineering techniques, etc. You can keep track of your results by writing them in an Excel spreadsheet or saving all outputs of different models in your Jupyter Notebook. However, there are multiple disadvantages to the above methods: You cannot record every kind of output in your Excel spreadsheet It takes quite a bit of time to manually log the results for each experiment You cannot keep all outputs of your experiments in one place You cannot visually compare all outputs What if there is a way to track and visualize all experiments like below in 3 lines of code?
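As a rough illustration of what those "3 lines" look like in practice with the wandb package the article goes on to introduce, here is a minimal sketch; the project name, config values and logged metrics are made up for demonstration.

```python
# Minimal Weights & Biases tracking sketch; project and metric names are illustrative.
import wandb

wandb.init(project="demo-experiments", config={"lr": 0.01, "epochs": 5})  # 1: start a run
for epoch in range(wandb.config.epochs):
    loss = 1.0 / (epoch + 1)                      # stand-in for a real training loss
    wandb.log({"epoch": epoch, "loss": loss})     # 2: log metrics each step
wandb.finish()                                     # 3: close the run
```

Every run logged this way shows up in the W&B dashboard, where different experiments can be compared side by side.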
https://towardsdatascience.com/introduction-to-weight-biases-track-and-visualize-your-machine-learning-experiments-in-3-lines-9c9553b0f99d
['Khuyen Tran']
2020-12-27 16:07:34.961000+00:00
['Data Science', 'Machine Learning', 'Machine Learning Tools', 'Python', 'Data Visualization']
Creating A Discord Bot with Python (Part 2)
2. Implementations Static moderation is very simple to implement and requires mainly a proficiency with the various methods of the discord.py library. The commands that are implemented in this snippet are Mute, Unmute, Clear, Nuke, Ban, Softban, Kick, and Unban. There are three categories of commands here: Ban, Mute, and Clear. Ban removes a user from the server. Mute silences or unsilences a user. Clear gets rid of messages. Lets first go over how the Clear commands work. At the heart of the clear commands is the “ctx.channel.purge(limit=num_messages)” method. To understand what this method is, we first need to understand the idea of context (ctx stands for context). Context is essentially a set of attributes that is passed to the function. The way that it is passed is dependent on the command that comes after the “@”. For example, in our mod commands, we pass “@commands.command()” to the function. This passes information about what the conditions of the function call are. It gives us details like the channel that this command was called in. This is important for our Clear commands because we take the attribute “ctx.channel” and use the purge method on that channel. The purge command takes in the argument “limit” and clears that many messages from the channel in order of most recent to least recent. There are also some assorted methods within the command that are added for cleanliness and certain edge cases, but in most cases, you can just use the purge method. With that, we’ve covered the basics of the Clear commands. I encourage you to try and implement this before you move onto the next section. This will help you improve your understanding of the methods as well as allow you time to look at the discord.py docs to understand the cleanliness functions. Moving on, lets look at the ban functions which are three part. We have the Unban method, Ban method, and Kick commands. First, we’ll address the Ban method. The Ban command is powered by the “member.ban” method. What this method does is take in a member object and bans them from the guild where this command was summoned. The reason this works is because we are getting a member object by specifying “member: discord.Member”. If we do not specify that we will get a string that we have to parse. The Unban command is very similar to the ban method in that it is powered by the “member.unban” method. However, because the user has already been banned it is not always possible to get an @mention of them. Because of this, we have to take in a user id. This user id comes in the form of a string which we then have to parse and turn into a user object. From there, the procedure is largely the same as the ban function where you do a user operation. The Kick command is essentially the same as the ban command. You just take in a member object and use a built in method of the member class on the member object. In this case the built in method is “member.kick().” Finally, lets go into the Mute commands. The Mute commands are very simple to implement. All they do is take in a member object in and give it the muted role. There are two chains of logic in the Mute command. The first chain of logic is for when the Muted role doesn’t exist. In that case, we create a Muted role and set the permissions of that role in every channel to disallow speaking. The way we do this is by attaining a list of all the channels of the guild that this command was summoned in. This is done by getting the “ctx.guild.channels” attribute, which is a list. 
We then iterate over all the channels in the list and set permissions in each to be what we desire in this case muting the users with this role. Then, the role is given to the specified member and all the permissions are applied to them. The second chain of logic is for when the Muted role does exist. This chain of logic is the same as the first except that it forgoes the creation of the Muted role. With that we’ve completed the Static Moderation Commands. All the code is available above so make sure to read it thoroughly to get a better understanding of how everything works and all the cleanliness functions.
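To tie the description above together, here is a condensed, hedged sketch of the moderation commands using discord.py's commands extension. It is not the article's exact code — names, permission checks and error handling are simplified, and the snippet assumes a recent discord.py version (which requires intents).

```python
# Condensed sketch of the Clear, Ban, Kick and Mute commands described above.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True                      # needed to resolve member objects
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
@commands.has_permissions(manage_messages=True)
async def clear(ctx, num_messages: int = 5):
    """Purge the last N messages from the channel the command was called in."""
    await ctx.channel.purge(limit=num_messages)

@bot.command()
@commands.has_permissions(ban_members=True)
async def ban(ctx, member: discord.Member, *, reason: str = None):
    """Ban a member from the guild where the command was summoned."""
    await member.ban(reason=reason)

@bot.command()
@commands.has_permissions(kick_members=True)
async def kick(ctx, member: discord.Member, *, reason: str = None):
    """Kick a member; same pattern as ban but using member.kick()."""
    await member.kick(reason=reason)

@bot.command()
@commands.has_permissions(manage_roles=True)
async def mute(ctx, member: discord.Member):
    """Give the member a Muted role, creating it and its overwrites if needed."""
    role = discord.utils.get(ctx.guild.roles, name="Muted")
    if role is None:
        role = await ctx.guild.create_role(name="Muted")
        for channel in ctx.guild.channels:   # disallow speaking in every channel
            await channel.set_permissions(role, send_messages=False, speak=False)
    await member.add_roles(role)

# bot.run("YOUR_BOT_TOKEN")  # placeholder token
```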
https://codexplore.medium.com/creating-a-discord-bot-with-python-part-2-34fcd8041a94
['Yash Semlani']
2020-11-29 01:25:56.059000+00:00
['Programming', 'Discord Bot', 'Software Development', 'Python', 'Discord']
Clocked In Magazine’s Worst Songs of 2020
Clocked In Magazine’s Worst Songs of 2020 Bad year? Well, these songs will certainly make it all worse Image via Google There’s no doubt that we’ve all had a pretty bad year. There’s been some great songs and albums put out this year albeit in a smaller amount. With the good comes the bad however, like a horse to water there’s still been plenty of awful songs released in 2020. It’s time to take out this years garbage, we’ve kept it at 10 because 20 was just too much trash for us to take out. These following songs were happened upon as I spent my time indoors with little to do besides hunt down new music, so of course in that search I’m going to be disappointed. I will say I had a good time looking back on some of these so I hope you all can find some joy in the bad music we come across. Here are Clocked In Magazine’s Top 10 Worst Songs of 2020: 10. Green Day: “Meet Me On The Roof” I didn’t have much faith in a new Green Day album as Revolution Radio was just Green Day confirming they can still play three-chord punk rock to us. On Father of All… however they really outdid their own mediocrity and made the worst album of their career. This song in particular makes use of the albums worst tendencies with the annoying hand claps and overtly radio friendly sound. It just ran every point of defense Green Day has ever had into the ground. The song sounds like a band that has really given up and doesn’t care about their fanbase which epitomizes this album. 9. Black Eyed Peas: “MABUTI” When I first heard this song it reminded me immediately of that Flo Rida, Pitbull collab like ten years ago that was all about ass. As fun as it is to rap about ass it’s also just a desperate plea for attention and in BEP’s case this is the worst example of it. They may be attempting to try their hand at the danceable Latin pop music but they do it so exploitatively and this just feels shallow. 8. G-Eazy: “Everybody’s Gotta Learn Sometime” I always kind of thought Gerald (G-Eazy) was a really boring rapper and this track solidifies that claim. I beg you not to listen to this song while driving because you will crash your car from falling asleep at the wheel. Nothing stands out on it, the drum beat is lazy, Gerald’s raps emit the same emotion of his tired ass eyes and the lyrics speak as though you’re supposed to know what’s going on but this is the album opener. Awful way to start an album but thank you for curing my insomnia Gerald. 7. AC/DC: “Money Shot” This song just confused and scared the hell out of me. The opening guitars sounded like the first one was playing played outside while the other one was being played right behind me and it made me jump out of my chair. After that there’s really nothing of substance on this song or the entire album for that matter, this song just offended me the most (you all know what the title means). 2020 did not need another AC/DC album at all but… here we are I suppose. 6. McFly: Growing Up (feat. Mark Hoppus) I guess you could call it a witch hunt for how far and wide I looked for something blink related to hate on when I found this stinker but there’s good reason behind this. The backing music during the verses are the worst attempt at rap beats and rhythms that I’ve ever heard, and I don’t even think the band intended it that way. This song is just so bad, so lost in gen-x irrelevance and just plain dumb. 
I think it’s cute how they say the line “The best that you can do is not give a…” with an over emphasis on the following “FUCK!” like they’re so controversial saying a swear word. Mark Hoppus is 48 years old and yet he puts swears into his lines the way a 12 year old would and it’s just sad. That’s what this track is: sad. 5. Vybz Kartel: “Not Ok” One of the worst trends of 2020 was people outside of the pop punk sub-genre trying to revive pop punk. It made for some decent tracks and mashups but we soon realized how cut and paste and generic it all is. This song in particular was the worst offender of the bunch in my eyes because it was not a good mix of the dancehall Kartel reigns supreme in or the pop punk sound. The song features a simple guitar lick on repeat throughout with an accompanying bassline and drum beat and it builds to absolutely nothing. It’s just so dumbed down and it disappointed me in more ways than I expected. 4. Justin Bieber: “Yummy” This makes the list not just because of the desperate attempts the Beliebers tried and failed to get this to number 1 but just because of how painfully average it is. This songs exists in a sea of trap ballads that everyone’s utilizing on TikTok and that’s about as far as this song will reach: background music for kids TikToks. The Biebs always made music that even if you disliked it the song still stayed with you but “Yummy” is just a taste that didn’t sit well with anyone. 3. The Offspring: “Christmas (Baby Please Come Home)” It’s difficult listening to this because Dexter Holland’s voice simply does not work on these overproduced songs, he sounds like he’s trying to reach heights he simply can’t. The guitars on this song sound so hidden by the generic Christmas bells, it’s almost like no effort was put into this. People say Green Day sold out but the Offspring is a band that should also be pointed at in the conversation. 2. Trey Lewis: “Dicked Down In Dallas” I don’t hate country music but I have to say some of the trends within the genre lately are so lame. I can at least appreciate this song for forgoing the often backhanded sexual nature of country music in favor of just shoving it in your face with this song but the problem is this song sounds dead serious. I can’t help but not feel bad for the man who’s girlfriend is getting some dick down in Dallas because Trey Lewis sings it in such a matter-of-fact way that just kills the song for me. The music is obviously not outstanding either it’s just your standard country twang so this song is really nothing more than a gimmick. 1. A Bunch of Celebrities: “Imagine (John Lennon Cover)” This was the most offensive thing that happened this year. Coronavirus may have ravaged this year to no end but this one just kicked us while we were down. A bunch of rich ass celebs trying to relate to the public has never been more insulting. The problem is not just that it was insulting to us but each person singing it gets worse and worse. There’s no flow in the singing because 90% of the people in it cannot sing. This video is without a doubt the most brain dead song that came out this year. We hope you had a good year and that these songs did not plague your year as COVID has. Is there any tracks that you think should be on the list? Then please let us know!
https://medium.com/clocked-in-magazine/clocked-in-magazines-worst-songs-of-2020-d1a01b8f56ce
["Ryan O'Connor"]
2020-12-27 20:40:26.748000+00:00
['Music', 'Magazine', 'Review', 'Articles', 'Funny']
Optimizing Customer Satisfaction With Machine Learning
Optimizing Customer Satisfaction With Machine Learning Analyzing an airline satisfaction dataset. Photo by Chris Brignola on Unsplash Flying can be a hassle. Long lines, uncomfortable seats, bad food… the list goes on. Now, mandates on mask-wearing, physical distancing, PCR tests, temperature checks, health forms, and more make the whole process a dizzying ordeal. This is bad news for airlines. Airline revenue is already way down, and traffic isn’t expected to fully recover until 2024. Flying is more stressful than ever. Given this, airlines need to optimize customer satisfaction to recover faster, and emerge successfully out of this downturn. I analyzed an airline passenger satisfaction dataset using the predictive insights tool Apteo to find what factors impact flyer satisfaction the most. Inflight Wi-Fi is King
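The analysis in the article itself is done with Apteo, a no-code tool. Purely as an illustration of the same idea in code, here is a hedged scikit-learn sketch that ranks satisfaction drivers by feature importance; the file name and column names are assumptions about the airline satisfaction dataset, not something taken from the article.

```python
# Illustrative feature-importance ranking for an airline satisfaction dataset.
# File path and column names are hypothetical; adjust to your copy of the data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("airline_satisfaction.csv").dropna()   # drop missing rows for simplicity
X = pd.get_dummies(df.drop(columns=["satisfaction"]))   # one-hot encode categorical columns
y = df["satisfaction"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank the drivers of satisfaction, e.g. inflight wifi, seat comfort, delays
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```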
https://medium.com/dataseries/optimizing-customer-satisfaction-with-machine-learning-4735956befdd
['Frederik Bussler']
2020-09-30 10:20:43.041000+00:00
['Data Analysis', 'Artificial Intelligence', 'Data Science', 'Customer Experience', 'Machine Learning']
How We Cut Carbon To Net Zero
How We Cut Carbon To Net Zero And why waiting around for technology isn’t the answer. I’ve written extensively about potential solutions to the climate crisis, both in articles and responses to various readers, but a stock-take of how to get to net zero is needed. Crucially, we must consider how we get there quickly, efficiently, and sustainably, and without alienating the public — factors which aren’t discussed often enough by those proffering potential solutions. Let’s take a look at some solutions. There are the typical ones which you’ve likely heard of and understand, such as renewables, nuclear energy, and plant-based diets; there are futuristic and unrefined concepts such as absorbing sun rays before they even reach our atmosphere, and there are promising technologies entering the fray, such as carbon capture. All of these offer different advantages and challenges. For example, renewables cut carbon at source but are less reliable and unlikely to produce enough energy quickly enough. However, they are sustainable and cheaper than many other of the potential solutions, and won’t alienate people as they won’t cause major inconvenience. Indeed, we already rely largely on renewables, and most people haven’t even noticed. I won’t waste your time summarising these for each solution, but I’ll refer back to these different considerations when justifying my proposals for an effective carbon-cutting strategy. Let’s deal directly with the easiest part of the strategy: the energy mix. It is clear non-renewables are polluting and unsustainable, and must be removed completely as soon as possible. To replace them, governments must ensure a balance between reliable and unreliable renewables, as well as the extremely efficient nuclear. Unreliable renewables such as wind are cheap and can be strategically harnessed to ensure efficiency and sustainability, whereas more reliable renewables are often more expensive. Subsidising solar panels for homes and offices may be the single most effective thing that could be done in this regard, due to the economic benefits, sustainability, and speed at which this could be implemented. Nuclear is great and could help temporarily, but a major issue with nuclear is that plants take a long time to build, especially when taking into consideration the huge safety risks that must be mitigated. Furthermore, it has the potential to anger a lot of people who are concerned about disasters, and, as such, should be used minimally and with caution. The last thing that environmentalism needs now is disillusion and alienation, as this will halt the progress in its tracks. Therefore, renewables must, in the end, make up the vast majority of our energy mix, perhaps with some temporary reliance on nuclear energy, which is reduced over time as renewable technologies become cheaper and more efficient. There is, at present, no other clear alternative to this. Of course, individuals must also make changes to their lifestyles, from small things like switching off lights when not in the room, to larger contributions such as switching to mostly plant-based diets. Cutting meat out of everyone’s diet entirely would perhaps be too far, as the general scientific consensus is that humans are omnivorous, even if only small quantities of meat and fish should be eaten. If individuals would all voluntarily make these huge changes, then the effect would be huge. The reality, though, is very different. 
Governments must incentivise individuals to eat less meat and use less energy, which could be done either directly, by tax credits or some alternative system, or indirectly, by incentivising companies to offer low-carbon products and services, in turn driving the price down for customers. Both could be effective but the latter may be more practical and economically wise, in the sense that it will drive more foreign direct investment by creating more incentives for companies in the country, in turn creating more money for the taxpayer and more jobs. Any strategy to reduce CO2 emissions to net zero must include negative emissions technology, to capture carbon in the atmosphere and store it underground, or to collect carbon produced at power stations and recycle or store it. This technology isn’t too far away from being cheap enough to be used widely, perhaps only a couple of years. Already, trials of it have begun to take place, such as the Drax power plant in the UK, which produces less carbon than it removes from the atmosphere. There is little hope of reaching zero emissions by 2030 whatsoever, but there is absolutely no hope of it without aggressive use of negative emissions technology. It is vital that governments take an active role in research and subsidising production, even if this runs counter to the economic preferences of many governments. Indeed, afforestation, such as the planting of 50 million trees across the North of England, is a great way to use nature to remove emissions and reverse the damage caused by deforestation, and this can be done now, and rather on the cheap. What we cannot afford to do, however, is wait around for better, more efficient, and futuristic technology. We must take action now, not in 50 years’, 10 years’, or even 5 years’ time. Moreover, we already have the capacity to implement an effective strategy to reduce emissions to net zero, and needn’t wait for futuristic strategies.
https://medium.com/discourse/how-we-cut-carbon-to-net-zero-915f7abef7cc
['Dave Olsen']
2019-05-20 10:36:00.846000+00:00
['Nuclear', 'Energy', 'Climate Change', 'World', 'Environment']
Driver Trees — How and Why to use them to Improve your Business
Driver Trees — How and Why to use them to Improve your Business Foster alignment, broaden understanding and find focus I was introduced to the Driver Tree concept a few years ago at HelloFresh. It was introduced by our leadership to support our growth and focus the teams on how they could contribute to the success of the business. It worked. After its adoption, there was a noticeable elevation in the effectiveness of teams across the business. Driver Trees (also known as KPI Trees) are a simple effective tool that can be used in almost any organisation. In this article, I want to explore the concept in more detail, explain how they can help your organisation and give you some tips on using them. What is a Driver Tree? A Driver Tree is a map of how different metrics and levers in an organisation fit together. On the far left, you have an overarching metric you want to drive. This is the ultimate goal you want all the teams to be working towards. As you move to the right, you become more granular in terms of how you want to achieve that goal. Each branch gives you an indicator of the component parts that make up the “what” above them. Let’s apply this to a more tangible example of “being healthy”. Let’s assume I want to become healthier — this is a big goal and actually has many different avenues that can lead me there. Do I want to be more physically fit, more mentally healthy or improve my eating? These are all viable drivers of being more healthy. In the example above, I decided to explore “Mental Wellbeing” which is one of the drivers (to me) of being healthy. The aim is to continue to break each of these drivers into more specific and manageable drivers. Continuing down the tree the next two drivers are “Reduce Stress” and “Improve Sleep”. Getting to the bottom of the tree, you will notice that the examples are given are very specific. These can be tracked and actioned individually. Now, this is the important point - just because I meditated for 10minutes every day I can’t say I’m healthy. However, the compounding effect of all the smaller actions rollup to the driver above. This is where the value of a driver tree comes from — it helps you understand the specifics that you can action. For example, if I want to improve how much I meditate I could add it to my routine, creating a separate space for this activity or downloading an app to guide me. It’s this model and methodology we can apply to the component parts of our business. The Value of Driver Trees Driver Trees have 3 main value components for an organisation: Improve Understanding Foster Alignment Drive Focus Improving Understanding Driver Trees are a useful tool in helping breakdown the complex nature of your business. They allow you to abstract some of the complexity of your business to help everyone understand how the overarching pieces fit together (what is the model? how do we generate revenue? where do our costs go?). This benefits everyone in the organisation, from the leadership and across departments, and builds a map enabling different groups to discuss impact. With that understanding, it is much easier for individual teams to understand how they factor into it the success of an organisation. This is an essential part of shifting mindsets from outputs to outcomes. When teams recognise their work extends beyond closing tickets and understands their connection to the wider mission, you’re going to unlock the real impact of the autonomous team. 
A clear purpose that has meaning is what most organisations miss when they’re trying to get real and lasting buy-in from their employees. Fostering Alignment Awareness of the work of other units in your organisation. From 6 Diagrams I use to explain Product Management Concepts. Humans are notoriously bad at communicating across larger groups — it’s one of the reasons that the Spotify Model promotes smaller teams. As organisations scale, they traditionally struggle with scaling communication channels and information silos form. It can be difficult to understand what another team is working on and, more importantly, why. The Driver Tree acts as a common frame of reference for you and your colleagues during discussions. It can act as a trigger for understanding “why” when discussing requests and opportunities with your stakeholders. It helps the organisation understand which opportunities may be under-invested. Driving Focus The driver tree abstracts a lot of complexity; however, when combined with metrics, it helps unearth potential opportunity areas and acts as a guide. Let’s take an example. Imagine you’re the Product Manager for a funnel team and your aim is to increase the number of conversions. You see the % of customers that successfully complete the “Email Confirmation” step is only 50%. There are a few potential drivers in this — are the customers even receiving the email? If they are, why aren’t they clicking through? Again, each of these avenues may have further influencing factors. By mapping the process flow onto the Driver Tree, you’re able to break down the topic. It could be that you find your 3rd Party Provider returns an error 10% of the time, meaning the email isn’t even sent — this becomes the opportunity area for your team. You’re now working through a specific problem that you and your team can tackle together. Example of a Driver Tree for a team looking at the Email Confirmation Stage of a Sign-Up Funnel A colleague of mine previously commented that there are similarities between Driver Trees and Teresa Torres’ Opportunity Solution Tree. Although there are some commonalities, I see Driver Trees as a more general overview, with the Opportunity Solution Tree being the better tool for giving direction on how to influence the Driver Tree. Defining Your Driver Tree Let’s take a look at how you could approach a driver tree. Imagine a situation where you’re the Product Manager for the Netflix homepage. Our role is to support the total number of conversions. We affect this by driving the overall conversion rate (CVR) and this, in turn, is affected by each step in the funnel. The higher the CVR of each of those steps, the more of our customers will complete the funnel. Start by adding anything you know that affects conversion rate. First, let’s add Page Load Speed. Google have shared extensively on the impact of Load Speed on Conversion. Using this as a starting point, we can explore what affects Page Load Speed. Exploring Page Load Speed in our Driver Tree Two examples may be the time spent on network requests and the time spent on rendering. Again, following the “Network Time” branch, we can look at: # of Requests being made Average Response Time # of Blocking Requests (requests we need to wait for) I also included Lighthouse Score (as one of its elements is Performance) to give us another metric as a reference point. We can continue to extend this by looking at core metrics we’re getting out of tools like Google Analytics.
Are people leaving our site (Site Drop-Off / Exit Rate) or are they exiting our funnel (e.g. navigating to a ‘Help’ page)? Finally, we can even look for micro funnels on the page itself. In the Netflix example, I’d be interested in exploring how many people start entering their email, how many attempt to submit and what type of errors they’re getting back. These are all aspects that help you map out why “Funnel — Step 1 CVR %” is in its current state. Note, the driver tree doesn’t tell you how to affect or drive lower-level metrics. If the Exit Rate (%) is 10%, you still need to discover why this is the case — the tree will only help you understand what is true. Tips for Driver Trees Empower the teams — The core of the tree can be developed by a smaller team with a good and broad understanding of how the business works. However, in larger organisations, it is unlikely that one group will have the knowledge to develop the entire tree alone. In that situation, trust in the individual teams to inherit and extend the tree with their specialist domain knowledge. Involve everyone — There are a couple of dimensions to this. First, when you’re starting out, try to reach broadly across the organisation. When doing this process at Delivery Hero, we cut across the organisation. This is a time when it is useful to have Senior Leadership involvement because they can make connections and introductions that you may not be aware of. The second dimension is continuing this theme in team- or squad-specific workshops. When there is a natural overlap of different teams, involve them together. This allows a sharing of ideas, perspectives and contexts which the driver tree aims to capture. Some of the most effective Driver Tree workshops we did were the ones that involved the squads and their key stakeholders. Start with your goal — The top of the tree should be something that makes sense for all of your organisation to work towards. If you’re a for-profit enterprise you can keep this as simple as “Gross Profit”; however, if you have a North Star Metric that you know is tightly coupled to your business and customer success, this would also work. Don’t be dogmatic — During this process, you’ll find that some metrics and indicators could appear in multiple locations. Although you should aim to minimise overlap, I’ve never considered it a major issue. The Driver Tree is a way of abstracting complexity and does not have to be a perfect representation. Use Metrics — Aim to use metrics as the individual drivers (blocks). This makes each element much more specific and helps you understand what impact you’re actually having. The tree will likely include a combination of business (revenue, cost), departmental (Email Open Rate, Ticket Response Time) and technical (% of API 500 Errors) metrics. However… It’s not a dashboard — There will always be an incentive to translate this into a dashboard with red and green numbers. While I agree there is value in that, your initial step should be to use it as an education tool — not a reporting one. Validate — The first draft of your tree will include metrics you know are important and some you assume are important. As part of your Product Development, you will spend time validating (or invalidating) them. You don’t have to get it right the first time. Remember, you will still… Iterate — Both the core of the tree and any more granular ones you build will constantly evolve. As you learn more about your product, your business and your customers, it is natural for the tree to evolve. 
Don’t be afraid of this change (embrace it). Make it public — Do your best to make the Driver Tree public. Make it accessible in your knowledge management tool for anyone in the organisation, have the teams post physical copies of it in their work areas and reference it in their planning and alignment sessions. Include it in the onboarding — Organisations have a natural churn in the workforce; some people leave and others join. Be sure that your new joiners are introduced to the concept and how it applies to their role. This is a great way to give context to your new joiners in their early days and provides a skeleton for the rest of the knowledge they will gain during onboarding. We’ve been doing this at Delivery Hero with very positive feedback. Want to see for yourself? We’re hiring. Conclusion Introducing Driver Trees takes time and effort; however, I’ve found it’s one of the most effective tools to create a more holistic understanding of how your business works. This is an important building block in creating effective Product Teams that can focus on value creation over just feature delivery. Further, Driver Trees act as a communication artefact to help you build alignment across your stakeholders.
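Returning to the Netflix funnel example above, the arithmetic behind a metric Driver Tree is easy to sketch in a few lines of Python. This is purely my own illustration (the step names and rates are made up): the overall funnel CVR is the product of the step CVRs, so you can see how moving one leaf driver moves the metric at the top of the tree.

# Illustrative only: a tiny driver tree for a sign-up funnel.
# Each leaf is a step CVR; the parent metric is their product.
step_cvr = {
    "landing_to_signup_form": 0.40,
    "email_confirmation":     0.50,   # the 50% step from the example above
    "payment_details":        0.80,
}

def overall_cvr(steps):
    result = 1.0
    for rate in steps.values():
        result *= rate
    return result

print(overall_cvr(step_cvr))          # 0.16

# Improving the weakest driver shows up directly in the top-level metric.
step_cvr["email_confirmation"] = 0.60
print(overall_cvr(step_cvr))          # 0.192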
https://medium.com/swlh/driver-trees-a-tool-to-make-your-teams-more-successful-88f751e86482
['Curtis Stanier']
2020-03-02 10:29:56.481000+00:00
['Product Management', 'Business', 'Work', 'Startup', 'Organizational Culture']
What Libertarians Get Wrong About Human Nature
One of the limitations of the libertarian philosophy is its distinctly (and arguably, solely) economic focus at the expense of loosely-grouped “societal concerns”. For the libertarian, such concentration is exacted on economic liberty because to dabble in “societal concerns” has immoral connotations attached to it. For the libertarian, if the state were to exert any kind of distinguishable influence on society this should be roundly treated as sin. Thus, to use an example from conservatism: To promote so-called “family values” would be perceived as undue overstepping of the ritual freedom of the individual to choose how they want to live. The government should be hands-off when it comes to matters of society, libertarians postulate. It is not their business! In addition, libertarians tend to voice skepticism towards the concept of “promoting a general welfare”. This constitutional clause, they would surely argue, arises naturally from each individual making the choices that they think are best. And through an Invisible-Hand-esque process, the ideal society is cobbled together, organized piece by individual piece. After all, isn’t this the underlying supposition being made? That if government did nothing but allow freedom and adopt a stance of neutrality towards any and all matters of society, that society would find itself divinely ordered? It’s a bit of a romanticized notion on par with the equilibrium models that the abstract world of high finance is fond of using. Libertarianism as a political philosophy is almost entirely underpinned by free-market theory because its logic is inherently economic in nature. You might ask, so what’s wrong with that? Well, if a political philosophy derives its logic solely from an economic theory, it would suggest that humans are solely transactional. And there’s a problem with that. It is not a lack of liberty that causes societies’ ills in a lot of cases; it is moreso what is being done with that liberty that is the problem. The libertarian logic tends to assume that if you get economic liberty squared away, society will automatically be on the road to total optimization (to borrow a term from economics). This could be partially true if we were truly rational maximizers, but we’re not. Or, if it were true we derived meaning from economic optimization, say. But that’s frustratingly not the case either. We’re individuals with souls that derive value from the institutions and social networks that make up our human experience — that act as society’s building blocks. Libertarian’s exclusive focus on liberty — as the fountain out of which everything else good and true will flow — is not only false but inadequate as a societal prescription. Liberty is magnificently great and noble, but it isn’t a panacea. Consider the current murmurings in our culture about societal decay — the repeated remarks that we’ve lost our moorings or that we’re increasingly isolated and unfulfilled in life. These are legitimate concerns and they appear to be corroborated by various studies on rising rates of mental illness, declining rates of religiosity, and surges in political polarization, to name a few. Libertarians don’t usually address vague issues of “societal decay”, because well, it’s “none of their business” — they’re ideologically opposed to getting their hands dirty with that question. Conservatives are less resistant to such a prompt and might bring up the importance of things like virtue, beauty, community, truth, responsibility, and wisdom to sustaining the human soul. 
These are things that libertarians, in their narrow view of government’s proper purview don’t address. And so, the question warrants being asked: We talk a lot about societal decay in our day and age. Could this be amended, do you think, by the libertarian solution of slathering more liberty on everything? Interjecting more freedom? I’m skeptical. It’s not a lack of liberty that causes societies’ ills in a lot of cases; it is moreso what is being done with that liberty that is the problem. Libertarians don’t always consider how societies get their values other than supposing that maybe they arise out of a spare economic framework, which is a dubious explanation to put forth. You don’t get meaning out of markets. And it’d be wrong to reduce human life down to a transactional model. Consider marriage. Does it abide by the ritualistic libertarian idea of economically-maximized self-interest? Not necessarily. In its contractual binding, it demands sacrifice. It can be emotionally messy and challenging. But nevertheless, marriage persists in Western societies as a union born chiefly of love. Love, you could say, is an ideal that is basic to human nature. Curiously, however, it doesn’t appear to mesh well with the abstract ideas of cold, economic logic that almost completely swallow libertarianism. This is a purposefully blunt thought experiment, because it’s obviously not the case that libertarians have anything against marriage. But in their rejection of deliberately upholding something like the institution of marriage (as the conservative might) they are implicitly discounting the grave importance of the basic units of society that give us purpose and belonging and an avenue to channel our nobler instincts.
https://laurennreiff.medium.com/what-libertarians-get-wrong-about-human-nature-9447d92aad3a
['Lauren Reiff']
2019-10-06 05:17:41.694000+00:00
['Philosophy', 'History', 'Society', 'Economics', 'Politics']
Technology Radar — October 2020 review — Part 1
Technology Radar — October 2020 review — Part 1 A review of the recent Technology Radar October 2020 update — I review at least three items from Techniques and Tools in this part Yes! Vol. 23 is out now and this is my review. The Tech Radar provides the Software Engineering community a very good glimpse of what technologies, techniques, patterns, tools, languages and frameworks are recommended for Adopt, Trial, Assess and Hold in four quadrants. You can also create your own radar here. These are, however, only guidelines as they stand, based on the research performed by ThoughtWorks. Needless to say, these recommendations don't suit every organisation; it depends upon your needs. What you are encouraged to do, though, is to create your own Technology Radar; see thoughtworks.com for more details. This article gives you my perspective on the techniques that I identify as ready to be adopted and that fit into the current architectural/system design needs of many organisations, no matter the size of the team, how disruptive you are or what you are building. You can also subscribe to the radar so that you won't miss it when it is published. Check out the interesting themes for this edition: new normal REST APIs with GraphQL, IaC and low-code, if you are into that type of thing. The Radar is a document that sets out the changes that we think are currently interesting in software development — things in motion that we think you should pay attention to and consider using in your projects. It reflects the idiosyncratic opinion of a bunch of senior technologists and is based on our day-to-day work and experiences. While we think this is interesting, it shouldn't be taken as a deep market analysis. Birth of Technology Radar As a supplement, if you want to know about the history of the Technology Radar, this will help. Techniques Interactive radar: https://www.thoughtworks.com/radar/techniques TRIAL: CD4ML As machine learning models evolve within their respective domains, it is ever more important to enable continuous delivery as part of MLOps. Continuous Delivery for Machine Learning (CD4ML) is the discipline of bringing Continuous Delivery principles and practices to Machine Learning applications. While the concept is in Trial, it is worth evaluating the tools that support CD4ML from the start as you look to continuously improve ML models, from idea to value. TRIAL: Event Interception This concept is very simple: intercept events and make a copy of them elsewhere so that you can replay them and build a new system using the strangler pattern, thus retiring your legacy system. In the SQL world, Change Data Capture (CDC) has been exactly that: it lets you programmatically intercept events based on the transaction log and perform actions based on the output. I worked on a simple implementation in an asset management solution long ago; if you want any insights, please reach out to me. (A minimal sketch of this idea follows at the end of this section.) ASSESS: Kube-managed cloud services If you are reading this, you are aware of the power of Kubernetes in orchestrating containers, both in the cloud and on-prem. Alongside tools like Terraform and Pulumi that teams already use for provisioning infrastructure, custom resource definitions backed by Kubernetes-style APIs are now offered by AWS, Azure and GCP for managing cloud services. You should assess whether this is something for you, if you can accept the fact that it tightly couples your Kubernetes cluster with your infrastructure. 
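To make the Event Interception idea above more concrete, here is a minimal, illustrative Python sketch of my own (not from the Radar itself). A single publish function keeps feeding the legacy handler exactly as before, while also copying every event to a log that a new, strangler-pattern system can replay later. The handler and event names are hypothetical; in a real setup the copy would go to a CDC stream or a message log rather than an in-memory list.

# Minimal sketch of event interception for a strangler-pattern migration.
intercepted_events = []   # copy of every event, kept for later replay

def legacy_handler(event):
    # The existing system keeps working exactly as before.
    print("legacy system processed:", event)

def publish(event):
    legacy_handler(event)             # 1. original behaviour is untouched
    intercepted_events.append(event)  # 2. keep a copy for the new system

def replay_into_new_system(new_handler):
    # Later, the new system is populated by replaying the captured events.
    for event in intercepted_events:
        new_handler(event)

publish({"type": "asset_created", "id": 1})
publish({"type": "asset_priced", "id": 1, "price": 99.5})
replay_into_new_system(lambda e: print("new system processed:", e))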
Tools Interactive radar: https://www.thoughtworks.com/radar/tools The Tools quadrant is looking good with no items in HOLD, which means it is all up for grabs in terms of any R&D to discover anything suitable for your team or organisation. Here is my review: ADOPT: Dependabot The idea of automatically receiving pull requests that update your dependencies to their latest versions is a dream come true! It is integrated with GitHub for you to try; also consider Renovate, which supports a wider range of services, including GitLab and Azure DevOps. ADOPT: Helm A package manager for Kubernetes that has greatly simplified application lifecycle management in Kubernetes, with its dependency management, templating and hook mechanisms. ASSESS: Litmus A chaos engineering tool for Kubernetes, with a low barrier to entry. It goes beyond random pod kills, simulating network, CPU, memory and I/O issues. It is also interesting to learn that it supports tailored experiments to simulate errors in Kafka and Cassandra. You could try Gremlin, too. Kubernetes overview Principles of Chaos Engineering; courtesy: Gremlin ASSESS: OSS Index It is super important for development teams to identify whether the dependencies of their application have known vulnerabilities. OSS Index can be used to achieve this goal. It is a free catalogue of open-source components and scanning tools designed to help developers identify vulnerabilities, understand risk and keep their software safe. It is fast, vulnerabilities are identified accurately and only a few false positives occur. (A small query sketch follows at the end of this article.) Supported Ecosystems; courtesy: OSS Index REST API documentation: Create Your Radar You can create your own technology radar and see where the blips are compared to the ones published by ThoughtWorks. It is important for you to understand the differentiators and what makes sense for you and why. Constant review is also needed in order to adjust your radar when there is a new framework or technique that your team wants to adopt and has a credible reason/business case for. Also, be mindful that you'd need to create some artefacts, including a lightweight proof of concept, to ensure that you are not leaving it too late to figure out any major constraints with the items from your radar, and to perform durable market scans. Have you created and used your own Technology Radar for your project/organisation? It'd be great to hear your feedback and experience (comments welcome)!
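As a small illustration of how the OSS Index service mentioned above can be queried programmatically, here is a hedged Python sketch. The endpoint path, payload shape and response fields shown are my recollection of the OSS Index v3 REST API (component-report); verify them against the REST API documentation referenced above before relying on this.

# Hedged sketch: query OSS Index for known vulnerabilities of one package.
# Endpoint and payload are assumptions based on the v3 component-report API.
import requests

def vulnerabilities_for(purl):
    resp = requests.post(
        "https://ossindex.sonatype.org/api/v3/component-report",
        json={"coordinates": [purl]},
        timeout=30,
    )
    resp.raise_for_status()
    report = resp.json()[0]               # one report per coordinate sent
    return report.get("vulnerabilities", [])

# Package URL ("purl") coordinates identify the ecosystem, name and version.
for vuln in vulnerabilities_for("pkg:pypi/django@2.2.0"):
    print(vuln.get("title"), vuln.get("cvssScore"))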
https://medium.com/cloudweed/technology-radar-october-2020-review-part-1-5e958c4a456a
['Karthick Thoppe']
2020-12-24 11:07:52.290000+00:00
['Cloud Computing', 'Software Development', 'Technology Radar', 'DevOps', 'Technology']
The Market will always have room for Cryptocurrencies that are more Centralized than Bitcoin
Landon, you mentioned in your article: “ETH is a Bitcoin offshoot with debatable credibility, due to its centralized organization structure.” It’s interesting that you mention this, because I think it helps to prove my point that a semi-centralized blockchain can work. Even though Ethereum is not as big as Bitcoin, its current market cap is about one fourth the size of Bitcoin… so I think that’s proof that some centralization (like the centralization that I proposed that Facebook initially take for their coin) can be accepted by the market. A 43 billion dollar market cap for Ethereum is hard to argue with. That cap will probably go down at some point, but it could also go up. I think the market of cryptocurrencies, which currently consists of over one thousand varieties of them, will always consist of variants that include more centralization as well as ones that try for the most pure decentralization. The beauty of a free market is that consumers get to enjoy tremendous variety, and while one variety may generally be favored and given a premium by the market, it doesn’t mean that other varieties will totally die out. Those other varieties may serve a particular purpose or be especially useful for a specific use case.
https://medium.com/predict/the-market-will-always-have-room-for-cryptocurrencies-that-are-more-centralized-than-bitcoin-8a9cd4e6accb
['Eric Martin']
2018-03-09 07:15:17.546000+00:00
['Facebook Cryptocurrency', 'Bitcoin', 'Blockchain', 'Cryptocurrency', 'Facebook']
Dealing with List Values in Pandas Dataframes
Problem 3: Individual Columns for All Unique Values At this point, things are getting advanced. If you are happy with the results we got before, you can stop here. However, a deeper level of analysis might be required for your research goal. Maybe you want to correlate all list elements with each other to compute similarity scores. E.g. do kids who eat bananas typically also like mangos? Or maybe you want to find out which fruit has been ranked as the top favorite fruit by the most kids. These questions can only be answered at a deeper level of analysis. For this, I will introduce two useful methods. They differ in complexity, but also in what you can do with their results. Method 1 This is a shockingly easy and fast method I stumbled upon. And it is so useful! All you need is one line of code.

fruits_expanded_v1 = fruits["favorite_fruits"].apply(pd.Series)

Figure 5 — Expanded version of the fruit lists using method 1. As you can see, this one-liner produced a dataframe where every list is split into its single elements. The columns indicate the order in which the fruit was placed in the list. With this method, you will always get a dataframe with a shape of (n, len(longest_list)). In this case, two of the 10 children named five favorite fruits, which results in a 10x5 dataframe. Using this, we can find out which fruit was named most often as the number one favorite fruit.

fruits_expanded_v1.iloc[:, 0].value_counts(normalize=True)

## OUTPUT ##
banana        0.222222
pear          0.111111
watermelon    0.111111
blueberry     0.111111
strawberry    0.111111
apple         0.111111
peach         0.111111
mango         0.111111

We can see that bananas are most often kids’ absolute favorite fruit. Alternatively, we could target single fruits and find out how many times they were named at each position of the lists. This is the function I wrote for that:

def get_rankings(item, df):
    # Empty dict for results
    item_count_dict = {}
    # For every tag (column) in df
    for i in range(df.shape[1]):
        # Calculate % of cases that tagged the item
        val_counts = df.iloc[:, i].value_counts(normalize=True)
        if item in val_counts.index:
            item_counts = val_counts[item]
        else:
            item_counts = 0
        # Add score to dict
        item_count_dict["tag_{}".format(i)] = item_counts
    return item_count_dict

If we apply it, we get:

get_rankings(item="apple", df=fruits_expanded_v1)

## OUTPUT ##
{'tag_0': 0.1111111111111111,
 'tag_1': 0.1111111111111111,
 'tag_2': 0.2222222222222222,
 'tag_3': 0.2,
 'tag_4': 0}

As you can see, we can perform rank-based analyses very well with this approach. However, this method is near useless for other approaches. Because the columns do not represent a single tag, but a rank, most tag-based operations cannot be done properly. For example, calculating the correlation between bananas and peaches is not possible with the dataframe we got from method 1. If that is your research goal, use the next method. Method 2 This method is more complex and requires more resources. The idea is that we create a dataframe where rows stay the same as before, but where every fruit is assigned its own column. If only kid #2 named bananas, the banana column would have a “True” value at row 2 and “False” values everywhere else (see Figure 6). I wrote a function that will perform this operation. It relies on looping, which means that it will take lots of time with large datasets. However, out of all the methods I tried, this was the most efficient way to do it. 
def boolean_df(item_lists, unique_items):
    # Create empty dict
    bool_dict = {}
    # Loop through all the tags
    for i, item in enumerate(unique_items):
        # Apply boolean mask
        bool_dict[item] = item_lists.apply(lambda x: item in x)
    # Return the results as a dataframe
    return pd.DataFrame(bool_dict)

If we now apply the function

fruits_bool = boolean_df(fruits["favorite_fruits"], unique_items.keys())

we get this dataframe: Figure 6 — Boolean dataframe. From here, we can easily calculate correlations. Note that “correlation” is not really the correct term, because we are not using metric or ordinal, but binary data. If you want to be correct, use “association”. I will not. Again, there are multiple ways to correlate the fruits. One straightforward way is the Pearson correlation coefficient, which can also be used for binary data. Pandas has a built-in function for this.

fruits_corr = fruits_bool.corr(method="pearson")

Figure 7 — Pearson correlation dataframe. Another way is to simply count how many times a fruit was named alongside all other fruits. This can be solved using matrix multiplication. For this, we will need to convert the boolean dataframe to an integer-based one first.

fruits_int = fruits_bool.astype(int)

Then, we can calculate the frequencies.

fruits_freq_mat = np.dot(fruits_int.T, fruits_int)

## OUTPUT ##
array([[5, 3, 3, 2, 2, 1, 1, 1, 0, 2, 0, 1],
       [3, 4, 2, 1, 1, 1, 1, 2, 1, 0, 1, 1],
       [3, 2, 4, 3, 1, 2, 0, 0, 0, 1, 0, 0],
       [2, 1, 3, 4, 2, 2, 0, 0, 0, 1, 0, 0],
       [2, 1, 1, 2, 3, 1, 0, 0, 0, 1, 0, 0],
       [1, 1, 2, 2, 1, 3, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 2, 1, 1, 0, 1, 1],
       [1, 2, 0, 0, 0, 0, 1, 2, 1, 0, 1, 1],
       [0, 1, 0, 0, 0, 0, 1, 1, 2, 0, 2, 0],
       [2, 0, 1, 1, 1, 0, 0, 0, 0, 2, 0, 0],
       [0, 1, 0, 0, 0, 0, 1, 1, 2, 0, 2, 0],
       [1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1]])

All we need to do now is add labels and transform it back to a dataframe.

fruits_freq = pd.DataFrame(fruits_freq_mat, columns=unique_items.keys(), index=unique_items.keys())

Figure 8 — Frequency dataframe. If you are looking for a nice visualization, you can create a heatmap with the seaborn library.

import seaborn as sn
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(9, 5))
sn.heatmap(fruits_freq, cmap="Blues")
plt.xticks(rotation=50)
plt.savefig("heatmap.png", dpi=300)

Figure 9 — Heatmap. With the Pearson matrix, we can easily build a fruit recommender system. For example, if you input that you like bananas, it will recommend you a maracuja, because those two have the highest correlation (0.67). You will be surprised by how powerful this simple approach is. I have used it successfully multiple times. If you want to do something like this with the frequency dataframe, you need to normalize the data first. However, that is a topic for another post.
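The article stops just short of showing the recommender lookup itself, so here is a small sketch of my own (it assumes the fruits_corr dataframe built above) that returns the most strongly associated fruits for a given input:

def recommend(fruit, corr_df, n=3):
    # Take the fruit's column of the Pearson matrix, drop the fruit itself,
    # and return the n most strongly associated other fruits.
    return (corr_df[fruit]
            .drop(labels=[fruit])
            .sort_values(ascending=False)
            .head(n))

recommend("banana", fruits_corr)
# Based on the 0.67 mentioned above, maracuja should come out on top.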
https://towardsdatascience.com/dealing-with-list-values-in-pandas-dataframes-a177e534f173
['Max Hilsdorf']
2020-09-06 17:26:25.635000+00:00
['Data Analysis', 'Programming', 'Python', 'Pandas', 'Data Science']
Cornell scientists develop “killer cells” to destroy cancer in lymph nodes
Cornell biomedical engineers have developed specialized white blood cells — dubbed “super natural killer cells” — that seek out cancer cells in lymph nodes with only one purpose: destroy them. This breakthrough halts the onset of metastasis, according to a new Cornell study published Nov. 2 in the journal Biomaterials. “We want to see lymph node metastasis become a thing of the past,” said Michael R. King, the Daljit S. and Elaine Sarkaria Professor of Biomedical Engineering and senior author of the paper, “Super Natural Killer Cells That Target Metastases in the Tumor Draining Lymph Nodes.” King worked on the study with lead author Siddarth Chandrasekaran, Ph.D. ’15; Maxine F. Chan ’16; and Jiahe Li, Ph.D. ’15, biomedical engineering. For tumor cells, the lymph nodes are a staging area and play a key role in advancing metastasis throughout the body. In the study, the biomedical engineers killed the cancerous tumor cells within days, by injecting liposomes armed with TRAIL (Tumor necrosis factor Related Apoptosis-Inducing Ligand) that attach to “natural killer” cells (a type of white blood cell) residing in the lymph nodes. These natural killer cells became the “super natural killer cells” that find the cancerous cells and induce apoptosis, where the cancer cells self-destruct and disintegrate, preventing the lymphatic spread of cancer any further, said King. “In our research, we use nanoparticles — the liposomes we have created with TRAIL protein — and attach them to natural killer cells, to create what we call ‘super natural killer cells’ and then these completely eliminate lymph node metastases in mice,” said King. In cancer progression, there are four stages. At stage I, the tumor is small and has yet to progress to the lymph nodes. In stages II and III, the tumors have grown and likely will have spread to the lymph nodes. At the stage IV, the cancer has advanced from the lymph nodes to organs and other parts of the body. Between 29 and 37 percent of patients with breast, colorectal and lung cancers are diagnosed with metastases in their tumor-draining lymph nodes — those lymph nodes that lie downstream from the tumor, and those patients are at a higher risk for distant-organ metastases and later-stage cancer diagnoses. In January 2014, King and his colleagues published research that demonstrated by attaching the TRAIL protein to white blood cells, metastasizing cancer cells in the bloodstream were annihilated. “So, now we have technology to eliminate bloodstream metastasis — our previous work — and also lymph node metastases,” King said. This study was funded by the nonprofit organization Lynda’s Kause Inc., started by Lynda King, who died from cancer in July 2014. Lynda King is no relation to Michael King. The foundation funds metastatic cancer research and patient support, and Michael King’s laboratory received the organization’s first two research awards.
https://medium.com/cornell-university/cornell-scientists-develop-killer-cells-to-destroy-cancer-in-lymph-nodes-1e2684a4473a
['Cornell University']
2015-11-13 00:18:22.625000+00:00
['Cancer', 'Research', 'Health']
The truth about political violence
Very often — almost always, in fact — mainstream forms of media remind us of who really IS hateful. Republicans are party of bigotry and racism. Conservatives are sexists and hateful. Right-wingers are so hateful and intolerant that we need the site Right Wing Watch to keep an eye on them… It might seem obvious that one side of the aisle has a near-monopoly over hate. After all, right-wing acts of violence are consistently being talked about, while you hear nothing about “socialist terrorism” or “far-left violence”. It’s safe to say that right-wing violence should be our main concern. Or is it? Coming to that conclusion seems like the sensible thing to do if you’re not very in-the-know, so to speak. However, we should take into consideration two factors: First, that most of the mainstream media is heavily biased in favor of the left. And second, that many people are trapped in their own echo-chambers and therefore are only exposed to one viewpoint. With this in mind, we need to dig deep into the statistics ourselves to see if this conclusion holds up. The claim that “right-wing extremism” is on the rise comes from domestic terrorism statistics from groups such as the Anti-Defamation League. These statistics show that right-wing terrorism is on the rise and is the biggest threat of violence in comparison to other ideologies. The problem with this data is the way these crimes are often categorized under “right-wing violence”. For example, some cases in which the perpetrator’s motivations are uncertain are still considered right-wing terrorism, instead of being labeled under a more general term. The same is true for cases that are labeled as “anti-government extremism”, they conveniently end up backing up the supposed “right-wing terrorism” trend. Image from New America’s report on terrorism. Surprisingly, it shows that Islamic terrorism is still a greater threat than right-wing terrorism. There is also the fact that definitions are not applied equally to both sides. For example, they often consider anti-Semitic hate crimes as right-wing, whilst similar crimes committed by black people are not considered left-wing. Also, the data usually begins after 2001, conveniently omitting the 9/11 attack which killed nearly 3000 people. There is enough reason to consider these statistics to be self-serving and disingenuous. Even if they were completely credible, the idea that right-wing violence poses a grave threat to our safety is ludicrous. In none of these reports do the yearly number of deaths attributed to right-wing extremism surpass the number of those caused by bee stings. Not even half of that number, in fact. That said, statistics can only help us so much. They can be very useful in identifying trends, but with such low numbers of incidents even that proves to be difficult. It takes more than data to quantify ideological and narrative changes. This is why keeping up with the news is so crucial. Clairvoyance doesn’t exist, but keeping up with the latest current events can give you an idea of what’s going on in our society. That is, of course, limited to what your sphere or influence is feeding you. An echo chamber problem? Or… One could boil the problem of only right-wing outrage being talked about down to a mere echo-chamber issue. This isn’t the case at all. Both sides of the aisle suffer from polarization and see the world through their biased lens, but they are in no way on the same footing. 
Leftist/progressive views will always take precedent because, as previously stated, the mainstream media is overwhelmingly biased towards that side. The average person won’t go out of their way to find alternative, non-biased sources of media that will inform them about left-wing violence. Those that do so are often labeled as consumers of fake news, even when their sources provide clear pieces of evidences such as video footage. For example, the beating of journalist Andy Ngo by Antifa protestors was seldom covered by mainstream media sources. The attack was brutal, to say the least. Antifa protestors threw cement-filled shakes at him, leaving him hospitalized with head contusions and a brain hemorrhage. Nevertheless, it seems that the episode wasn’t relevant enough in the eyes of the mainstream media. Some people on the left did talk about it on their own, mostly on Twitter. Instead of condemning the attack, many people on the left excused it by saying he “had it coming’. Slate writer Aymann Ismail even said he “deserved worse”. There has been a very worrying amount of leftist apologists since 2016 election. He who falls silent concedes, and a growing number of leftists are refusing to outright, firmly condemn acts of violence such as these. Political violence is not a partisan issue. Blaming the victim just because their political views don’t align with yours enables more of these attacks to happen. Recently, an Antifa terrorist was killed by the police while attempting to blow up an ICE facility. Although he received almost universal bashing from both sides, there were still people who called him a “martyr”. Surprisingly, this story also wasn’t covered by any mainstream media outlet, let alone by CNN or The New York Times. Apparently not relevant enough either… Not as relevant as covering a random act of violence by a Trump supporter, anyway. Or petty controversies and events that turn out to be hoaxes. Or, god forbid, a widespread conspiracy theory about Russian interference in our elections. Even more recently, there were two deadly mass shootings in Ohio and Texas. Both happened in less than 24 hours. The Ohio shooter was extremely liberal, pro-Antifa and pro-communism, as evidenced by his social media. He thought of the ICE bomber as a “martyr”, expressed support for Elizabeth Warren, and said that he “wanted socialism” and that he would “not wait for these idiots to finally come round to understanding” An image showing Connor Bett’s Twitter account retweets Devin Patrick Kelley, the Texas Walmart shooter, also left a piece of his mind for us to see. In his manifesto, which is something mass shooters seem to have every time now, he expresses many of his political opinions. Among some of them is the support for universal basic income, universal healthcare, and climate change awareness. Of course, his manifesto also had its fair share of far-right support. He stated his support for people like the New Zealand gunman, as well as promoting the idea that there is an ethnic “invasion” of Hispanics going on. The point of knowing these facts is not to blame the other side, but to realize that the caricature of the crazy right-wing shooter doesn’t represent reality. In general, people — Kelley included — have a complex set of beliefs that are often contradictory. Otherwise known as cognitive dissonance. It seems that both the far-left and the far-right have contributed to this man’s radicalization, not just the far-right. 
But yet again, these facts are ignored in favor of pushing the far-right fearmonger narrative. As the next election is taking place relatively soon, ignoring facts like these can prove to be very dangerous. What’s especially telling about the state of the left-wing is not the one-sided coverage by news outlets, though. Pushing a narrative that lines up with your political goals is one thing. Endorsing or failing to denounce violent groups is much more pathological. Unfortunately, this is what we’ve seen from certain groups in society. It’s no news that academia is mostly supportive of progressive, often straight-up socialist ideologies. This is so much so that students often get indoctrinated into becoming activists and radicals. This is not a recent occurrence either. Organizations such as Campus Reform, the National Association of Scholars, and independent reporters have been covering these educational problems for a long time now. Former John Jay professor and supporter of Antifa’s tweet on policemen The problem is that many college professors and faculty often fail to condemn groups like Antifa and their violent or disruptive acts. Some college professors even call for discrimination and even violence against conservatives themselves. In an environment which shames and censors dissenting ideas, it’s inevitable that people will get radicalized. Yet this is an ongoing issue that won’t get better anytime soon. A similar problem occurs with one of the most important political institutions in the country. The Democratic establishment has been largely silent on far-left violence, even as recent attacks unfold. Following the attack on the ICE facility, figures such as Alexandria Ocasio-Cortez and Ilhan Omar (the same Ilhan Omar that routinely expresses anti-Semitism) were asked if they condemned the ICE attacks. They stayed silent. When Antifa attacks police and Trump supporters in places like Berkeley, Democrats stay silent. When they Riot in other Democrat-run cities, the police are told to stay on the sidelines. Despite how many times they and other leftists instigate violence, Democrats never deem any far-left group as a threat. Some Democrats are actually open about their support to them. Rep. Keith Ellison Tweeted out a photo of himself holding an Antifa “Anti-fascist handbook”. He claimed that he had “found the book to strike fear in Donald Trump’s heart”. A morbidly similar phrase to “striking fear in the hearts of the unbelievers”. Former Vice President Joe Biden opened his presidential campaign by calling Antifa a “courageous” group of Americans. Picture of the Ellison’s deleted tweet These and more examples suggest something troubling about the Democratic grassroots support. It shows that a lot of it might come from a fundamentally radical constituency. Despite all the claims about the Republicans and conservatives having ties to white supremacy, Republicans unilaterally condemn white nationalism and far-right extremism whenever they can. The same cannot be said about the left and Antifa. And as much as you want to criticize Trump for saying there are fine people on both sides, trying to argue the opposite is falling onto a divisive, partialized, and dishonest narrative. A greater evil doesn’t take away from the lesser one. They’re both evil. Terrorism should be condemned regardless of who is the perpetrator. Bad actors should be shamed regardless of who they support. This is why it’s so important to acknowledge that hatred from the left is also on the rise. 
Not only from people who support far-left groups, but also, to a lesser extent, liberals in general. People with MAGA hats being harassed has become a common occurrence. Violence against political opponents has practically become glorified in many spheres. Intolerance towards dissenting opinions has become so bad that your livelihood can be put in jeopardy if you don’t fall in line. It’s time to put a stop to hate, and the first step to doing that is to realize that both sides are guilty of contributing to the problem. One side controls the popular narrative and mostly gets free rein to do what they wish. The other side reacts to an almost systematic shutting-down of their speech. This is not to paint the right wing as the victim here. Any act of extremism or blatant bigotry is unacceptable, and there is no justification for it. Nonetheless, we need to realize one thing, which is the topic for a future article: radicalization doesn’t happen in a vacuum. The question the left should be asking is not “What can we do to stop far-right extremism?”, it’s “What can we do to stop contributing to both far-left and far-right extremism?” To wrap it all up, the takeaway from this article should be clear by now: the left needs to recognize that a portion of its crowd is going down a terrible path. A path of intolerance and disdain, which is often encouraged by public figures and political figures. If the left wants to “heal” this country’s divide, they have to start by healing their own bloc.
https://medium.com/discourse/the-truth-about-political-violence-91d20672e004
[]
2019-10-24 20:32:25.109000+00:00
['Politics', 'Society', 'Antifa', 'Terrorism', 'Violence']
15 Things I Wished I’d Learned Earlier as a Software Developer
Lessons You’ll never know everything about anything Programming, algorithms, frameworks, libraries — they’re all too vast for any one person to understand the whole system. Swallow your ego and accept that you don’t know most of the things out there. Learn how to use Git properly The worst programmers are the ones who don’t actually know how to use Git and don’t ask for help, messing up the Git tree and causing hours of unnecessary work. Don’t be that person — learn Git. Learn shortcuts in your IDE You’ll be surprised how much time you can save if you’re effective in your IDE. That means knowing where all the menu items are and shortcuts to the most common ones, to spend less time having to click around and more time coding. Stay physically active Coding is a very sedentary activity. Staying physically active, whether walking around for half an hour a day or going to the gym, will do wonders for your productivity. Plan before coding I see too many experienced developers rush headfirst into programming without doing the proper preparation to ensure that they aren’t wasting time. Yes, I understand that we’re software developers because we love coding, but some nice flowcharts, feature requirements, and other preparations can be done that’ll make the programming that much easier and faster. For the love of god, use a linter Style consistency is a big deal in any software application where you’re working with others. Using a linter is a great way to ensure that you write code that follows the latest and greatest standards. Contribute to open source Simply put, contributing to open source gets your name out there, gives you experience working on large projects, and hopefully makes you feel good about giving back to the community. Stop binge-watching tutorials and start coding I’ve fallen into this trap many times myself, constantly watching tutorial after tutorial but never taking the step of creating something. That chasm is one that has to be jumped and, once jumped, will make you feel so much better. If you have time, blog Blogging is a great way to practice technical writing, get your name out there, and make people happy with great articles! Create your developer portfolio If you’re trying to get hired or show off your skills to friends, a portfolio is essential. This is a fun weekend project that will make your life so much easier when trying to show others your developer talents. Try to learn something new every day Don’t ignore the power of compound habits. Think of how much more knowledge you’ll have in a year if you commit right now to learning something new daily. Don’t take code critiques personally This is one that I struggle with myself, but when someone is criticizing your code, it’s not a personal attack. Have the matureness to step away and view what you’ve created from an unbiased view. It’ll help you write better and code faster as a developer. Don’t compare yourself to those around you Imposter syndrome is a big problem in the software developer community. Don’t make it worse by comparing your skills and talents to others’. Everyone has unique experiences. Recognize that your path has led you to where you are today, and appreciate that. Don’t be afraid to say no Don’t overcommit yourself and be firm in your dedication to a singular focus. Don’t be afraid to turn down offers to work more or on different projects, and prioritize yourself above your work. Learn basic DevOps DevOps, though often considered boring, is critical to building any kind of application. 
Take some time and learn the basics of how DevOps works so you can leverage it effectively for your next project.
https://medium.com/better-programming/15-things-i-wished-i-learned-earlier-as-a-software-developer-73d515a61aba
['Caelin Sutch']
2020-09-10 15:29:51.587000+00:00
['Software Development', 'Software Engineering', 'Habit Building', 'Habits', 'Programming']
Move over CIOs. Designers are taking over corporate boardrooms.
Design has emerged from the original architecture and furniture studios, automobile factories and Silicon Valley’s computer labs, and is heading to a corporate boardroom near you. Its new form is not a designer chair, handbag or technology. It is human. This new type of designer is equally comfortable in a navy suit or black turtleneck. Fuelled by top selling business books and management consultant reports, this new design movement is all about customer-tailored companies thriving in today’s uncertain economic and political climate. Design and Business Books over the past 10 years Over the past 15 years we have seen an exponential growth in new design-related jobs — from UX designer, service designer, customer experience designer, business designer and chief design officer. Over the past five years we have seen job ads pop up in unexpected places. Designers are now inside banks, accounting firms, telecommunication departments and manufacturers. What is driving this design renaissance? It is a combination of influence, proof and timing. Early influences can be attributed to a series of published works over the past ten years, particularly those authored by big thinkers like Roger Martin, design consultancy leaders like Tim Brown, and design executives such as John Maeda. They, along with a growing academic and industry community, have long connected design to business processes, operations and strategies. The proof would be collected over many years and finally published in 2013 by the Design Management Institute (DMI). Their Value of Design report aimed to nudge the capital markets to invest in design-infused companies, as they were surpassing traditional firms with an average of 220% return on their share price value. The report was the first to offer proof that a well-designed product, service or experience sells itself. Top business magazines such as Forbes followed, supporting DMI’s findings in their 2014 article ‘What Is Behind the Rise of the Chief Design Officer’, explaining why design is moving into the C-suite. In 2017, the Harvard Business Review provided more reasons for the need for design leadership, with their article on how CEOs were admitting to costly over-engineered processes, products and business models resulting in loss of customers, jobs and brand loyalty. A few weeks ago, global management consultancy McKinsey published their “The Business Value of Design” report, making the case for how integrating design across an entire company will have positive impact on employees, customers and the bottom line. It is perhaps this most recent report, authored by trusted management consultants that is creating the real design buzz in the boardroom. If you weren’t paying attention, you may have missed the business transformation activities by the world’s top management consulting firms — they have been actively acquiring design agencies, creating their own design-leadership practice, placing Chief Design Officers and even offering design-thinking training for their multinational clients. Design has officially emerged beyond products and services (e.g. Apple and Starbucks), to experiences (e.g. Amazon and Uber) and strategies (e.g. Designed in China). Design and its cousin ‘design-thinking’ are now being lauded as a much-needed mindset for leaders — those seeking a customer-centred approach to business innovation, reimagining operations and rethinking supply chains and financial models. Why? 
Design is proving to be extremely effective as a creative problem-solving approach for business, and appears to be an antidote to the over-engineering mistakes of the past. Package goods corporations are seeking to understand how Spanish clothing brand Zara is able to get street fashion trends into the hands of retail customers in record time. Manufacturers are watching Amazon’s bold and encroaching actions in redefining supply chains. Financial institutions are following Apple and Google as they are competing with tech companies for mobile payment transactions. In Canada, designers are finding their way to corner offices. IBM is growing their design leadership studios, Scotiabank is expanding their Digital Design Factory and Deloitte is establishing their Greenhouse design advisory group as customer insight departments. Make no mistake these are not your typical designers, they are armed with graduate degrees in business, strategy and design. In early 2018, the University of Toronto’s Rotman School of Management created a new professorship in Business Design (the first of its kind in the world), to teach and research the next generation of design leading MBAs. These graduates are uniquely positioned to make a business case for design’s ROI for their organizations while integrating customer needs. To better understand customers, companies are starting to rethink their processes and management teams. Designers are now heralded as those who will guide global corporations and local government organizations in offering services, experiences and strategies will both delight customers and shareholders. Interestingly, Canadian design educator Robert Peters once stated “Design creates culture. Culture shapes values. Values determine the future”. It appears companies are finally responding. A version of this article was published on The Conversation: https://theconversation.com/why-designers-have-arrived-in-corporate-boardrooms-106437
https://angelsun.medium.com/move-over-cios-designers-are-taking-over-corporate-boardrooms-30a59bfefda5
['Angele Beausoleil']
2019-01-03 19:19:47.477000+00:00
['Strategic Design', 'Design', 'Business Design', 'Business', 'Innovation Management']
React Native Can Help You Cut Your Mobile Development Costs By Up To 33%
The framework implements a new way of building native apps, thanks to which you're able to save development hours. React Native allows you to build native mobile apps using JavaScript and declarative UI components. The components correspond to the native UI components of iOS and Android. One code So, in development for both platforms, we can use JavaScript instead of e.g. Swift and Java. This is the biggest advantage of React Native: thanks to this feature we're able to share code between both mobile platforms, which is why code developed for iOS can also be used on Android. Of course, you're not able to share the whole codebase, because you use native modules for specific platforms. Still, with React Native you can significantly reduce development time when building an app which will be used on both iOS and Android. One team Having one team for both mobile platforms gives you flexibility in creating and managing the team. One lead developer in place of two, as in the native approach. As a result, you can have a smaller team that is less costly and easier to manage. Especially when you decide to simultaneously develop a web app using React.js. You can share some parts of the code between three platforms instead of two: Android, iOS and Web. Then you can structure the project as separate modules based on features instead of specific platforms. Faster results React Native is a real alternative to native mobile app development. We've trusted React Native with one of our mobile apps that took 1.5 months to develop. Based on our experience, we've been able to estimate that using the native approach would have cost us 0.5 month more, which equals 33%. Summarising, shorter development time, reduced manpower and an easily manageable project are good reasons to consider React Native for developing your next mobile app for both iOS and Android. This article was originally published on the Briisk Blog.
https://medium.com/briisk/react-native-can-help-you-cut-your-mobile-development-costs-up-to-33-3821560cd665
['Lukasz Sajkowski']
2017-12-12 14:11:17.898000+00:00
['React', 'React Native', 'Business Development', 'Software Development', 'Web Development']
Should I Kill My Roommate?
Okay, it might be inapt to compare cases like mine with a wild case of extremism. But to Kierkegaard, and to me, there isn’t much of a distinction. Not because he isn’t a consequentialist (he isn’t), but that he noticed that behind every moral decision lies an idea that moral philosophers often overlook: faith. Faith isn’t easy. That’s why moral decisions in the real world are never easy. Think about it. Faith asks us to believe in something in the future now. For those of us who go to religious training camp every now and then, it wouldn’t be that hard. For my encounter, I would’ve needed to believe that the man was telling the truth, that my money wasn’t being cheated, and importantly, that I was right in giving money to a shady stranger with a half-decent story. To give or not to give, it’s difficult either way. Photo by Ethan Elisara on Unsplash Moral Actions Require Faith Kierkegaard thinks that behind every significant moral decision in our life lies a paradox. This paradox is precisely the question I set out: should I do the things I’m not supposed to? Here, Kierkegaard juxtaposes what we are objectively (or socially) obligated to do and what we are subjectively (or personally) obligated to do. The truth is, these two hardly ever align. Kierkegaard notices this. Let’s think about another popular philosophical argument utilitarian Peter Singer made. He argues that we are obligated to donate a portion of our money to charity regularly. First, we contribute to the overall well-being of our world. Second, it doesn’t cost us greatly to do so. But most of us don’t do this. If we understand Singer’s reasoning, it would be close to infallible. We have very strong reasons to contribute to charity regularly. We would do well in living less lavish lives so that others can have barely liveable lives. Do we have any reason to do so? Yes, plenty, in fact. Do we have any reasons not to? Intuitively, not really. I say ‘intuitively’, as most of us don’t genuinely believe that our donation would contribute to anything. Sure, you’d probably say it’ll help some random starving kid get a week’s worth of food. But in the grander scheme of things, I wouldn’t stake my life on that donation having any significant impact. I would think neither do you. That’s why I don’t give to charity. Or rather, that’s why I don’t subscribe to any charity programmes. This is also a reason why many people are reluctant to vote. They do not believe in the significance of their votes. It is also why people are reluctant to recycle or turn vegan. I’m not vegan, because I sincerely think that my consumption of animal products has nothing to do with their suffering — fight me. But I do vote. On occasions where we are detached from the outcome, we often find it hard to believe that a single person can change the outcome — despite being constantly reminded that it does. As we can see, one of the greater motivations behind action and change is belief. It even trumps reason. But it’s not just any belief. It’s faith in particular. It’s the belief in the unknown. Precisely because we cannot experience the consequences of some of our actions, we find it difficult or impossible to believe in our actions. We lack faith. So, we lack conviction. Thus, we don’t do it. Faith is generated when we believe in our personal obligations. Vegans genuinely believing that they not consuming animal products spares them the responsibility of animal cruelty (it does), so they don’t consume. 
People who regularly and indiscriminately give to charity genuinely believe that their actions can save lives in some distant place, so they do it. In cases like these, our objective obligation to do certain things exists; our personal obligation doesn’t. These tend to lead to frustrating outcomes where people aren’t doing what they should be doing. It’s also the reason it’s difficult to convince people to do supererogatory tasks. You got to make them believe in something they genuinely don’t — tough luck. This is Kierkegaard’s critique of modern ethics. It tends to forget that moral actions and decisions require moral agents to have faith in what they do. More interesting cases happen when objective obligations contradict with our personal obligations. It’s when Timmy wants candy, but mommy says he shouldn’t. It’s when I’m inclined to give that man my money when I really shouldn’t. It’s when Abraham attempting to sacrifice his son for God when he really shouldn’t. Should I Kill My Roommate? Studying abroad for many years taught me one thing: it’s better to have a room to yourself. I’ve had several roommates, I’d thought of killing most of them on many of my sleepless nights — I’m a very light sleeper. They’ve all done crimes that went undetected by man. One liked to drag chairs, creating a choral of screeches. One liked to sing (if ‘sing’ is even the right word) at 3 am while doing his economics assignment. Another liked mangling plastic bags when I’m asleep. They must be brought to justice. Photo by Quin Stevenson on Unsplash I’ve had voices in my head telling me to sacrifice them to the God of Slumber in exchange for good rest. To assure you my sanity, my conviction wasn’t as strong as Abraham’s. Any sane person would deem that a preposterous thing to say or to even think about. But the interesting thing about thinking about dark things like these makes you wonder why, indeed, do I not kill my roommate? I had very strong reasons not to kill him. But I also had very strong reasons to kill him. Philosophers like to say that if the overall reason to kill my roommate outweighs the reason not to kill, then I can or ought to kill. When it comes to evaluating the reasons, I couldn’t be sure which one actually takes precedence. I really want my beauty rest. See, if I kill him, I’d end up in jail — if I get caught. I’d have a guilty conscience, and I’d have more sleepless nights. If I don’t kill him, I’d definitely not get any rest any time soon. I couldn’t be sure of either one of the outcomes. Which one was actually important to me? So here, I’d need to believe in something that is unbeknownst to me: that I’d get my rest eventually. That’d the torment of lying awake at 2 am when I should be asleep will end soon. I wouldn’t have to kill him. I believed in not killing my roommate (fortunately, but boringly). And I don’t believe that killing my roommate would indeed give me a good rest. But it’s not surprising that some egomaniac or religious fanatic in my position would have killed their roommates. We’ve had people blowing up buildings and killing magazine authors because they’d believe murdering people would give them ‘good rest’ when they go upstairs. Retrospectively, I wasn’t so different from them only that my conviction wasn’t as strong as theirs.
https://medium.com/illumination/should-i-kill-my-roommate-1b95473f0961
['Wei Xiang']
2020-12-13 10:06:35.769000+00:00
['Self-awareness', 'Faith', 'Philosophy', 'Humour', 'Ethics']
The singular value decomposition in a nutshell
Matrix decomposition is a ubiquitous technique in mathematics with a wide variety of practical and theoretical applications. Here a matrix is decomposed or factorized as the product of two or more matrices, where the factors satisfy some desirable properties, depending on the nature of the original problem. So by matrix decomposition we mean a multiplicative decomposition, not an additive one (although additive decompositions can also be useful in some cases).

For instance, the LU-decomposition and its variants (LDU, LUP, LLᵀ) are useful for solving linear equations of the form Ax = b, where A ∈ ℝⁿˣⁿ is a given square matrix and b ∈ ℝⁿ is a given vector. Here L is a lower triangular matrix with all 1's on the diagonal and U is an upper triangular matrix. Knowing that A can be written as A = LU, we can transform the above equation into LUx = b, which is equivalent to the system Ly = b, Ux = y. The solution of this system is fast: since U and L are triangular matrices, the computational cost of solving these systems (usually referred to as forward and backward substitution) is O(n²). The matrices L and U are essentially "by-products" of Gaussian elimination, where — loosely speaking — L stores the steps of the elimination process and U stores the resulting matrix after the elimination. Solving Ax = b using Gaussian elimination directly requires 2/3 n³ flops, whilst creating A = LU and solving the two triangular systems requires 2/3 n³ + O(n²) flops, which is of the same magnitude, so it is not immediately clear why this extra effort would make a difference. The gain becomes clear if the task is to solve many linear equations having the same matrix A, where only the right-hand side b changes. In this case the elimination procedure, that is, the creation of the LU-decomposition, needs to be done only once.

Another nice application of this decomposition is calculating the determinant of A. Because of the forms of L and U, det(A) = det(L)·det(U) = u₁₁u₂₂⋯uₙₙ (the product of the diagonal entries of U, since the diagonal of L consists of 1's), so det(A) can be calculated in O(n³) flops as well (instead of using Cramer's rule, which has O(n⋅n!) asymptotic complexity and is therefore practically useless).

This decomposition is one of the most widely used matrix decomposition techniques in applied mathematics. However, it is much less well known in data science than in numerical analysis. The decomposition method that is as widespread in data science as LU is in numerical analysis is something else, which I would like to introduce in the following sections.

The singular value decomposition of an arbitrary matrix

What data scientists use quite often is the singular value decomposition, which can be found behind linear regression and least squares methods, and which is a useful technical tool for solving linear systems that have no unique solution (the Moore-Penrose pseudoinverse), performing principal component analysis, and calculating low-rank approximations. There is also a plethora of real-world applications of the singular value decomposition, such as image compression, recommender systems, numerical weather forecasting or natural language processing. In what follows we would like to introduce the concept of the singular value decomposition (SVD for short) and illustrate it by showing some applications.
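Before turning to the SVD, the LU workflow described above can be made concrete in a few lines: factor once, then reuse the factors for many right-hand sides. This is my own illustrative sketch (it assumes NumPy and SciPy are installed; the original post contains no code for this part).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))                   # a made-up square system matrix
bs = [rng.standard_normal(n) for _ in range(50)]  # many right-hand sides, same A

# The O(n^3) work happens once, in the factorization.
lu, piv = lu_factor(A)

# Each subsequent solve reuses the factors and costs only O(n^2).
xs = [lu_solve((lu, piv), b) for b in bs]

print(np.allclose(A @ xs[0], bs[0]))  # sanity check: True

# The determinant also falls out of the factorization: up to the sign of the
# row permutation, det(A) is the product of U's diagonal entries.
```

The point is simply that the factorization is paid for once, while each additional right-hand side costs only a pair of triangular solves.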
Let A ∈ ℝⁿˣᵐ be an arbitrary (not necessarily square) matrix. It can be complex valued as well, but in the examples we are going to deal with real matrices only. Then there exist matrices U ∈ ℝⁿˣⁿ, D ∈ ℝⁿˣᵐ and V ∈ ℝᵐˣᵐ such that A = UDV*, where U and V are unitary matrices, that is, U*U = UU* = Iₙ and V*V = VV* = Iₘ, and D is a diagonal matrix, that is, dᵢⱼ = 0 if i ≠ j. The star operation means the conjugate transpose, that is, (A*)ᵢⱼ is the complex conjugate of aⱼᵢ, but since we are dealing with real matrices now, this is the same as the transpose of the matrix. The diagonal elements of D are nonnegative numbers, in decreasing order: dᵢᵢ = σᵢ, σ₁ ≥ σ₂ ≥ … ≥ σᵣ > σᵣ₊₁ = … = σₘᵢₙ₍ₙ,ₘ₎ = 0, where r is the rank of the matrix A. These σ values in the diagonal of D are called the singular values of A. Before we go into more detail, I would like to show how this decomposition can help to compress an image. We will rely on the following property of the SVD-decomposition.

Low-rank approximations of A

Let k ∈ ℕ be a given natural number, where k ≤ rank(A) ≤ min{n,m}. What we look for is a matrix Aₖ with rank(Aₖ) = k which is the best approximation of A among the matrices whose rank equals k. To formulate the low-rank approximation problem, we would like to solve the following minimization problem: find the matrix X with rank(X) = k that minimizes ||A − X||ꜰ. Here ||X||ꜰ denotes the Frobenius norm of a matrix X, which is the square root of the sum of squares of the elements of X.

The solution of this problem can be obtained from the SVD-decomposition of A. If A = UDV*, then we keep the first k values in D as they are and set the subsequent singular values to zero. Let us denote the resulting diagonal matrix by Dₖ. It is easy to see that we only have to keep the first k columns of U and the first k columns of V (that is, the first k rows of V*), since the remaining ones would be multiplied by zeros anyway. To sum up, the matrix Aₖ := UₖDₖVₖ* is the closest matrix to A (in Frobenius norm) having rank k, where Uₖ and Vₖ consist of the first k columns of U and V, respectively.

How can this knowledge be useful? Well, if A is a large matrix, that is, n and m are large and k is relatively small, then the information we need to store to approximate the information content of A is much smaller. That is, we can reduce the storage space significantly and still store almost the same information that the original matrix has.

Welcome to the lab

I would like to stop here, as the number of mathematical expressions is becoming intractable, and do some experiments with a computer. We will see how the SVD-decomposition can be created in Python and how to compute the best rank-k approximation of a matrix; later, in the second part, we will see how to use this machinery to compress an image. I would like to illustrate the above concepts on a toy example. We define a matrix of size 4 × 2 which has rank 2, and we create its rank-1 approximation using the SVD-decomposition. For this purpose we have created an IPython notebook where all the steps described above can be followed one by one. The notebook can be downloaded from this location.

In the first screenshot we have created our test matrix with rank 2 and performed the SVD-decomposition on it.

Low-rank approximation — creating the SVD decomposition of A

Then we check whether we can restore the original matrix from the factors (up to round-off errors). We also define another matrix B which has rank 1 and seems to be a good candidate for a rank-1 approximation of A.

Low-rank approximation — introducing the matrix B

Finally, we create the real rank-1 approximation of A and calculate the Frobenius norm of the residual matrix.
We can then see that the matrix that minimizes the objective function attains a much smaller value of this function than the naive approximation matrix B does.

Low-rank approximation — computing the solution to this problem

Next week we will continue with an interesting application of the SVD-decomposition, namely, how to compress images.
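For readers who would rather not open the notebook, here is a minimal stand-alone sketch of the same kind of experiment. This is my own illustration, assuming NumPy; the 4 × 2 matrix and the naive candidate B below are made up for this sketch and are not necessarily the ones used in the original notebook.

```python
import numpy as np

def rank_k_approximation(A, k):
    """Best rank-k approximation of A in Frobenius norm, via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A 4x2 matrix of rank 2 (its two columns are linearly independent).
A = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0],
              [4.0, 5.0]])

A1 = rank_k_approximation(A, 1)

# A naive rank-1 candidate: keep the first column of A, zero out the second.
B = np.column_stack([A[:, 0], np.zeros(4)])

print(np.linalg.norm(A - A1, "fro"))  # error of the optimal rank-1 approximation
print(np.linalg.norm(A - B, "fro"))   # error of the naive candidate, noticeably larger
```

The first printed value equals the second (smallest) singular value of A, which is exactly the residual the property described above predicts for the best rank-1 approximation.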
https://medium.com/balabit-unsupervised/the-singular-value-decomposition-in-a-nutshell-4141bf2b74cd
['Unsupervised Blog']
2018-07-10 15:03:52.472000+00:00
['Data Science', 'Jupyter Notebook', 'Matrix Factorization', 'Mathematics']
The Lure of the Sensational and the Extreme
I went into my MONA journey knowing very little about it — I had few details about the city, the space, or the story. It’s unlike me, and probably not entirely recommended. But there I was, landing in Tasmania on a bright Thursday morning in October, still busily preparing for my 30-hour visit. I wasn’t sure it was worth adding the extra leg to an already-short trip to Sydney. The location was enticing: Tasmania sounded like the actual end of the world to this American Midwesterner. And the museum founder’s commitment to his vision and hometown is impressive. Ultimately, I determined I couldn’t not go, if only to avoid ever having to say, “I went to Australia and had the opportunity, but didn’t make it down to the MONA.” Plus, once you’re halfway around the world, what’s another 2-hour flight? The Museum of Old and New Art (MONA) has an expansive campus just outside Hobart, Tasmania. The founder, David Walsh, is a billionaire Tasmanian who built his wealth as a professional gambler and “gave back” to his hometown by creating a destination museum that doesn’t take itself (or art?) too seriously. The brand — of the person and the museum — is very over-the-top and almost campy in nature. This was definitely a case of reputation preceding information: While I knew of the museum, that was all I really knew about the museum before buying a plane ticket three days before my departure. The museum was designed to be approached by water, so my official MONA experience began there. As I sipped bubbly and nibbled on a plate of prepared snacks, I listened to a brief Hobart history lesson over the loudspeaker and admired the carefully considered details throughout the cabin: old fashioned looking leather armchairs, whimsical window drawings, buttoned velvet pillows, a laughably oversized bench on the bow, and pink cartoonish bullets-on-steroids stools on the second story deck. The “not taking things too seriously” attitude was apparent in everything from the decor to the menu copy to the uniforms. So far, no regrets and all fun on my part. We sped up the River Derwent for 25 minutes and approached the waterside museum from a nearly perpendicular angle. I could make out a few core components — a roof here, a glass panel there. But there wasn’t much to latch on to. There was no obvious building that rose from the landscape. There was nothing to view, really. But, then, as I descended from the ferry toward the pier, more hints at the MONA experience emerged. A veined sandstone monolith, patinated steel panels, poured cement retaining walls, and leafy foliage filled the scene. A narrow, high-walled staircase led me from the waterfront to the main entry point. With each step, both the view ahead — corners of sculptures (and an actual tennis court) — and behind — an expansive landscape — seemed to get better. This was the experience you had by choosing the ferry approach: the building didn’t rise to meet you, but elements that slowly introduced what you were about to see did. Leading up to my visit, I heard descriptions from people who had visited that included “irreverent and fun,” entertaining, and “the best museum on earth.” On the flip side, numerous articles poked and prodded at the founder’s over-sized personality, at the scale and seemingly never-endingness of the project, and at the controversial art inside (will the general question, “Is this art?!” ever go away? It’s really the least interesting question one can ask.). 
Needless to say, my curiosity was piqued and yet I contained my expectations — it’s just a museum after all. This museum didn’t let my curiosity down. The building is strange and unpredictable. It’s not designed to wander through sequentially. It’s a gesamtkunstwerk that gives you very little to guide you. Upon entering, I got an iPhone loaded with their custom O app and a paper map. These became my compass — providing just enough information so I mostly/kind of knew where I was, but not enough to lead me along any particular prescribed path. A wall and hallway on the lowest level of the museum. Forget the white-walled, austere spaces you’re used to seeing. Here, some walls are literally the earth into which the building was constructed. The galleries and floors are intentionally disjointed and connected here and there, but not logically or predictably. The primary entry point is a spiral staircase that corkscrews through sandstone to deposit you on the 1st floor, which is the lowest floor (you actually can’t get to every floor from this stairwell…I think). From here, you make your way up erratically — there are no central stairs, instead, there are a few ways to get to each floor and half-floors from the center of the galleries and at the peripheries. The paper map is helpful, but also just abstract enough that you will get lost. A map that makes the interior space look simple, but it’s not. Tunnels connect spaces and inside the tunnels are artworks, or maybe the artwork is a tunnel or the tunnel an artwork? You often have to double-back on your route because you reach a dead-end of sorts. In one case it was an atrium sized Anselm Kiefer installation with all the tragedy and beauty typical in his work. As someone who prides herself on a rat-like memory for directions, the layout threw me for a loop. The museum has no wall labels, that is partly what the iPhone is for. With eerily accurate geo-location technology, the app gives you options to learn varying degrees of information about the works around you. You can get the basics: artist, date, materials. But via two other curatorial tabs within the artwork profile — the Art Wank and Gonzo — you could dive in further. With irreverent yet useful information and perspectives, each made me laugh and think. The Art Wank voice is similar to a traditional curatorial voice, while Gonzo is an assortment of voices ranging from the founder’s son to artists. Every time you click into a profile, the device adds it to a list that they will email you when you leave (a feature I loved). Interestingly, the lack of labels made me look at the artworks more closely. I overheard a few complaints from fellow visitors that the iPhones were a distraction, but most visitors I observed were looking at art more than their phones, and maybe even more than at most museums where wall labels become a central focus. I love learning about what I’m about to look at, but with by-default-there wall labels, it is easy to succumb to reading before looking. And it’s hard to separate that historical, educational, or op-ed content from your own reaction. At MONA, looking and being were the default. The downside to a viewing-centered experience is that it privileges people who either know about or are really comfortable around art. There were a few interactive pieces that weren’t obvious and visitors less familiar with art or particular artists may overlook some pieces. 
For example, the Pulse Room by Rafael Lozano-Hemmer is most fully experienced if you interact with the heartbeat monitor handles (a la an exercise machine) at one end of the room. These capture your heartbeat and transmit it to the hanging light bulbs which display your beat in undulating waves. It’s very cool to see something that you typically don’t even notice inside your body externalized into a visual pattern. But if you didn’t really know what the hell those handles were, you might miss it altogether. The collection is vast. The through-line I connected with was that each piece felt “of the body.” Some made you physically do something with your body, like the Lozano-Hemmer piece above. Some were places to put your body, like Randy Polumbo’s Grotto, a fun-house-like room “carved” into the side of a hallway. The layers of silver foam seating rise to meet plastic mirrors lining the organically shaped ceiling to create a bright, reflective nook. Over-sized plastic flora punctuates the walls, adding dimension. Instead of an of-the-earth feel, it has an other-worldliness to it. In MONA fashion, the grotto doesn’t replicate its namesake reference, it’s a tilted, twisted, version. Grotto at the MONA. Some art made you participate more than “traditional” artwork — like Gianluca Gimini’s bicycle-themed galleries. Upon entering, there is a small set of instructions posted on a desk with paper and pencils. Visitors are asked to draw a bicycle from memory, which quickly turns from feeling doable to being impossible. As Gimini has discovered over the years, people are really bad at drawing bikes (myself included). Upon completion, visitors add their name, age, and profession to the paper and drop the drawing into a bin. In the next gallery, it becomes clear: lining the walls are hundreds of small, framed drawings of hilariously inaccurate bicycle drawings. After viewing a few, I realized that I forgot to draw a seat on mine. But that’s not all, that’s regular museum stuff. MONA and Gimini worked together to fabricate five drawings into actual “bikes.” The final gallery features a collection of poorly proportioned, non-sensical bicycles that are expertly crafted and perfectly displayed. The juxtaposition of absurdity and precision is perfect. Gianluca Gimini’s bikes And the four site-specific James Turrell pieces are about vision, but are also so soulful, that they enrapture your entire body. I planned ahead and reserved a spot for Turrell’s Unseen Seen, which you view and then proceed immediately to the Weight of Darkness. Unseen Seen is a large, enclosed circular pod that sits in the middle of one of the MONA restaurants. You sign a waiver (I’m not pregnant, check; I’m not epileptic, check; I’m not drunk, check; I won’t engage in anything sexual, check.) and then they lead you to a spaceship-like door that opens, revealing a warm interior. A maximum of two people can take part at the same time, but I was the only reservation for my time slot, so I had a solo experience. You ascend the stairs and lay on a bed at the very top of the globular space. You’re not supposed to move too much lest the infrared sensors light up and the host thinks you might be up to something…naughty. I chose the “soft” sequence (the other being “hard”) and the 14-minute experience kicked in and pulsating light washed over me. I’ve heard Turrell comment about his works with something like they allow you to observe yourself seeing. 
I have seen many Turrell works and this particular one was the most intense and all-encompassing. The light screen wraps around you like an Omni theater, so there is no edge. It’s like looking at nothing and everything. There is nothing to “see” and yet your eyes work very hard to figure out what’s there. Immediately after departing the pod, my host escorted me to the Weight of Darkness, about 100 feet away. She handed me over to a new host who explains how this piece worked. Actually, how I will have to work to access the piece. To get into it, visitors have to walk into and navigate a pitch-black hallway that snakes around twice before ending in a pitch-black room with two armchairs. The armchairs are directly next to the door, the host says, so they won’t be hard to find in the dark. The host claims he will retrieve me after fourteen or so minutes. Then, he basically says, “go on now!” I place both hands on the wall as my guides and proceed into darkness. With the remnants of Unseen Seen still floating around in my visual memory, the blackness felt extraordinarily dense. It was like that dream we’ve all had in which you’re running but in slow motion. I inched along, groping my way through the hall and eventually into an armchair. The darkness was very relaxing with an oh-so-slight undercurrent of fear (what if they never came to get me?!). I honestly can’t say whether I kept my eyes open or not, it didn’t really matter. At the very end, I swear I saw something in the distance, but it wasn’t light nor was it an object. I’ll never be sure. I wandered a little more aimlessly after that. But got back on track in the Nolan Gallery, which houses an immense piece titled Snake, which fills a wall that is so large it looks as if nothing could fill it. The 30x150-foot piece is by Sidney Nolan, an Australian artist. The scale is so vast that Walsh had to redesign part of the MONA to accommodate it after its acquisition. The 1,620 small compositions that makeup Snake hang in a grid that collectively creates a larger composition. Each discrete and yet part of a whole. While impressively sized, it feels particularly straightforward for a museum from which I had already come to expect something else (more?). But that might be the power of the piece — a nod to local history and an unexpected reminder of the outlandish size of the space. At 3:54 I found myself at Wim Delvoye’s Cloaca Professional that was at the far corner at the end of whatever floor I happened to be on. I entered a room with a foul smell and five sacs hanging in a very laboratory-like setting. A small sign near the tubes and sacs said there would be a feeding at 4:00. I thought, “What timing! And what is a feeding?” Just then, at 4:00, a woman popped out to answer my question and many more. Wim Delvoye’s “Cloaca Professional” in the gallery. Cloaca Professional simulates a human digestive system. On the left, an employee adds real food to the system (that’s the feeding part). The machine mixes that food with some other liquids that our human system needs — like water — via a collection of tubes and dispenses it all together into the first sac. Each sac replicates a microenvironment that food moves through within our bodies (think: stomach, the intestine, and lower intestine) and the contents of the feeding move through each sac over the course of a cycle. At the very end is a container into which the machine, you guessed it, poops. It also poops on a schedule: every day at 2 pm. Except for the day I was there. 
It was constipated, so there was no feeding and no poop to see. The employees were working on restoring the chemical balance to get it un-constipated. What a job! (There is an amateur recording of the MONA presentation and explanation on YouTube if you’d like to watch.) I later read that the museum focuses on art about “sex and death.” That seems too sensational and shallow to me. And very much in keeping with how the founder talks about the museum — I get the sense that he wants it to appear like a giant wind-up toy when he knows that it’s more, or could be more if you — the visitor — let it. Walsh, the owner, likes flair and clearly enjoys being awed, immersed, and swept away in art. He seems to enjoy grandiose ideas and art that is an extreme commitment to those ideas. He tried to imitate that with his work of art — the museum itself. I can see the boyish, immaturity in some of the works and in his “my museum is bigger and weirder than yours” approach to the MONA. But some of those same pieces — and the experience of the space — also draw out a sense of childish joy or “oh damn” factor that make you feel something in your body, in your bones. Maybe his biggest weakness or the museum’s most common criticism is also his, and its, strength. I keep thinking about one of the last works I saw, Queen (a Portrait of Madonna) by Candice Breitz. It was in a closed-off room with a door you had to have the confidence to open and enter. Inside were 30 television monitors displaying close-ups of 30 different people singing the song Cherish by Madonna. Each is seemingly doing the same thing — singing the lyrics — and yet each screen was wildly different. Different sounds, different movements, different expressions; some danced, some were concentrating so hard on trying to do it well and correctly, and others were barely even trying to find the right lyrics. Cherish the thought Of always having you here by my side (oh baby I) Cherish the joy You keep bringing it into my life (I’m always singing it) Cherish your strength You got the power to make me feel good (and baby I) Perish the thought Of ever leaving, I never would At the end of the song, the 25-screen cacophonous scene dissolves. The voices taper off, the subjects laugh and look away, and they slowly end their performance and move away from the camera. We all enter a museum with the same general instructions: Walk around! Look at the art! And in the end, an infinite variety of experiences unfold every time. Fractured and put back together again by our own stories and impressions and opinions. We’re there in the space at the same time, and then we disperse to go back to our everyday lives. I had a lot of questions going into this trip and about the museum in particular: Is a museum like this worth the hype? Does it live up to the hype? Is this a giant egotistical pet project or something that contributes to the place? Or both? Can a museum be about the experience and the art? What’s the tipping point and when does it become absurd? Even if it becomes absurd, is that bad? In the end, I didn’t think about most of my original questions while I was there. I was too lost, literally and figuratively, in the experience. I missed some of the major artworks, I spent an hour and a half of my 5 hours eating, and never quite figured out the stairs. But I also laughed, got scared, felt sad, and was mesmerized. It is a physical journey to get there, but that is only the beginning. 
The real journey starts when you turn yourself over to the feelings and fun that go along with the visit. For an afternoon I was an explorer, a witness, and an accomplice. The MONA is not a typical white-walled museum experience. It requires more from you. The silent, contemplative Western visual art experience that is so common to me felt far away. I had to move, engage, listen, hear, find my way, learn new things, observe, and seek. It didn't strip away elements to highlight fine art — it piled things on and activated my entire body and mind. I experienced the art, and myself in relation to the art, very differently. It provoked me, it provoked something more, and you could feel it on a cellular level.
https://medium.com/dose-of-daily-design/the-lure-of-the-sensational-and-the-extreme-2feb03efefff
['Lyz Nagan']
2019-12-05 15:21:56.676000+00:00
['Australia', 'Tasmania', 'Design', 'Travel Writing', 'Art']
How To Become Self-Aware By Indulging In This Beloved Pastime
How To Become Self-Aware By Indulging In This Beloved Pastime And without meditation, drugs, or therapy Photo by Helena Lopes on Unsplash It’s the purest form of self-discovery. It’s not meditation or journaling though those things help. Introverts are naturally better at it than extroverts, but this skill requires development and fine-tuning regardless of your raw aptitude. I call it active observation. You might call it people-watching, but it’s not the passive form of people watching you associate with this activity. Think of it as people-watching with purpose. Improving your observational powers may not sound compelling. It might even seem boring. I call it my secret weapon of personal development. How often do you notice, question and probe the experiences that happen around you? Most of us barely notice the treasure trove of learning opportunities from professional people watching. A sample from a recent people-watch experience Improving your power of observation will change that. Here’s how to sharpen your skill. Pick a location Find a location with lots of people. I prefer coffee shops because you often find people engaged in discussion. You don’t have to wait for something to happen. There’s almost always a fascinating engagement to observe: relationship talk, family, work, and business to customer meetings. I particularly like to observe job interviews. I’m not sure when job interviews at Starbucks became a thing, but you can learn a lot about power dynamics from observing these interactions. If you have the time, snag a spot on a park bench, dog run or stroll through a busy museum. The frequency of the activity won’t be as concentrated as a coffee shop or a bar, but it’s helpful to change things up and observe people in different environments. No distractions Take off your headphones. Put your phone and laptop on airplane mode. You can keep your laptop open to give the impression you are working — as opposed to spying — but leave it on a page that won’t interest you or distract you from your mission. Multitasking will stifle your efforts. Your eyes should toggle between your notebook and subjects. Background noise is unavoidable and won’t interfere with your task. Keep a notebook handy This might be a personal preference, but I advise using a pen and notebook. You’ll record less, but you’ll record only the most pertinent information. You can find numerous studies on the value of handwriting in comprehension and learning. From my own experience, I find that writing by hand allows for a stronger understanding of the experiences I observe, which lead to more profound conclusions in the analysis phase. Put on your blinders In horse racing, jockeys will often put blinders on the horses to prevent them from looking to the rear or side to side. You need your own pair of virtual blinders when you engage in this exercise. Zero in on a person or small group of people. You’ll be tempted to peek over at other people or groups as the target you observe meanders through its spicy and mundane points of conversation. Don’t stare at your subjects; that’s creepy. Glance every so often and use your peripheral vision. Maintaining a laser focus for long periods will challenge you, but it is a skill that will help you in any endeavor that requires concentration. Be attentive Now we get to the fun part. You’ve met the pre-requisites. You’re ready to people-watch. Make use of your senses: sight, sound, scent, and intuition. You’ll have to do without touch for obvious reasons. 
That said, if you’re actively engaged, you can feel the tension, connection, and disconnect. If you’re observing two people in conversation, it’s natural to focus on their words. Pay attention to changes in pitch. Notice the subtle changes in their body language. Did someone frown after a comment or arch an eyebrow? Did one of the participants place his coffee cup on the table a bit harder than necessary? Did you notice a power dynamic? Perhaps someone sunk into their chair after a disparaging comment. We often allow these queues to pass with barely a hint of conscious awareness, but this information will prove critical when you get to the next stage. Scribble all of this information in your notebook. Be sure to write in chronological order. If you don’t, you’ll be sifting through a jumbled mess. I prefer to write on every other line in my notebook. This allows me to insert notes or fill in something later. I’ll use the “^” character to identify where the extra verbiage fits in. See my example below (excuse my sloppy writing). I used the “^” to insert a question Read through your notes when finished and fill in the gaps or expand on some of your shorthand. It’s critical to do this step now while the information is fresh in your memory. Analyze I journal at night and use that time to analyze my observations. It’s this phase where you draw conclusions and glean the lessons. Read over your notes. Don’t fill in details at this point. Too much time has elapsed. If you find a significant gap, make a note of it so you’re more attentive on your next effort. Glean the lessons Summarize the key points of your observation. Condense your earlier notes into a few lines about your experience. What emotions did I observe and what triggered them? If you paid attention to body language and the nuances of speech, you should be able to discern general emotions. You may not be able to determine specific labels (shame, anger, joy), but you should be able to pinpoint degrees of positive, negative and neutral. How would I have responded in the same situation? Pretend that you were one of the participants and run through the scenario. How would you have reacted to the same stimuli? How have I responded in past similar situations? This is a reality check. The answer to the previous question often yields how you wish you would respond in that circumstance. You may not find a situation that exactly matches what you observed, but you should find similar conditions if you peer into your memory banks. Compare your answers to the previous questions. Answer two of the following questions. What did you learn, confirm or disprove? What did it show you about human nature? Did it make you question a current belief? How so? What can you conclude as a result of this experience? Bonus: Write about it In my early days of writing, I would use the results of this exercise as input for my stories. I still do that though not as much. Writing helps crystallize the lessons. You’ll also notice that patterns emerge. Situations repeat themselves. The individual details won’t match, but generalized situations recur. You will also find most people react the same way to similar circumstances. Not always, but it happens enough to give the appearance of a pattern. You’ll also understand how you act in these situations. By playing these scenarios in your head, you’ll recognize them when they occur. You can then act with intention rather than reflex.
https://barry-davret.medium.com/how-to-become-self-aware-by-indulging-in-this-beloved-pastime-7758e2e5cf01
['Barry Davret']
2019-03-31 01:26:00.754000+00:00
['Life Lessons', 'Inspiration', 'Self Improvement', 'Mindfulness', 'Creativity']
Common mistakes to avoid when developing your app
1. Including too many stakeholders in the decision-making process Gathering insights is a critical part of making an app that satisfies the needs of various types of users. But trying to leverage input from a dozen stakeholders, all with different opinions, makes your decisions unbelievably tough. You’re stuck taking everyone’s input into consideration and trying to cover all the edge cases, all before the initial launch. To quote Seth Godin: “As long as you want to please everyone, you won’t please anyone”. Focus on your main target group, and the main need you are addressing, and meet that need. Your app requires a solid foundation to add features enhancing your product, making it better and more usable. If you’re struggling to find one common vision, a Product Design Workshop might be a solution to sit down with your stakeholders and brainstorm. But this time, with a UX design expert. An unbiased, impartial expert is able to elicit viewpoints on your product that you haven’t thought of before and give you a bird’s eye view of your business, helping you set the project priorities. 2. And simultaneously, ignoring user research Founders launch products, because they identified a problem and they found a new or a better solution to approach this problem. Often, this is in a field they are experts in, and the app’s initial scope and features are decided based only on their assumptions. It’s a natural flow of things, but at some point, these assumptions need to be evaluated and adjusted to your actual users’ needs. For a start, a group of 5–10 prospective users for an in-depth interview is enough and better than no research at all. The sooner you do it, the better for your app and for your budget. A case from my personal experience: we were building a platform to ease out the process of business travels. We began with an assumption that the travelers wanted assistance with both: the search and the booking. We decided to go live with the simplest version of the product, and it quickly became apparent that the users don’t mind the searching part (they actually loved it!), but the booking part is their main pain. Had we not known that, we would have spent a lot of time and money on enhancing the feature that wouldn’t bring any value to the end-user. Lessons learned: get feedback from prospective users, and be open to change your concept based on that feedback. 3. Being caught up in a never-ending improvement loop An 80% product that is out there, is better than a 100% product that is not released at all. The hard truth is that it’s impossible to have a perfect digital product anyways. We learn things after the launch, once the product is actually being used by its users. So we have to adjust and change things in any case. In my experience, the drive for perfection often manifests in the product team going from iteration to iteration, improving, changing, and tweaking, without the courage to set a stop and actually launch the product. Take that leap of faith and go for it. And if you don’t believe me, trust Reid Hoffman, the founder of Linkedin: “If you are not embarrassed by the first version of your product, you’ve launched too late.” 4. Going cheap. And at the expense of everything else. Understandably, budget, and money are a huge concern. And with the Agile framework that most software development companies work in, it’s clear you want to squeeze in as many features as you can into your backlog for the least price. But in practice, it can turn into yet another outsourcing horror story. 
Let me tell you one. One of our clients outsourced their app to a contractor promising to deliver all their desired features within a fixed budget. They got a product that looked different to what they had agreed on and had several features that simply didn't work. We took over and decided to go for the simplest version of the app, but a working one, so it could be readily tested with the users. My advice is: go simple over cheap. Have fewer features, but working ones. So you can go live and actually sell a product that doesn't crash all the time.

5. Neglecting to think about how you will be making money

You may be surprised how many of the people I talk to who want to build an app have no idea, or only a faint one, of how they will make money. This often gets forgotten amid the excitement of creating a new product which "users will definitely love".
https://medium.com/elpassion/common-mistakes-to-avoid-when-you-develop-your-app-d137edf623b4
['Natalie Pilling']
2020-07-28 11:07:11.959000+00:00
['Startup', 'App Development', 'Business']
Why Monoliths in the Middle of Nowhere Won Marketing This Year
The Possible Truth

The story behind the monoliths has likely been solved, and here's a possible explanation for these mysterious events. After weeks of the monoliths popping up, then disappearing without a trace, somebody has finally claimed responsibility for them. The Most Famous Artist took to Instagram to seemingly claim responsibility for the installation of the monoliths, while also flogging each one for a whopping $45,000 USD. So, if you're in the market for a big silver monolith in the middle of nowhere, you know who to call.

Image by The Most Famous Artist's Instagram Page

The Instagram account is a collective of stunt artists who all work under the title 'The Most Famous Artist', which was originally founded by Matty Mo (the guy behind the "Hollyweed" stunt). I'm not quite sure who the target demographic is for the $45,000 hunk of metal, or if there has been any luck selling them yet. But you've got to give it to them for trying.

"I am not able to say much because of legalities of the original installation," Mo told Mashable when asked about the monoliths. "I can say we are well known for stunts of this nature and at this time we are offering authentic art objects through monoliths-as-a-service. I cannot issue additional images at this time but I can promise more on this in the coming days and weeks."

Obviously, there's no real way to prove if Mo and his team are actually behind this stunt, but this is the best possible explanation we've got so far, so I'm going to accept it.
https://medium.com/better-marketing/why-monoliths-in-the-middle-of-nowhere-won-marketing-this-year-6bdb67018bba
['Nitish Menon']
2020-12-09 16:32:19.660000+00:00
['Memes', 'Business', 'Marketing', 'Brands', 'Social Media']
Sports Needs a World Sustainability Agency
Sports Needs a World Sustainability Agency

The World Anti Doping Agency showed the world what it could look like

Photo by Jesse Collins on Unsplash

In Germany, my home country, the state is governed by three pillars: the legislature, the executive, and the judiciary. They build the foundation of the state and ensure democracy. Because they are distinctly separated, they can self-regulate the state. Contrary to politics, sports has a much different structure. The legislature, executive, and judiciary lie mostly in the hands of the clubs and associations. When it comes to sustainability, sports does not have an agency that controls their actions and hands out punishments for their inaction. However, when one looks at how successful the fight against doping in sports has been, it becomes clear what needs to be done: sports needs an independent World Sustainability Agency.

In my previous article, I discussed many instances where the biggest sports organization in the world, the International Olympic Committee (IOC), has failed to deliver the promised actions concerning sustainability. Recently people have voiced doubts about the current system, mainly because multiple big sports associations have been repeatedly rocked by scandals left and right. From the Salt Lake City bribery scandal surrounding the 2002 Olympic Games to the FIFA corruption case in 2015 and the Festina doping scandal in 1998, sports has a rich history of power abuse. While the former two scandals concerned individuals, the doping scandal attacked the integrity of sports itself. The Tour de France and the entire sport of cycling still suffer from the aftermath of the doping case. And while the cyclists were the scapegoats, they were far from the only athletes who systematically doped.

The IOC knew then that if it did not act, sports would lose the trustworthiness and integrity it needed to attract viewers. Therefore, in 1999 it founded the World Anti Doping Agency (WADA), an independent organization with the task of combating systematic doping. The vision of WADA is to create "a world where all athletes can participate in a doping-free sporting environment". Continuing, they made it their mission "to lead a collaborative worldwide movement for doping-free sport". Keep that in mind; I will come back to it later.

Since then the IOC has evolved from being a passive observer of doping towards actively working to combat it. It has since made it mandatory for all Olympic sports federations to join the WADA Code. Furthermore, it has even installed an independent court of justice: the Court of Arbitration for Sport (CAS). That court is crucial because it can decide on important cases and also hand out binding punishments. If an athlete tests positive for doping, (s)he can choose to appeal the decision at the CAS. If the appeal is denied, the CAS has the right to sanction said athlete.

After I learned about these existing structures during my studies, I noticed a pattern. The fight against doping and the fight for sustainability have more similarities than you think. When the public criticism around doping reached its height in the 1990s and threatened the integrity of sports, sports federations began to actively fight doping by establishing a set of binding regulations (the WADA Code) and installing an independent court to decide on punishments. Concerning sustainability, sports is desperately missing a set of binding rules, but it already has an independent court.
Therefore, I propose that the IOC needs to create a counterpart to WADA. To put it more simply, sports needs a World (Pro) Sustainability Agency.

The structures already exist

Towards the end of the Netflix movie "A Life on Our Planet" (10/10 movie, would recommend), the British explorer and new Instagram influencer David Attenborough tried to answer one important question: What can we do to protect the wilderness of our world? You ought to remember that many sports will go down together with the ecosystem. There is no surfing, sailing, or even swimming without clean waters. You cannot run or bike when the air is polluted. The list goes on and on. Sir Attenborough thought about the question for a second, then took all his 94 years of experience in this world and answered that we need international agreements. That is what the World Sustainability Agency would be, and therein lies the power of such an organization. The repeated failure of non-mandatory sustainability strategies in recent years is living proof that we need a universally applicable Code of Conduct.

The number one advantage this hypothetical organization would have is that nearly all the structures already exist. As I laid out before, sports already has experience with a mandatory Code (the WADA Code), and a court of justice has also already been created. This greatly reduces the starting cost. All the sports federations have to do is appoint a working group.

Controlling is key

One reason why sustainability is hard to enforce on a global scale is that most of the time it involves a political decision. Especially once penalties are handed out, things can get chippy. Sporting events provide one of the biggest platforms for countries to present themselves. No country wants to miss out on the opportunity to participate in events like the World Cup or the Olympic Games. If everybody wants to take part in those mega-events, then the organizing body (in this case the International Olympic Committee) has the power to define the rules of participation. Following that logic, if a country's sports federation does not meet the sustainability standards, it cannot participate in the Olympic Games. It is like standing in front of a club and talking to the bouncer. Whether you like it or not, to get in you must play by his rules.

That being said, sports has a rich history when it comes to political decisions. Numerous examples range from earlier actions such as the IOC's ban of South Africa due to its political system to the recent ban of Russia due to its systematic doping. These examples show that the world of sports is willing to make political statements if necessary. Thus, when the World Sustainability Agency flags a country for its insufficient sustainability practices, there will be a political debate. But as soon as the CAS pronounces a legally binding judgment, the punishment will be carried out.

The World Sustainability Agency must have this legitimacy. Otherwise, there would not be any improvement over today's situation, where no federation is held accountable for its failings. No one will take the agency seriously if it is not mandatory for all federations to join. But if the newly founded agency can make the rules as well as control their implementation, then a sustainability standard could become as powerful as the WADA list.

Photo by Taylor Simpson on Unsplash

The way to integrity

Finally, there is one more thing a World Sustainability Agency can do for sports: get back its integrity.
That being said, there are many great sustainability initiatives in sports. But for every good initiative, there are as many negative headlines. One example is the German national soccer team flying from Stuttgart to Basel even though it is only a three-hour train ride. What are the fans supposed to think? This negative publicity deprives sports of whatever integrity it currently has when it comes to sustainability.

We have to acknowledge that the younger generation, and increasingly people in general, care about sustainability. It is no coincidence that groups like Fridays for Future gained so much traction. Those groups will call the sports federations out on their failed sustainability targets. And once they make those failings public, the fans and viewers will take notice. Good luck explaining to climate-conscious people why you need to chop down trees in a protected rainforest area in Brazil to build a new golf course.

Admittedly, it may take some time, but once the majority of people care about sustainability, sports will face a problem similar to that of the 1990s. Its most valuable asset, its integrity, will be lost. Nobody will want to watch World Cups or Olympic Games, for the same reasons nobody wanted to watch the Tour de France after the doping scandal. Revenues and spectator numbers will collapse, and the bottom line will be affected.

However, if sports federations and leagues recognize their potential to act as pioneers in society and start being proactive when it comes to sustainability, those dystopian scenarios can be avoided. Sport is a big part of many people's lives, and by installing an independent World Sustainability Agency the federations would take an enormous step in the right direction. Countries will not want to miss out on the big sporting events. With a binding Code of Conduct, they will be forced to step up their measures and stage more sustainable sporting events.

I want to finish by proposing a vision and mission statement for the World Sustainability Agency.
https://medium.com/climate-conscious/sports-needs-a-world-sustainability-agency-ff720f3b86d4
['Tom']
2020-10-30 06:57:33.225000+00:00
['Sports', 'Sustainable Development', 'Sustainability', 'Climate Action', 'Climate Change']
How One DC Engineering Team Helped Hundreds of Businesses Access the PPP Loan
How One DC Engineering Team Helped Hundreds of Businesses Access the PPP Loan

Engineers around the globe are utilizing their tech skills to solve new problems brought about by COVID-19. We spoke with a local engineer at Upside Business Travel to get a pulse on how their team has adapted.

The coronavirus pandemic is causing mass disruption for individuals and businesses alike. In the tech world, teams are using their coding expertise to solve previously nonexistent problems. For example, one local DC startup that works in the travel industry, Upside Business Travel, has been impacted by the decrease in business-related travel. Their team quickly sought new revenue-generating projects and pivoted to support these goals.

We spoke with Christopher Rung, a Senior Site Reliability Engineer at Upside, to hear about his experience. Christopher has worked at Upside since 2017, building AWS infrastructure with Terraform and Kubernetes. He joined a team at Goal Financial that built an application to help small businesses apply for the Paycheck Protection Program (PPP), established by the CARES Act, from the Small Business Administration (SBA). The goal of the application was to enable businesses to quickly submit their application and receive much-needed money in a timely fashion.

We interviewed Christopher to learn more about how and why his team created the app and how it enabled hundreds of businesses to apply for a PPP loan. He told us about the problems they solved, the technologies they used, and how Upside's culture allowed them to succeed. The conversation below has been edited for length and content.

How did the team at Upside kick off this project?

My whole world changed April 2nd, when I started a brand new, never-touched-before project with a team of eager, intelligent, and hardworking colleagues. The team from Upside consisted of 2 infrastructure folks, 1 Product Manager, 1 Quality Engineer, 5 Full Stack Engineers, 1 Designer, 1 UX Writer, and 5 Customer Support/Marketing Specialists.

Since I'm on the infrastructure side, I was involved with the initial setup of the application and onboarding. There are many immediate needs that arise when you are starting from scratch. How should we communicate? Let's set up a Slack workspace and sign up for G Suite. Which cloud hosting provider should host our service? Let's use AWS. How do we track objectives to keep everyone sane? Let's use Notion. Each of these services needs to be configured, and everyone needs an account to access them.

We worked 12- to 14-hour days, seven days a week. It was exhausting, but I woke up feeling excited each morning. We iterated quickly and made great progress with such a lean and nimble team. I learned a tremendous amount.

What problems did the team solve by building this tool?

We are facing the largest spike in unemployment since the Great Depression. People have bills to pay and need to collect their paychecks. The PPP grants 100% forgiveness for employers who use their loan money to keep their employees on the payroll. Unfortunately, it has been a stressful and confusing nightmare for many to apply for their desperately needed loan. Banks are unprepared to handle the flood of applications and are disproportionately favoring their biggest clientele (e.g. The Lakers). The Mom-and-Pop shops who have had decades-long relationships with their banks are now being turned away, just as they need assistance the most. We want to help these people.
The tool that banks use to apply for a PPP loan, SBA's E-Tran system, is largely written in COBOL, a language invented in the 60s. Suffice it to say that COBOL wasn't designed with modern scaling requirements in mind. There's a great article in Forbes that discusses the rocky start of funding and the challenges that the banks are facing. After an employer fills out an application, the banks try to submit it, only to be met with timeouts and other errors from E-Tran. Due to these challenges, many banks were only able to process a few loans per day.

Congressional Bank, which serves the DMV area, is our lending partner, and had a backlog of a few hundred applications that they were struggling to submit. With the tool that Goal Financial built, Congressional Bank was able to clear out their backlog and process all of them. Since the focus was on small companies and independent contractors, it felt great that our work was helping those who needed it most.

I wasn't involved in our application's development, so I can't take credit for it, but I understand that we have been successful due to a robust queuing and retry system. We were seeing 80%+ error rates when requests were submitted to E-Tran, so we implemented queuing and retry logic to deal with those failures.

There is an interesting parallel between solving the challenges of this legacy system and our work at Upside. The business travel world is also reliant on legacy systems. Airline and hotel suppliers book reservations using SABRE, a technology that was also built in the 60s and written largely in COBOL. Upside uses SABRE to fetch air, hotel, rail, and rental car availability, and we've become used to these types of challenges. Our team was just switching from one crazy old system to another.

What tech stack did you all use to build the system?

Most of the team had expertise in AWS, so we chose it as our cloud provider. We used Terraform to build our AWS infrastructure, which includes EKS (Elastic Kubernetes Service, AWS' managed Kubernetes solution), networking, permissions, and databases. We used GitHub to host our code, CircleCI to build it, and AWS' ECR (Elastic Container Registry) to host the artifacts. Monitoring is handled by DataDog, and we have VictorOps set up to handle alerting.

This kind of experience, building a production-ready system from scratch that will be used to make a meaningful difference in the world, doesn't come often. I joined Upside after much of the groundwork was in place, so this was a great opportunity for me to fill knowledge gaps, especially since we leveraged much of what we built at Upside for our new work.

How did you balance creating a quality product while also pushing something out quickly?

Since a big focus of this project was to iterate quickly, we built two environments: a development environment, used for testing, and a production environment, which hosts the public site. Typically, there is a third environment, staging, wherein QE tests the product before promoting it to production. While the engineers and designers cranked out new releases to our continuously deployed development environment, we relied on an incredible quality engineer to test these changes before promoting them to production. One of the things I set up as soon as the infrastructure was in place was a "deployer" tool to allow the engineers to promote something from Dev to Prod independently.
This improved the team’s velocity, and freed up infrastructure time that would be spent on deployments. Our team was also very good at being communicative on Slack. We kept statuses up to date and weren’t afraid to tell the team when we needed to take a break. It was very much appreciated to have an awesome project manager who had our back and was good about keeping the work-life balance reasonable, despite the intense timeframe. Where does the project currently stand? It was a crazy few-week sprint upfront on the infrastructure side. Largely thanks to the magic of Kubernetes, things have been running smoothly since then. All the product people and engineers are now iterating and improving upon the actual product. Goal Financial was initially focused on providing people with an easy way to apply for their loan, but we also recently rolled out a tool that allows borrowers to track the forgiveness status of the loan. After submitting their application, employers want to know when they can expect their loan money to deposit into their account. What aspects of Upside’s culture allowed your team to succeed? The culture at Upside is very special. It’s a huge reason why I came on board and continue to stay on. Many companies claim to be “one big happy family,” but talk is cheap. Especially during this pandemic, our leadership team has demonstrated extraordinary care and compassion for us. From the start, they have made sacrifices to make sure we can keep our amazing team together. We were one of the first companies I know that closed our office to keep everybody safe. There’s an incredible level of respect and freedom that the company affords its employees. As an example, employees get unlimited PTO. There’s the freedom to take a vacation and come back when we’re ready. Having this freedom makes it easier to work hard when necessary. Since we know the company has our back, we are more willing to push through a busy stretch. Our People Ops team is fantastic at hiring people who are driven, self-motivated, and kind. They also help to foster a culture where co-workers can get along, be themselves, and truly enjoy working with one another. I am proud to call myself an Upsider.
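The interview does not include any of the team's code, and the sketch below is not it; but for readers curious what the "queuing and retry logic" Christopher describes can look like against a flaky upstream system, here is a rough, hypothetical illustration in Python (the function names and parameters are invented for this example, not Goal Financial's actual implementation):

```python
import random
import time

def submit_with_retries(submit_fn, application, max_attempts=8, base_delay=1.0):
    """Retry a flaky submission with exponential backoff and jitter.

    `submit_fn` is assumed to raise an exception whenever the upstream
    system (for example, an overloaded legacy API) times out or errors.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return submit_fn(application)
        except Exception as exc:
            if attempt == max_attempts:
                raise  # give up; a real system would route this to dead-letter handling
            # Exponential backoff with jitter, so a pool of workers does not
            # hammer the upstream system in lockstep.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 1)
            print(f"Attempt {attempt} failed ({exc!r}); retrying in {delay:.1f}s")
            time.sleep(delay)
```

In a queue-based setup, each worker would pull one application off the queue and call a function like this, so persistent failures are isolated per item instead of blocking the whole backlog.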
https://medium.com/hatchpad/how-one-dc-engineering-team-helped-hundreds-of-businesses-access-the-ppp-loan-hatchpad-d2e93b503c10
['Tim Winkler']
2020-06-04 19:32:40.201000+00:00
['Startup', 'Software Development', 'Programming']
Even Moderate Drinking Is Damaging Our Health, So Why Do We Do It?
Even Moderate Drinking Is Damaging Our Health, So Why Do We Do It? Refinery29 UK Follow Oct 14 · 8 min read By Eleanor Morgan PHOTOGRAPHED BY MEG O’DONNELL During lockdown, Georgie Hodge* turned her front garden into a ‘pub’. Neighbours would come with their own glasses and sit in spaced out chairs in front of the house. “For those first six weeks, it was a lifeline,” she says. “We’d get the music out, laugh and be silly while the world felt completely out of time.” Georgie, 34, has always considered herself “middle of the road” with drinking, which she qualifies as drinking regularly but moderately, with both dry spells and binges. “I am a bit of a gannet with booze, so I try to be careful,” she explains. As a freelance graphic designer whose workflow has completely shrunk because of the pandemic, the total loss of structure has tested her restraint. While on ‘pub’ duty one evening, Georgie drank five margaritas on an empty stomach. “I’d set out to drink one,” she says, half laughing. “But clearly it was about abandon. I wanted a break from reality. My hangover the next day was biblical. I lay in bed until 3pm, unable to move my head.” During this episode, her partner said, “I’m worried about you.” Georgie winces as she recalls it. “Her saying that destroyed me. I don’t consider my overall drinking habits to be problematic but maybe they are. Clearly, feeling that terrible isn’t healthy.” Over the summer, a large survey by King’s College London (KCL) and Ipsos MORI found that nearly a third of the UK public reported drinking more alcohol during the pandemic than they normally would. An increase in loneliness and emotional distress are likely drivers. It is understandable that loss of purpose and structure could lead to craving a break from reality. It is understandable that loss of purpose and structure could lead to craving a break from reality. Bucking the trend of both my Scottish and French ancestry, I have a woeful constitution for alcohol. I love the headiness and creeping warmth of one negroni; two will blur my vision and probably leave me green-gilled the next day. But during lockdown I drank every night: two canned G&Ts or two beers, usually. I live alone and, for eight weeks, it became a ceremonial coping mechanism; in those endless, stretchy afternoons, anticipating the buzz kept me going. Really, though, it was making me more anxious and my guts the site of warfare. I stopped. ( My grandmother died of alcoholism in a terrible way and I have an irrational fear that finally ‘getting into’ booze will take me down the same path.) My concern is not misplaced. In June, Colin Drummond, professor of addiction psychiatry at KCL, responded to the Ipsos MORI study: “There is extensive evidence that the population level of alcohol consumption is highly correlated with health harm. With a substantial increase in alcohol consumption during the COVID-19 pandemic, we can expect a surge in alcohol related ill health including alcohol-related liver disease admissions and deaths. This will place an increased burden on our already overstretched NHS.” This sounds alarming. So, too, does his warning that increased alcohol consumption is likely to increase mental distress and lead to an increased demand for mental health services. But maybe we should be alarmed. With ‘real’ life on pause, self-reflection is inevitable. Along with modern rituals like Dry January and Sober October, the pandemic may be an opportunity to properly observe our relationship with alcohol. 
Georgie suggested that Sober October encourages an “all-or-nothing attitude to booze, because it’s like a binge in itself — not that people would be comfortable admitting that.” This stayed with me, because talking about our own or others’ drinking can be very thorny. The great unknowns of COVID-19 are leaving us stressed and exhausted, so picking at something that brings people happiness — or helps soften the edges of a fraught mind — means that, understandably, people become defensive. The historical ubiquity of alcohol is an argument in itself: we’ve always drunk, so why the fuss? Is now really a good time to examine our drinking? Some scientists would argue that now is the perfect time. In August, a longitudinal study showed that moderate alcohol use is associated with decreased brain volume in early middle age. This is worth paying attention to: any notable loss of brain tissue will reduce the brain’s ability to function optimally. Former government drugs advisor Professor David Nutt has been researching the effects of alcohol for decades and memorably said that alcohol is more dangerous than crack cocaine. He is trying to invent a synthetic healthy alternative. “There is no level of alcohol consumption that is without risk,” he writes in his recent book, Drink? The New Science of Alcohol and Your Health. Interestingly, Nutt says that in societies which revolve around alcohol, only a small percentage of people develop a damaging relationship with it. “Typically, in first world Western countries, alcohol is consumed by over 80 per cent of all adults. Of that 80 per cent only about one-fifth get into problems with it.” Given how conflicting the information we receive about the harms of moderate drinking can be, the narrative of what is or isn’t ‘right’ is often written by the individual — particularly where stress is concerned. We may know, fundamentally, that bingeing regularly isn’t good but drinking for stress relief is utterly normalised; as instinctive as putting on a coat to go out in the cold. The physical image of a cold beer or an immaculate martini can wield power in the mind as a bookend to a stressful day, a symbol of enjoying friends’ company or a gentle domestic ritual with a partner. Where’s the harm? Again, the answer is rationalised by the individual. But if one drink becomes six on a regular basis, there is a deeper emotional motivation. The story of human beings’ love affair with alcohol goes back to a time before humans and talk of emotion. Our fondness for booze is rooted in evolutionary hardwiring linked to our fruit-guzzling primate ancestors. Ethanol released from rotting fruit on the forest floor would have been appealing in many ways: the funky smell made the fruit easier to find, the fermented flesh was easier to digest (meaning more precious calories absorbed) and the antiseptic qualities of the microbes in it would boost the primates’ immune systems. In essence, we have evolved to consume alcohol. Of course, our relationship with booze goes far beyond these innate urges. Alcohol alters our minds: that’s why we like it. Ethanol causes the release of serotonin, dopamine and endorphins in the brain: compounds that make us feel happy and less anxious. As a species, we turned ourselves from hunter-gatherers into farmers — some 12,000 years ago — because we wanted to get pissed. 
In his book A Short History of Drunkenness, author Mark Forsyth writes: “We didn’t start farming because we wanted food — there was loads of food around. We started farming because we wanted to booze.” Alcohol is an integral part of humanity; a cultural status symbol. From early human evolution, it has strengthened social bonds and tempered inhibitions. Gossiping and laughing with friends also triggers the production of endorphins in the brain, which, along with the alcohol itself, makes us feel great. But alcohol’s intoxicating power has always caused concern. Most societies have struggled to find a balance between drinking for pleasure and the often damaging effects of drinking too much. The social aspect of alcohol is very powerful, particularly for those who are introverted or socially anxious. Georgie feels she has “internalised a sense that I am more ‘fun’ if I am drinking; that people are more comfortable with the drinking, ‘fun’ Georgie.” Evidence that even moderate drinking can be harmful is mounting but Georgie’s is a familiar rationale. Matthew Birke* is 32 and works in advertising. He has always been quite socially anxious. For him, drinking and socialising are inextricable. “Meeting new people makes me feel on edge and tired. Booze has always mitigated that a bit,” he says. “But I had a shock when I started working in a media organisation at 23. The entrenched culture around drinking was new to me. It was a given that you’d be out most nights and hungover most days and I don’t think anyone really liked it. ‘The Sesh’ terrified me then and does now.” I ask why he thinks people still booze, even if they aren’t really enjoying it and find they struggle with the after-effects. His response will ring true for many. “The embarrassment of being called boring was a huge reason for taking part.” I’ve lost count of the number of times I’ve been called ‘boring’ in pubs, restaurants or at people’s dinner tables. Hearing the brayed phrase, “Oh, come on” was a reliable aspect of nights out for me for years and, of course, I internalised the idea that I am dull. But over time I realised that it wasn’t about my choices; rather, what they reflected back at people. At a dinner party last year, I felt a funny validation in replying to someone asking why I wasn’t drinking with, “Because I’m boring!” I wonder how many people have felt like Matthew in his workplace. We talked more about shame, which underpins many conversations about alcohol. “I see friends of mine operating socially, in a freer and more fluid way than me, and judge myself negatively as a result.” Again, booze helps. “The three-pint buzz is a very real thing,” he says. “But I think a lot of men, including me, are so dependent on booze to feel okay socially that they make ‘liking beer’ a big part of their projected personality. Shame abounds, but men often turn drink into a status symbol to cover it up.” This makes me think about how people talk about hangovers on social media. The common channelling of sweaty despair into memes and pithy one-liners is partly why I wanted to write this piece, because it sometimes seems like the dressing up of difficult emotions. Being catatonic on the sofa because you drank too much (two cans of Gordon’s ‘Pink and Tonic’ for me) the night before may, reasonably, make someone feel desperate to connect: with others feeling the same way so we don’t feel deviant, with feelings that eclipse the shame, with anything but our own thoughts. 
There can be camaraderie in hangovers, just as there is in getting pissed. There is no camaraderie if you’re alone. If we are to tackle harmful drinking behaviours in a meaningful way, public abstinence campaigns clearly start conversations. A collective call to pause and reflect is, broadly, a good thing, but it’s too binary. If we talk about behaviour we have to talk about emotion. Uncomfortable emotions — the urge to escape them, a difficulty in just ‘being’ — underpin many people’s relationship with alcohol and we have to think about why so many people are struggling to manage, or sit with, how they feel. It is a profoundly complex issue but the pandemic has shone a spotlight on what causes human beings the most distress — lack of money, ill health, lack of purpose, loneliness, the absence of community connections — and there is a live lesson here, at a private and public level, if we choose to listen. Looking at the potentially harmful, if understandable, things we use to help cushion distress is another step but an important one, because binge-shame-binge-shame cycles are emotionally corrosive. Perhaps the bigger question if we are assessing our relationship with alcohol is: what am I trying to escape? We might drink to forget that we’re anguished but, unlike the alcohol, the feelings won’t be flushed from our system. As Georgie reflects: “My therapist once said, ‘All the anxiety you feel before you drink is still there afterwards, but amplified.’” If you’re worried about your drinking, or about someone you know, you can talk to an advisor on DrinkAware’s live chat service . Alternatively, you can call Drinkline free on 0300 123 1110 (weekdays 9am-2pm, weekends 11am-4pm).
https://medium.com/refinery29/even-moderate-drinking-is-damaging-our-health-so-why-do-we-do-it-198efffc3437
[]
2020-10-14 16:02:30.670000+00:00
['Health', 'Alcohol', 'Covid 19', 'Drinking', 'Alcoholism']
Streaming Music is Ripping You Off
If you subscribe to a subscription music service such as Spotify or Apple Music you probably pay $10 a month. And if you are like most people, you probably do so believing your money goes to the artists you listen to. Unfortunately, you are wrong. The reality is only some of your money is paid to the artists you listen to. The rest of your money (and it’s probably most of your money) goes somewhere else. That “somewhere else” is decided by a small group of subscribers who have gained control over your money thanks to a mathematical flaw in how artist royalties are calculated. This flaw cheats real artists with real fans, rewards fake artists with no fans, and perhaps worst of all communicates to most streaming music subscribers a simple, awful, message: Your choices don’t count, and you don’t matter. If you love music and want your money to go to the artists that you listen to, consider this simple hack. It’s easy to do, breaks no laws, does not violate any terms of service, directs more money to your favorite artists, but doesn’t actually require you to listen to any music, and best of all, it could force the music industry to make streaming royalties fair(er) for everyone. Sounds good, right? So let’s cut to the chase. Here’s the hack: This September, when you aren’t listening to music, put your favorite indie artists on repeat, and turn the sound down low. You might be saying “Wait a second, turn the sound down? How the heck does that do anything?” Good question, let me explain. The Flaw in the Big Pool Streaming services (Spotify, Apple, etc.) calculate royalties for artists by putting all of the subscription revenue in one big pool. The services then take out 30% for themselves. The remaining 70% is set aside for royalties. Data: Actual Spotify numbers for premium subscribers in December 2014, per Section 115 disclosures. Source: Audiam This giant bag of royalties is then divided by the overall number of streams (aka “plays” or “listens”). The result is called the “per-stream royalty rate”. Data: Actual Spotify numbers for premium subscribers in December 2014, per Section 115 disclosures. Source: Audiam The problem lies in the fact that this “Big Pool method” only cares about one thing, and one thing only: the overall number of streams. It does not care even a tiny little bit about how many subscribers generated those streams. So why is this bad? You Are Worthless Imagine a hypothetical artist on a streaming service. Which do you think that artist would rather have: 10,000 fans who stream a song once, or one fan who streams it 10,001 times? Seems obvious, right? 10,000 fans is much better than one fan! But the Big Pool method, which only cares about the number of clicks, says the single person is worth more! So this guy… …is worth more than this huge crowd? The message to artists and fans is crystal clear: the only fans that matter are the ones who click a lot. Everyone else can suck it. Ass-Backwards This is bad for the artist, but astoundingly it’s even worse for streaming services: if each subscriber is paying $10 a month then those 10,000 subscribers would generate $1.2M in annual revenue, while the single user only generates a measly $120. Clearly the services benefit from getting more subscribers, not more streams, so why are they incentivizing streams and ignoring subscribers? 
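To make the arithmetic concrete, here is a minimal sketch of the Big Pool calculation described above. The subscriber count, price, and stream totals are invented for illustration; the only figure carried over from the article is the resulting $0.007 per-stream rate.

```python
# Minimal sketch of the "Big Pool" royalty method described above.
# The subscriber count, price, and stream totals are invented for illustration.

def big_pool_rate(total_revenue: float, total_streams: int, royalty_share: float = 0.70) -> float:
    """Per-stream rate = (royalty share of all subscription revenue) / all streams."""
    return total_revenue * royalty_share / total_streams

# 1,000,000 subscribers paying $10/month, streaming 1,000 times each on average.
subscribers, price, avg_streams = 1_000_000, 10.00, 1_000
rate = big_pool_rate(subscribers * price, subscribers * avg_streams)
print(f"per-stream rate: ${rate:.3f}")                       # $0.007

# An artist with 10,000 fans who each stream one track once...
print(f"10,000 casual fans are worth ${10_000 * rate:.2f}")  # $70.00
# ...earns less than an artist with a single fan who streams 10,001 times.
print(f"one obsessive fan is worth   ${10_001 * rate:.2f}")  # $70.01
```

Note that the number of distinct listeners never enters the payout; only raw stream counts do, which is exactly the flaw the rest of this piece picks apart.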
Even more backwards, the Big Pool method encourages the acquisition of heavy-usage subscribers, who are the easiest customers to get and retain (in fact most “music aficionados” are already subscribers), but offers little for light-usage subscribers, who are not only the hardest customers to get and retain, but are more profitable (by not requiring as much bandwidth) and most importantly dramatically greater in number. It’s as if a car dealership paid the biggest commissions to the employees who sold the fewest number of cheap cars, and completely stiffed the employees who sold lots of expensive ones! But Wait, It Gets Worse If the Big Pool rewards artists who get lots of streams, major labels can sign artists who can get a lot of streams. But what if artists aren’t the only ones getting lots of streams? Click fraud is rarely discussed in the context of streaming music, but it’s fairly simple for a fraudster to generate more in royalties than they pay in subscription fees. All a fraudster has to do is set up a fake artist account with fake music, and then they can use bots to generate clicks for their pretend artist. If each stream is worth $0.007 a click, the fraudster only needs 1,429 streams to make their $10 subscription fee back, at which point additional clicks are pure profit. But that’s assuming they even paid $10 for the subscription in the first place: it’s possible to purchase stolen premium accounts on the black market, making the scheme profitable almost immediately. The potential profits are substantial: At Spotify it only takes 31 seconds of streaming to trigger a royalty payment, which means as many as 86,400 streams a month can be generated, resulting in over $600 of royalties. At Apple Music the threshold is just 20 seconds, making it hypothetically possible to clear 129,600 streams and $900 in royalties in just one month! Awareness of click fraud in streaming music is so widespread that developers make apps to facilitate it. The services will tell you they work hard to make their systems secure, they pay bounties for people to find bugs, and once in a while they even catch and ban click frauders. But security researchers are not impressed, many people are not getting caught, and ultimately we have to confront the simple fact that there is no such thing as a foolproof way to prevent click fraud. If the amount of click fraud activity on Google, Facebook, and Twitter is any indication (estimated to be over $6 billion a year), the problem could be far worse than any of the services will admit, or possibly even realize, and there’s no way for artists or fans to determine how much revenue has been stolen. It’s like someone sucking the oil out from under your property: you don’t even know it’s happening. Click fraud is not the only way to cheat the system. One band made an album of completely silent tracks and told their “fans” to play the blank album on repeat while they slept. If a subscriber did as instructed the band earned $195 in royalties from that single subscriber in just one month. But if each subscriber only pays $10 in subscription fees, then where did the other $185 come from? It came from people like you. The media suggests that Spotify was the one being “scammed” by this “clever” and “brilliant” stunt, but in reality Spotify suffered no financial loss at all. The $20,000 that the band received didn’t come out of Spotify’s pockets, it came out of the 70% in royalties earmarked for artists. 
In essence what happened is every artist on Spotify got paid a little less thanks to an album with no music on it. To understand why, we need to talk about how “average” can be an illusion. Average Does Not Mean Typical One of the most misleading words used in the streaming music industry is the word “average”. You’ll often see streaming services bragging about how their “average” user is streaming x number of hours per day, particularly when they are pitching advertisers. But don’t be fooled by the word “average” here — it’s an illusion. Average does not mean typical. Think of it this way: imagine you are in a room with a random group of people. What is the average income of everyone in the room? It’s likely that roughly half will be above average, and the other half will be below average. Now what happens when Bill Gates walks into the room? Everyone in the room is below average now, thanks to Bill. The same effect is happening in streaming music: a small number of super-heavy-usage subscribers have raised the “average” usage to the point that most subscribers are now below average. We can illustrate this with a graph: To understand how heavy users wind up in control of your money, it helps to look at how royalties flow at the individual level: Every user pays $10 a month, which generates $7 in royalties. If the per-stream rate of $0.007 is determined by dividing overall revenue by overall plays, then simple math tells us the “average” subscriber is streaming 1,000 times (1,000 * $0.007 = $7.00). So if you stream 200 tracks in a month you will send $1.40 to the artists you listened to (200 * $0.007 = $1.40), and the remaining $5.60 of your $7 is now up for grabs. So who’s grabbing it? Well, let’s imagine a heavy user who streams 1,800 tracks in a month. As a result of all this streaming they send $12.60 in royalties to the artists they listen to (1,800 * $0.007 = $12.60). Since they only contributed $7 towards royalties, they are $5.60 short. Guess where that money comes from? You. It’s worth noting that many (if not most) of these heavy-usage “subscribers” are probably not individuals at all. They are actually offices, restaurants, gyms, hair salons, etc. Businesses like these can stream up to 24 hours a day — far more than you as an individual could ever hope to do. And they probably don’t share your taste in music either. But they pay the same $10 you do, so why do they get to decide where your money goes? It’s like you bought a CD and the store told you that you had to listen to it 1,000 times, or they will give your money to Nickelback. That’s fucked up. The Subscriber Share Method There is a better way to approach streaming royalties, one which addresses all of these problems, and it’s called Subscriber Share. The premise behind Subscriber Share is simple: the only artists that should receive your money are the artists you listen to. Subscriber Share simply divides up your $7 based on how much time you spend listening to each artist. So if you listen to an artist exclusively, then that artist will get the entire $7, but if you listen less they get proportionately less. As an example, if you listen to Alt-J 25% of the time, then Alt-J would get $1.75 ($7.00 * 25% = $1.75): Let’s compare this with the Big Pool: if you typically stream 200 tracks per month (that’s roughly 13 hours of streaming), then playing Alt-J 25% of the time would equal 50 streams. Since each stream pays a flat $0.007, the band will receive just 35 cents. 
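As a rough sketch of that comparison in code, using the article’s own numbers and ignoring how the artist’s cut is further split with labels and publishers:

```python
# Sketch comparing the two payout methods for the Alt-J example above.
# Figures come from the article; real payouts are further split with labels and publishers.

ROYALTY_PER_SUBSCRIBER = 7.00   # the 70% of a $10 subscription set aside for royalties
BIG_POOL_RATE = 0.007           # the flat per-stream rate from the Big Pool

def big_pool_payout(artist_streams: int) -> float:
    """Big Pool: a flat rate per stream, no matter which subscriber streamed it."""
    return artist_streams * BIG_POOL_RATE

def subscriber_share_payout(artist_streams: int, subscriber_total_streams: int) -> float:
    """Subscriber Share: each subscriber's $7 is split by that subscriber's own listening."""
    return ROYALTY_PER_SUBSCRIBER * artist_streams / subscriber_total_streams

# A subscriber who streams 200 tracks this month, 50 of them (25%) by Alt-J:
print(f"Big Pool:         ${big_pool_payout(50):.2f}")               # $0.35
print(f"Subscriber Share: ${subscriber_share_payout(50, 200):.2f}")  # $1.75
```

Under Subscriber Share the payout scales with that listener’s attention rather than with a global click total, which is why the same listening habit is worth five times as much to the band.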
(50 * $0.007 = $0.35) Click here to see how this looks in real life, with a real subscriber. But What About Click Fraud? A nice feature of Subscriber Share is that it is very difficult to turn a profit with click fraud: instead of turning $10 into $600, a fraudster would be turning $10 into $7, and would waste a lot of bandwidth while doing so. If the fraudster used stolen premium accounts (reducing their cost from $10 to $1 per account), they could still make as much as $6 per account, but that is nowhere near as attractive as making $600 is it? And the difficulty level to do this at scale goes way up. If the industry switched to Subscriber Share most click frauders would move to greener pastures. Mission Impossible: Minimum Wage Subscriber Share can also be a huge benefit to small bands just starting out. If a band has a respectable fan base of 5,000 fans then they need $12.06 from every one of these fans in order to earn the federal minimum wage for four people, $60,320. In years past they would sell their fans a CD. But now under the Big Pool they need an ungodly number of streams to make minimum wage: 8.6 million streams. This means every single fan has to stream the band’s music 1,716 times. Assuming a four minute song that’s over 114 hours of listening, and if their fanbase averages 200 streams per month then that means their fans would need to listen to the band 71% of the time for an entire year! Subscriber Share only requires the fans to listen to the band 14.36% of the time, so if the typical fan averages 200 streams a month, then just 29 streams a month is sufficient, and the fan will only spend 22 hours in total listening to the band’s music. This is far more plausible for a new artist. But intriguingly, Subscriber Share also enables fans to financially support an artist using even less effort: If a band can convince their 5,000 fans to listen to them exclusively for two months, the band will earn $70k, and the fans will only have to click once each month in order to do this. Subscriber Share enables listeners to directly support the artists they care about without having to expend extraordinary amounts of energy to do so. The result of Subscriber Share is that each and every fan winds up being far more valuable to artists. It honors the intent of the listener, and incentivizes getting more fans, bringing the goals of everyone (services, labels, artists and fans) into alignment. If you think about it, this is how most of the genres we love got started in the first place. Hip hop, jazz, blues, reggae, punk, grunge, etc, all came from a small group of musicians, and a small group of fans, supporting each other. Who was the biggest beneficiary of this in the end? The music industry. What Are We Waiting For? It boils down to two big obstacles: fear, and inertia. To be fair, the music industry has been on the wrong end of the economic stick for well over a decade now, and talking about changing royalty methods just as it seems like things are about to get better is understandably scary. The other problem is inertia. Institutions hate change, it’s expensive and hard, and you have to rethink everything attached to that change. Inevitably various special interests will arise and fight for the status quo. It can be very tricky to overcome their objections. So it is difficult for the music industry to change, even when they know it’s in their best interest. They are like a cat stuck in a tree. They got themselves up, and can’t figure out how to get down. 
If the industry is immobilized by fear, and can’t be persuaded to move in the right direction with logic, then one possible way to get them out of the tree is to make it even scarier if they don’t move. In other words: We need to scare the cat out of the tree. And that’s where our little hack comes in… A Silent Protest This September A critical aspect of streaming music services is that the services can’t tell if the volume is turned down. If the music is playing the “clicks” still count, even if no one is listening. This can be used to our advantage. Normally a typical subscriber can’t keep up with heavy users, in part because many of these heavy users aren’t even individuals to begin with: they’re actually offices, hair salons, gyms, yoga studios, and restaurants. But if typical subscribers streamed music 24/7, and just turned the volume down when they weren’t listening, then maybe they could catch up! And if these silent protestors streamed strictly independent artists, major labels would have to worry about the value of their streams decreasing! That could be enough to persuade them to reconsider the use of the Big Pool method, and if the major labels jump out of the Big Pool tree, the rest of the music industry will follow. Even a small number of people engaging in this silent protest will have a measurable impact: just doing it for one day will double most people’s monthly consumption, and doing it for one week will result in more streams than a typical subscriber consumes in a year! But obviously the more the merrier. So let’s throw the idea out there and see what happens: For the month of September, let’s stream indie bands 24/7 non-stop, with the volume turned down to one. Note: It’s recommended that you turn the volume low, but not all the way to zero, and you should change your selected indie artist on a daily basis (or even better, use playlists with multiple artists), so that you aren’t mistaken for a bot by the services. If this works, the music industry will be forced to make royalties fair(er) for all musicians and fans. If it fails, a couple of indie bands will get a bigger check than usual. What have we got to lose by trying?
https://medium.com/cuepoint/streaming-music-is-ripping-you-off-61dc501e7f94
['Sharky Laguana']
2017-01-09 19:49:56.474000+00:00
['Streaming', 'Music Biz', 'Music Business', 'Music']
The Real Tragedy in the Death of Ruth Bader Ginsburg
The Real Tragedy in the Death of Ruth Bader Ginsburg Why must the death of one 87-year-old woman throw the entire US political system into irreconcilable crisis? Because of the death of Justice Ruth Bader Ginsburg last Friday, one of the nine seats on the United States Supreme Court is now vacant. To fill this vacancy, the President of the United States must first nominate a candidate who then must be confirmed by the Senate. This period of selection and nomination is typically a months-long process involving several rounds of hearings for different candidates, leading up to a final confirmation vote. Nevertheless, it now appears the GOP intends to rush to fill the vacancy prior to the 2020 presidential election which is only 44 days away. Indeed, less than two hours after Ginsburg’s death was reported, the Trump administration had been reportedly making preparations to announce a nominee for the seat prior to the first presidential debate. This harkens back to a similar scenario which occurred during the runup to the 2016 presidential election, when a GOP-controlled Congress successfully barred the Supreme Court nomination of Merrick Garland under the pretense of ‘letting the voters decide’ who should appoint the vacancy left in the wake of the death of Justice Antonin Scalia. Senator Mitch McConnell, one of the most adamant voices against nominating a Supreme Court in an election year back in 2016, is now singing a different tune and promising that Trump’s nominee will receive a vote on the Senate floor. The unabashed hypocrisy appears to matter less to GOP leaders than taking advantage of an opportunity to pack the court. After all, appointing Supreme Court justices yields perhaps the most long-lasting impact sitting elected officials can have on the US political system. But right there’s the rub: we all have seemed to take for granted the inchoate, even self-destructive nature of our democracy. Now, I should be very clear that the GOP representatives now attempting to rush through a Supreme Court confirmation are no doubt unprincipled cynics worthy of the utmost derision. But these sorts of character condemnations do not take us very far. Instead, what we must face up to is that they are allowed to be unprincipled because of the political system that we also tacitly support — even and especially in our critique of their abuses of it. Put simply, when your political system runs on layers and layers of byzantine codes and procedures, miles of bureaucratic red tape, and an exceedingly arbitrary relation between the representers and the represented, you will have people at the top empowered to make decisions without recourse or accountability. My point, then, is that focusing too much of our energies on depicting the GOP as the bad guys tampering with our fragile democracy elides us to the fact that the system is fundamentally, constitutively broken. For one, the Democrats have given absolutely no reason they wouldn’t do the exact same thing should they be in the position the GOP now finds themselves in. The weak-willed insistence on “consistency” as the “basic principle of law” coming from former President Barack Obama notwithstanding, Democrats are hardly innocent when it comes to exploiting our political system in pursuit of their own self-preservation. We are talking about the party of corporate bailouts, mass deportation, drone wars, and the biggest oil and natural gas boom this country has ever seen, after all. But again (!) 
the point is not to simply tar and feather Democrats either. In their own special brand of unprincipled cynicism, they are being the politicians which this broken system allows them to be. It is useless to attack the characteristics or behaviors of individuals in lieu of an attack on the system itself. You will vote them out of office and they will be replaced with new representatives who will be compelled by this very same structure to act exactly the same as who they replaced. If you are upset with the behaviors of either the GOP or the Democrats, there is no other option: you have to attack the structure. In other words, you have to agitate in ways that makes the entire system tremble. You have to exceed the forms of ‘doing politics’ which the state has sanctioned and sanitized for you (voting, protest, etc.) and, in turn, invent new ways of making oneself heard, holding your political representatives accountable, and at the most fundamental level reshape the structures of power which oppress and exploit you. And, given the inertia of the structures of power you are faced with, the only way to meet it with an even greater force is all together, in shared struggle. As I see it then, the real tragedy is that, in mourning for what Ginsburg’s untimely death will mean for not simply the upcoming election but the next several decades of Supreme Court decisions, we treat as immutable the very system that makes her death such a valuable political token. So, instead of bemoaning the exploits of the current ruling party, I insist we must analyze and critique not simply their exploits but the conditions of possibility of those exploits. We have to, in other words, view present GOP behavior as symptomatic of a political system which in the first place allows it. What we should want, above all, is not to trade places with bad actors but more fundamentally for their acts to be structurally impossible to reproduce. Regardless of how anyone feels about Ruth Bader Ginsburg, I hope we can all agree that the death of one 87-year-old woman shouldn’t yield an irreconcilable juridical crisis warranting arbitrary and unaccountable political maneuvers from our nation’s leaders. In turn, I hope that acknowledgment of this fact can widen our perspective beyond ‘getting the bad guys out’ toward confronting and transforming the structures of power which let them be ‘bad guys.’ This means having a grander and longer-term political vision than simply getting Biden in office in November. We cannot let the election sap us of all our energies. We cannot let an only very slightly more respectable sitting president breed complacency for a system which is still, with or without Biden as president, fundamentally broken. We cannot, in short, get caught up in another round of musical chairs, replacing nominally bad guys with nominally better guys while leaving the conditions of corruption and cynical self-interest intact.
https://medium.com/discourse/the-real-tragedy-of-the-death-of-ruth-bader-ginsburg-af02e1b35569
['Aidan Hess']
2020-09-22 15:20:52.785000+00:00
['Society', 'Ruth Bader Ginsberg', 'Politics', 'Current Events', 'Congress']
‘Laughing Stock’: The Timeless Appeal Of Talk Talk’s Final Album
Tim Peacock Guided by their single-minded frontman, Mark Hollis, Talk Talk recorded a trio of career-defining albums during the late 80s and early 90s. The band hit on a winning formula in 1986 with the sublime The Colour Of Spring, but they took a radical turn into leftfield with 1988’s Spirit Of Eden and travelled even further out on 1991’s otherworldly Laughing Stock. Listen to Laughing Stock right now. Widely regarded as Talk Talk’s holy trinity, these singular, pigeonhole-defying albums are thrown into even sharper relief when you consider that EMI initially marketed Hollis’ team as a glossy, synth-pop act akin to labelmates Duran Duran. However, after the Top 40 success of 1982’s The Party’s Over and 1984’s It’s My Life, Hollis asserted creative control for The Colour Of Spring: a gloriously-realised widescreen pop record which spawned the band’s two signature hits, ‘Life’s What You Make It’ and ‘Living In Another World’. “The band locked themselves away” Talk Talk’s commercial peak, The Colour Of Spring yielded worldwide chart success and sales of over two million. However, the band shunned such materialistic concerns for 1988’s Spirit Of Eden, which was edited down to six tracks from hours of studio improvisation by Hollis and producer/musical foil, Tim Friese-Greene. A truly groundbreaking album flecked with rock, jazz, classical and ambient music, Spirit Of Eden attracted critical acclaim and cracked the UK Top 20, but Mark Hollis remained adamant that Talk Talk wouldn’t be touring the record. After dealing with time-consuming business-related issues, the band then left EMI and recorded their final album, Laughing Stock, for legendary jazz imprint Verve Records. As manager Keith Aspden told The Quietus in 2013, Verve offered Hollis and co the opportunity to further embrace the experimental approach they’d adopted while piecing Spirit Of Eden together. “Verve guaranteed full funding for Laughing Stock, without interference,” he said. “[The band] took full advantage of that situation and locked themselves away for the duration of the recording.” “It took its toll, but it got great results” By this stage, Talk Talk were ostensibly a studio-based project centred upon Hollis and Friese-Greene, but augmented by session musicians including longterm drummer Lee Harris. As Aspden suggests, they holed up in north London’s Wessex Studios (previously the birthplace of The Clash’s London Calling) with one-time David Bowie/Bob Marley engineer Phill Brown, where they stayed for almost a year honing the six tracks that make up Laughing Stock. The methodology involved was truly arcane, with windows being blacked out, clocks removed and light sources limited to oil projectors and strobe lights in an attempt to capture the correct vibe. “It took seven months in the studio, though we took a three-month break in the middle,” Brown recalled in 2013. “I guess from getting involved to studio recording, mixing and mastering took up a year of my time. It was a unique way to work. It took its toll on people, but gave great results.” “The silence is above everything” Brown wasn’t joking: Laughing Stock was painstakingly edited down to its 43-minute running time from a series of lengthy improvisational sessions. 
Hollis cited other genre-defying masterpieces such as Can’s Tago Mago, and Elvin Jones’ drumming on Duke Ellington and John Coltrane’s 1962 recording of ‘In A Sentimental Mood’ as influences upon the album, and his quest for perfection was further fuelled by his desire to capture the magic of spontaneity in the recordings. “The silence is above everything,” he told journalist John Pidgeon at the time of the record’s release. “I would rather hear one note than I would two, and I would rather hear silence than I would one note.” Less is certainly more where Laughing Stock is concerned. Opening track ‘Myrrhman’ commences with 15 seconds of amplifier hiss; the enigmatic closing number, ‘Runeii’, features swathes of ambient space; and the fascinating nine-minute centrepiece, ‘After The Flood’, is underpinned by droning, ethereal strings which only gradually drift into focus. However, while these tracks are arguably even more minimal in design than Spirit Of Eden, they’re offset by more quixotic songs such as ‘Ascension Day’ and ‘Taphead’, which make sudden, jarring leaps from gentle, quasi-ambience to rushes of coruscating noise. Taken as a whole, Laughing Stock can initially be a disorienting listen, but with repeated plays its bewitching beauty steadily seeps out, perhaps nowhere more so than on ‘New Grass’, the record’s most bucolic and linear-sounding track, which alone is worth anyone’s price of admission. “It will be valued long after” Housed in a memorable sleeve designed by long-term collaborator James Marsh, Laughing Stock was first released by Verve on 16 September 1991. Even though it didn’t contain a radio-friendly single or support from live shows, the album still briefly sneaked into the UK Top 30. With little fuss, Talk Talk disbanded shortly after, with Mark Hollis later releasing one final understated masterpiece, his self-titled 1998 solo album. Sadly, it proved to be the last album bearing his stamp before his untimely death, aged 64, on 25 February 2019. As is often the case with forward-looking artistic statements, Laughing Stock polarised critical opinion on release. However, a few of the more perceptive reviews, such as Q’s (“It might put Talk Talk heavily at odds with the commercial charts… but it will be valued long after such superficial quick thrills are forgotten”) proved prescient, as the album’s reputation has grown steadily with the passing of time. In recent years, artists as disparate as UNKLE, Elbow and Bon Iver have sung Laughing Stock’s praises, and it’s not hard to hear why. This bold, indefinable record is both a poignant swansong and very possibly Talk Talk’s crowning glory. Laughing Stock can be bought here. Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/laughing-stock-the-timeless-appeal-of-talk-talk-s-final-album-27d314c44a1a
['Udiscover Music']
2019-09-16 09:36:39.708000+00:00
['Culture', 'Features', 'Alternative', 'Pop Culture', 'Music']
Industry 4.0: Manufacturing Post-Pandemic
In this article, we are going to discuss the benefits, challenges and ethics of automation at scale. Image credits to Forbes Last December the world celebrated and we were all anxious to get started with 2020, as a new year comes with the promise of better fortune, but little did we know that we would end up in this situation. Highly social beings are being forced to keep their distance, wear masks and develop hygiene routines stricter than a hypochondriac’s — basically going against our nature. I would like to take a moment to express my condolences to the families who lost their loved ones to COVID-19; I can personally relate, because I too lost a dear family member to it. Now, after being in the eye of the tornado, we the survivors are rebuilding. In the wise words of Napoleon Hill: “When defeat comes, accept it as a signal that your plans are not sound, rebuild those plans, and set sail once more toward your coveted goal.” The pandemic made us completely rethink how we live and do business. There are many examples of changes that COVID-19 forced across multiple industries, but we are going to focus on manufacturing because it provides an interesting test case: an industry whose players will have to re-create themselves or not survive, and one where the pandemic has put everyone on a level playing field, so the underdogs, the Davids of the industry, now have a chance to take on Goliath. The Evolution of Manufacturing Let’s backtrack and understand the history of the evolution of manufacturing. According to the article “Industry 1.0 to 4.0: The Evolution of Smart Factories” by Apics and “Industrial Revolution — From Industry 1.0 to Industry 4.0” by Desoutter, the evolution of manufacturing is as follows: Industry 1.0 The First Industrial Revolution began in the 18th century through the use of steam power and the mechanisation of production. Water and steam-powered machines were developed to aid workers — big emphasis on “aid workers”. From the beginning, the goal was never to replace humans with machines but to aid them. Continuing… While thread used to be produced on simple spinning wheels, the mechanised version achieved eight times the volume in the same time. Steam power was already known. As production capabilities increased, business also grew from individual cottage owners taking care of their own — and maybe their neighbours’ — needs to organizations with owners, managers and employees serving customers. The use of steam power for industrial purposes was the greatest breakthrough for increasing human productivity. Instead of weaving looms powered by muscle, steam engines could be used for power. Developments such as the steamship or (some 100 years later) the steam-powered locomotive brought about further massive changes because humans and goods could move great distances in fewer hours. Industry 2.0 The Second Industrial Revolution began in the 19th century through the discovery of electricity and assembly line production. Electricity was easier to use than water and steam and enabled businesses to deliver power to individual machines. Eventually, machines were designed with their own power sources, making them more portable. This period also saw the development of a number of management programs that made it possible to increase the efficiency and effectiveness of manufacturing facilities. Division of labour, where each worker does a part of the total job, increased productivity. Mass production of goods using assembly lines became commonplace. 
Lastly, just-in-time and lean manufacturing principles further refined the way in which manufacturing companies could improve their quality and output. Industry 3.0 The Third Industrial Revolution began in the 1970s through partial automation using memory-programmable controls and computers. Since the introduction of these technologies, we are now able to automate an entire production process — without human assistance. Known examples of this are robots that perform programmed sequences without human intervention. This period also spawned the development of software systems to capitalize on electronic hardware. Integrated systems, such as material requirements planning, were superseded by enterprise resource planning tools that enabled humans to plan, schedule and track product flow through the factory. The pressure to reduce costs caused many manufacturers to move component and assembly operations to low-cost countries. The extended geographic dispersion resulted in the formalization of the concept of supply chain management. Industry 4.0 (Post-Pandemic) We are currently implementing the Fourth Industrial Revolution. Industry 4.0 connects the Internet of Things (IoT) with manufacturing techniques to enable systems to share information, analyze it and use it to guide intelligent actions. It builds on the developments of the Third Industrial Revolution. Production systems that already have computer technology are expanded by a network connection and have a digital twin on the Internet, so to speak. It also incorporates cutting-edge technologies including additive manufacturing, robotics, artificial intelligence and other cognitive technologies, advanced materials, and augmented reality, according to the article “Industry 4.0 and Manufacturing Ecosystems” by Deloitte University Press. This is the next step in production automation. The networking of all systems leads to “cyber-physical production systems” and therefore smart factories, in which production systems, components and people communicate via a network and production is nearly autonomous. Why is it crucial? As problems flood the market, so do the opportunities: entire industries and the players in them, no matter their size, are being forced to rapidly adopt technology to ensure their success as well as their survival. In the wise words of Jean Piaget: “Scientific knowledge is in perpetual evolution; it finds itself changed from one day to the next.” According to this Forbes article by Rohit Arora, we can clearly see that most of the organizations that fared best before and during Covid-19, and will continue to do so afterwards, were the ones that were either prepared with their digital transformation or were forced to undergo one during the pandemic and embraced it completely. And this theory holds true not only for Fortune 500 companies. From the statement above we can infer that organizations need to learn how to acquire data, process it and derive actions from the insights discovered. Data is growing exponentially, to the point that we can surely say that at no other point in the history of mankind has data been produced this way. While countries used to focus their economies on mining raw materials from the earth, we are now heading towards a world where those same countries are shifting their focus to a more valuable and perpetually growing raw material (data). Absurd amounts of data are produced daily, and we have only started to scratch the surface of what we can do with it. 
“We’ve been merging with tools since the beginning of human evolution, and arguably, that’s one of the things that makes us human beings.” — Franklin Foer This also serves to say that even legacy businesses that either aren’t publicly identified as tech companies or don’t identify themselves as such have to change the way they see and present themselves. A great example is auto shops: they are not seen as tech companies, but as cars become more and more technologically advanced and electric vehicles become more common, auto shops need to change the tools they use from pure manpower to data-driven digital tools. Major Benefits The major immediate and long-term benefits are many, but we will only focus on a handful of them: automated human and machine monitoring, increased production, improved quality, and reduced costs and defects. According to this article by Tommy Palladino, GE Aviation has a great story that sums up the points above. In collaboration with their software partner Upskill, they gave their mechanics Google Glass paired with a smart wrench to assist them in the assembly and maintenance of aircraft engines, which resulted in a 16% speed increase. Upskill has software called Skylight that provides hands-free instructions, in the form of process flows, images, videos, and animations, via the Glass display, as well as video. Challenges Just like any change, this one doesn’t come without resistance. Organizations are going to face one or more of the following challenges: lack of skillset, performance, privacy and security, over-automation, and infrastructure. In my opinion, the very first things organizations should be looking into are the re-skilling of their current staff (for those areas which are dying) and the acquisition of new talent, among which are data scientists and developers, so they can take maximum advantage of the data they produce or acquire and of software that can help interpret the insights derived from that data. One key component is performance: some data has a limited useful lifetime, so real-time processing is a must. An example of the importance of performance is Google’s initiative to apply an AI system from its subsidiary DeepMind to its data centres, which Google claims led to a 40% reduction in the energy used to keep servers cool. This system uses IoT edge devices to gather data; the algorithms learn heat patterns, predict how hot the servers will be hour by hour, and then use those insights to supply the right amount of cooling. Here performance matters: any delay can mean bigger costs. Another challenge is privacy and security, because now we have IoT devices scattered throughout the factory or building and the old-school private network infrastructure doesn’t work anymore. We now have to protect an infrastructure made up of networks, operating systems, applications, data and physical devices sending and receiving data from public servers. This setting naturally raises various privacy and security issues, and that’s where I believe cloud and hybrid cloud (synced cloud and on-premises infrastructure) services shine the most, because they provide you with a way to connect everything together in a safe, secure and cost-effective way. Finally, I want to express my concern about the risk of over-automation. The idea is to help humans work more efficiently, not to replace them. I know it’s enticing for businessmen and managers to maximise profits — AI and automation are terms highly associated with doing exactly that. 
For more information, I go over the topic of over-automation in detail in my previous work: “AI & Automation: A Take Over or Symbiosis?”. Ethics of Automation Why organizations should be careful with biased production processes. AI can help reduce bias, but it can also increase and scale bias. Bias is present everywhere and it’s well documented in humans. We all have biases and we act on them either consciously or unconsciously. For example, diagnostic errors are the most common cause of medical errors reported by patients. They account for the largest fraction of malpractice claims, the most severe patient harm, and the highest total of penalty payouts. These errors can be caused by multiple factors: the doctor might be tired, under a lot of stress, and so on, which might lead them towards making a certain diagnosis. Much of the conversation about definitions of fairness has focused on individual fairness, or treating similar individuals similarly, and on group fairness — making the model’s predictions or outcomes equitable across groups, particularly for potentially vulnerable groups. Bias has been identified in facial recognition systems, hiring programs, the algorithms behind web searches, recommendation systems and so on. In contrast, there are multiple examples of well-implemented AI systems that improved decision making and also increased fairness. For example, Jon Kleinberg and others have shown that algorithms could help reduce racial disparities in the criminal justice system. Bias comes mainly from the data used to train the AI system and not from the AI system itself. Organizations have to make sure that their data carries as little bias as possible, so that it allows for good decision making and fairness. Several approaches to enforcing fairness constraints on AI models have emerged. The first consists of pre-processing the data to maintain as much accuracy as possible while reducing any relationship between outcomes and protected characteristics, or producing representations of the data that do not contain information about sensitive attributes. But the effort shows that removing bias from AI systems remains difficult, partly because they still rely on humans to train them. In the wise words of Olga Russakovsky of Princeton: “Debiasing humans is harder than debiasing AI systems.” Closing Thoughts I believe that the future looks even brighter than the current times. But first, organizations have to become thoroughly versed in technologies such as cloud, IoT, data science, software engineering and so on. The year 2020 put organizations of all sizes on a level playing field, and all must fight to survive and thrive by changing how they see themselves and how they interact with their customers, and finally by becoming more data-driven, because data is the new gold, diamond and oil.
https://medium.com/digital-diplomacy/industry-4-0-manufacturing-post-pandemic-9093e504fa13
['Prince Canuma']
2020-11-13 17:26:04.492000+00:00
['Industry 4 0', 'Manufacturing', 'Deep Learning', 'AI', 'Manufacturing Processes']
Four shades of greenhouse gaslighting
Four shades of greenhouse gaslighting Identifying ways climate action opponents attack our psyches Nowadays, gaslighting isn’t just a term for brightening street corners. (Source: Wikimedia Commons) From Casablanca to Notorious, the great actress Ingrid Bergman left her mark on cinematic history. But few, if any, would have guessed that her work would also contribute to the 21st century political phenomenon known as “gaslighting.” In 1941, she co-starred with Charles Boyer in the film Gaslight, which focused on how a husband (Boyer) tried to drive his wife (Bergman) insane, in part, by constantly flickering the gaslights in their house and then denying it was happening. The term started entering the linguistic bloodstream in the 1950s. At that time, television sitcom writers coined the gaslight treatment or gaslight bit to describe a scene wherein a character was fooled. But the expression took a dark turn in the following decade, when it was used to describe brazenly lying with designs to get others to either accept a fundamental untruth — or else drive them crazy. As one psychologist put it in 1969: “It is…popularly believed to be possible to ‘gaslight’ a perfectly healthy person into psychosis by interpreting his own behavior to him as symptomatic of serious mental illness.” Nowadays, “gaslighting” is a common term used to describe verbal duplicity by politicians. Opponents of climate action often employ the tactic. Gaslighting is particularly effective on this topic because as one article on the World Economic Forum website put it, we are “wired to fear only short-term threats.” Our unwillingness to look to the far horizon, the story explained, is a key reason “why we ignored climate change.” In other words, if the danger is not immediate, many are more than willing to be sucked into gaslighting rhetoric. We recently saw this path of least resistance in Australia, where the recent election was expected to be a referendum on global warming. Instead, voters “shrugged off the warming seas killing the Great Barrier Reef and the extreme drought punishing farmers,” according to the New York Times. Rather, they “re-elected the conservative coalition that had long resisted plans to sharply cut down on carbon emissions and coal.” Global warming is broadly credited with endangering the iconic Great Barrier Reef. But Australians recently opted to vote for a coalition that doesn’t prioritize climate action. This suggests how many might be susceptible to gaslighting on this subject. (Source: Pixabay.com) Perhaps, many of us are unconsciously allowing ourselves to be gaslighted on this existential issue. But that’s not acceptable. “Gaslighting only works when a victim isn’t aware of what’s going on,” psychologist Dr. Marie Hartwell-Walker explained. “Once you become alert to the pattern, it will not affect you as much.” For those who want to fight back against assaults on our collective understanding of global warming, the first step is to identify the different varieties of climate gaslighting. To that end, here are four shades of global warming gaslighting you should look out for: 1. Straight up denial This is the most obvious and well-recognized form of climate gaslighting. Take, for example, the rhetoric of House Majority Whip Steve Scalise. Earlier this summer, when Rep. Scalise went on CBS This Morning, anchor Tony Dokoupil questioned him about how the Gulf of Mexico engulfs a football field’s worth of land in his home state of Louisiana every hour. 
When the interviewer connected this topic to the scientifically documented climate crisis, Scalise pushed back with long-debunked arguments. “First of all, we do know that the Earth’s temperature changes–it goes up and down,” he said. He then doubled down, claiming that, rather than climate-induced rising sea water, this land loss was mostly due to “coastal erosion.” Rep. Scalise is not the only member of Congress to maintain the denier’s position. Sen. Jim Inhofe of Oklahoma is most notable for taking climate gaslighting to the extreme. Beyond once brandishing a snowball on the House floor to argue that a spell of cold temperatures in D.C. proved climate change didn’t exist, Inhofe, who wrote a book called The Greatest Hoax: How the Global Warming Conspiracy Threatens Your Future, has relied on classic gaslighting language to explain that it’s us, not him, who are foolish, because we believe in global warming. “You know, our kids are being brainwashed?” Inhofe said in an interview in 2016. “The stuff that they teach our kids nowadays, you have to un-brainwash them when they get out [of school].” Sen. Jim Inhofe of Oklahoma (above) is a staunch opponent of climate action. (Source: Wikimedia Commons) 2. Questioning humanity’s role For those who recognize that denying climate change is a credibility bridge too far, another gaslighting option is to suggest that global warming exists but claim that it isn’t humanity’s fault. Sen. Marco Rubio, whose home state of Florida is heavily impacted by the climate crisis, went that direction in a CNN interview last year. Rubio said: “I can’t tell you what percentage [of humanity’s behavior] is contributing and many scientists would debate the percentage is contributable to man versus normal fluctuations.” This approach relies on a popular gaslighting maneuver: throwing in positive reinforcement to confuse you. Agreeing that the climate is warming appears to offer a concession. But pivoting to the question of humanity’s precise contribution to the problem is disorienting. “This is a calculated attempt to keep you off-kilter–and again, to question your reality,” Dr. Stephanie Sarkis, author of Gaslighting: Recognize Manipulative and Emotionally Abusive People–and Break Free, wrote about this tactic. The reality check here is that scientists do agree on humanity’s role in global warming. “Nothing is 100 percent certain in science, but the reports from the Intergovernmental Panel on Climate Change (IPCC), which summarize the state of science, express a 95 percent confidence that humans have caused more than half and most likely all…recent global temperature rise,” said Vox.com’s climate maven David Roberts. “That is about as close to certain as scientists ever get about anything.” 3. Inevitability of climate change Rather than discuss root causes, there’s always this posture, which science journalist Erin Biba suggested in June is currently in vogue. “Climate inevitability is the new climate denial,” she wrote on Twitter. 
“Don’t fall for it.” This throw-your-hands-up-in-the-air position is best reflected in a September 2018 headline from the publication The Week: “Trump administration argues that Earth will inevitably be ruined by climate change, so we might as well keep using fossil fuels.” At the time, an environmental impact statement from the government conceded there’s global warming but argued that since we’re on pace to see temperatures rise seven degrees Fahrenheit by the end of the century, we might as well just keep burning oil until we reach the end of times. The Week distilled the report’s message this way: “So if that’s our fate … what’s the point in trying to fight it? It would be much more fun to go out with a fossil-fueled bang …” Certainly, most reputable scientists worry we are on this course. But very few believe that we should just keep running on coal, oil, gasoline and natural gas because we have no chance of averting the worst effects of climate change. From Scientific American’s “10 macro solutions to the climate crisis” to Curbed’s “101 tips for how to join the battle at home,” numerous ideas and resources contradict the inevitability contention. 4. Personally condemn climate action advocates If attacking the science or just giving up doesn’t work, dismissing environmentalists—and those who listen to them—as fools is an alternative. This is a popular gaslighting maneuver. “You will always be dismissed, judged, or told that you are crazy or a liar,” one Psychology Today article explained about this method. The goal here is to make anyone who believes in consensus science feel stupid. This effort occurred when 16-year old Nobel Peace Prize nominee Greta Thunberg visited England in April to speak out on global warming. At the time, “Eco-denialists” described Thunberg as “that weird Swedish kid” and referred to those who believed her position as “imbecilic” supporters, The Guardian reported. This effort to discredit Thunberg as a “millenarian weirdo,” among other things, only appears to be increasing. Greta Thunberg (above) has been attacked by numerous people who oppose climate action. (Source: Wikimedia Commons) Similar efforts took place in the United States after the Green New Deal was announced. Whether you support that endeavor or not, Rep. Alexandria Ocasio-Cortez wasn’t fixated on getting rid of “farting cows” as her opponents continually claimed. But the purpose of focusing on this statement was to gaslight her potential supporters into believing they’d be insane to back her. So what can we do? One of the most important things to recognize about gaslighters is that they aim to wear you down. “This is one of the insidious things about gaslighting–it is done gradually, over time,” wrote the Gaslighting author Sarkis. “A lie here, a lie there, a snide comment every so often…Even the brightest, most self-aware people can be sucked into gaslighting.” As such, environmentalists must be vigilant, remain strong and always be armed with the facts. Without that approach, the gaslighters may very well win.
https://medium.com/the-public-interest-network/four-shades-of-greenhouse-gaslighting-ed2492a3dd4a
['Josh Chetwynd']
2019-08-14 19:16:49.260000+00:00
['Language', 'Climate Change', 'Environment', 'Global Warming', 'Politics']
Never Experienced These Things? That’s White Privilege
You come to school for the first day of the new year. You're entering the third grade, excited to spend time with your old friends and make some new ones, too. The teacher seems nice and welcoming enough, and he or she starts taking attendance. When they get to your name, there are a couple of very wrong pronunciations, followed by an apology, and then the teacher asks you how to say your name properly. Once you do, they still can't get it right and say that they will learn it as the year goes on. Embarrassed and disappointed, you have just come to accept that your name, the name given to you by your parents with meaning and love, is just never going to be given its due in a classroom. You will always be the kid with the "weird" name. The name that is not "Steve" or "Jessica". This should make you feel unique, but in America, it makes you feel like an outcast. Having your name pronounced with ease and respect is white privilege.
https://medium.com/an-injustice/never-experienced-these-things-thats-white-privilege-52ddf0cc34af
['Shawn Laib']
2020-12-18 18:21:55.545000+00:00
['Politics', 'Society', 'Race', 'Racism', 'White Privilege']
Unhappy Endings
Let’s take our dataset for a test drive by first looking at something very simple. In our database of ‘finished’ TV shows, which ones have the highest overall series rating? Even if its ending did sour things, Game of Thrones still enjoys a healthy series rating of 9.4, surrounded by the kind of shows you’d expect to see at the top of such a ranking (Breaking Bad, The Wire, and The Sopranos), as well as some you may not have even heard of (Leyla and Mecnun is a Turkish dramady currently available on Netflix, whilst Avatar, Death Note, and Fullmetal Alchemist are all Japanese animé). Let’s go deeper on the top five series and track the individual episode ratings of each show’s run. The first thing that jumps out, as we might expect, is the precipitous fall in ratings at the end of Game of Thrones. We should also note that this drop was not inevitable — there are plenty of examples of high quality shows not fluffing their lines at the very last. Indeed, the other shows here pretty much hit their respective peaks during their final episodes (with The Sopranos’ slight downtick perhaps caused by its infamous ‘cut to black’ ending). The other less obvious thing that presents itself here is the relationship between the overall series rating (as defined in the original dataset) and the average of the episode ratings. We would expect these to be very similar, if not identical, though this doesn’t seem to be the case. For example, The Wire’s episode ratings, though consistently high, still average out at a whole point less than its overall series rating (8.25 vs 9.30, respectively). Indeed if we compare these two measures for the whole dataset, we see that the correlation between series ratings and average episode ratings is actually quite weak. The purple line here represents x = y. Very few titles sit on (or indeed especially near) it Weirdly, we see that six out of the seven shows with the biggest discrepancies between overall series and episode ratings are animations, and five of these are Japanese Animé (Naruto, Cowboy Bebop, Death Note, and two helpings of Dragon Ball Z). Looking at the episode trackers of the shows with the largest discrepancies, we see high standard deviations (and in the case of Batman and Dragon Ball Z, rather inconsistent quality on an episode by episode basis). We can conjecture that there’s a certain ‘nostalgia factor’ at play here — people might have fond ‘overall’ memories of a show that aired more than a decade ago, which would result in high ratings for the series as a whole. However, time might have erased memories of dud episodes, which then get lower individual episode ratings (we can assume that people rating on an episode-by-episode basis are more likely to have watched them recently with a more objective eye). The data does bear this out to an extent. There is a negative correlation between a show’s end year, and the discrepancy between its overall series rating and average episode ratings. The further towards the top of the chart a title is, the higher its series rating is compared to the average rating of its individual episodes. Older shows are likelier to have such discrepancies. So, given we’ve seen inconsistent series with high variance in episode ratings, which shows manage to keep up a consistent quality throughout their entire runs? We can answer this question by plotting a show’s episode ratings’ mean against its standard deviation.
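To make that last step concrete, here is a rough sketch of how the mean/standard-deviation comparison could be computed with pandas. The DataFrame and its column names (series_title, series_rating, episode_rating) are illustrative assumptions, not the article's actual code or schema.

# Hypothetical episode-level DataFrame with columns:
#   'series_title', 'series_rating', 'episode_rating'
import pandas as pd

def episode_consistency(episodes: pd.DataFrame) -> pd.DataFrame:
    # Summarise each show's episode ratings and compare them to its overall series rating.
    summary = (
        episodes.groupby('series_title')
        .agg(series_rating=('series_rating', 'first'),        # one value per show
             mean_episode_rating=('episode_rating', 'mean'),
             episode_rating_std=('episode_rating', 'std'))
    )
    # Gap between the overall series rating and the average episode rating
    summary['discrepancy'] = summary['series_rating'] - summary['mean_episode_rating']
    # Most consistent shows first (lowest episode-to-episode variance)
    return summary.sort_values('episode_rating_std')

# Correlation between series rating and average episode rating (shown to be weak above):
# summary = episode_consistency(episodes)
# print(summary['series_rating'].corr(summary['mean_episode_rating']))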
https://towardsdatascience.com/unhappy-endings-36e5fd157703
['Callum Ballard']
2019-07-16 15:13:40.161000+00:00
['Data Science', 'Python', 'Game of Thrones', 'Data Visualization', 'TV Series']
Five lessons from Sitra’s circular economy roadmap workshops
A national plan for implementing circular economy — easy, right? In 2016, the Finnish Innovation Fund Sitra launched a circular economy roadmap outlining the actions, pilot projects and goals to be reached by 2025 to make Finland a globally competitive leader in circular economy. In the fall of 2018, a review of this roadmap took place in a series of workshops to ensure that the roadmap was taking the nation in the right direction. I was invited to be part of the group of professionals participating, and together with representatives from ministries, cities, municipalities, industry federations, research groups as well as some of Finland’s largest companies, spent several fall mornings assessing the previous roadmap priorities, filling in gaps, and proposing new pilot projects to help Finland get swiftly on its way towards a truly circular economy. Photo by Štefan Štefančík on Unsplash So, what did I learn from sitting in a room with the key stakeholders making circular economy a reality in Finland? 1. The private sector is moving and the public sector needs to pick up the pace. Enticed by the business potential of being a leader in sustainability, the private sector seems to be chomping at the bit to get started on pilot projects and partnerships for circular economy. Meanwhile, the public sector lags behind in concrete action, seemingly because it is still unclear what that action should be. Many recognize that the public sector wields great influence over how circular models will play out in our society, and are crying out for more legislation, taxation or subsidies to make experimentation in circular models easier. It’s time for the public sector to take a page out of the private sector’s books, and move from planning to doing. 2. More cooperation is needed across sectors, but few want to take ownership. Regardless of sector, everyone recognizes that no one alone can build a circular model: the key is partnerships, across industries, government ministries, cities and between the private and public sector. However, there is an inherent problem with this: dilution of ownership and therefore, accountability. In the first few hours of being together, the discussion threatened to devolve into finger-pointing over who wasn’t doing enough or monologues on how much one’s own sector had done. Later, the proposed pilot projects and action items were, for the most part, dumped onto various ministries or public funding agency Business Finland (with a few notable exceptions, where private sector companies nominated themselves to take the lead on pilots closely affecting their industry). Is this a symptom of the public sector not pulling their weight or a general unwillingness of organizations to take ownership? Regardless, it’s time for players across industries and sectors to stop looking around at others. Instead, they must stick their own neck out for circular economy, even when it might seem daunting. After all, that’s how real leaders are made. 3. Circular economy education and assessment must be incorporated into all project funding processes. Perhaps the most crucial action items that emerged from the workshops were a need for more funding for circular economy projects and for more circular economy education to a wider audience. It is critical to assess all spending, public and private, on the basis of its contributions to circular economy and climate action. 
This would mean getting rid of specialized “green funds”, recognizing that if we want the country to be a leader in circular economy, we must shift all our investments to advance this goal. Professionals allocating funds would then need to be educated on circular economy, and most importantly, professionals across fields would need to understand, strategize and innovate for circular economy. This means getting academics and sustainability professionals out of their bubble, and bringing practical circular economy to all businesses and government bodies, as well as ensuring students are introduced to circular economy well before university. Hand-in-hand with this wider circular economy education, funding bodies and businesses alike will need to focus more on long-term thinking and scalability of circular economy when assessing projects and business models. 4. We need to be weary of hype without stalling action. It didn’t matter what sector or organization a participant represented — everyone was clearly excited about the potential of circular economy. Many saw circular economy innovation as a key tool to re-imagine our communities, grow business, and boost Finland’s image and competitiveness globally. This excitement was often reflected in project descriptions which referred to circular economy solutions as “win-win-win” for business, people and environment alike. But as I’ve previously written, win-win scenarios can be dangerous, ignoring solutions that might in fact be more optimal for people or environment than business would tolerate. In addition, the potential negative consequences and rebound effect of circular economy solutions were worryingly absent from the conversation. It’s easy to get caught up in the hype when preaching to a choir of sustainability enthusiasts. While experiments and pilots are where we can test out ideas, it is important that pilot projects are critically observed for secondary effects, and that we don’t allow circular economy to become a supplement to business as usual. 5. Circular economy may be the hardest, yet most vital thing, we have worked toward as a nation. The Sitra workshops provided a sampling of why circular economy work is so difficult: it requires stakeholders from very different backgrounds and with very different priorities to come together, often building an entirely new way of thinking and doing things. The entities we are working with are huge and require time to get moving. However, there is one thing we all agreed on, and that is that we do not have any other choice… and that we are running out of time. The move away from a linear system to a circular one is a requirement for achieving carbon reductions and ensuring a safe and prospering country, and planet, for us all. The clear, underlying sentiment was that the time for talk is over. The time for action is now. In the end, not one person questioned the goal — the only argument was on how we get there.
https://medium.com/pure-growth-innovations/five-lessons-from-sitras-circular-economy-roadmap-workshops-946d886b8a07
['Anna Pakkala']
2018-10-04 08:15:04.815000+00:00
['Government', 'Circulareconomy', 'Business', 'Sustainability', 'Funding']
This Week in Data Preparation (September 28, 2020)
This weekly post with news items from the data preparation market is brought to you by The Data Value Factory, the company offering Data Preparer. 12 links in this week’s post: 6 articles (on building data teams, big data in HR, CDOs, feature engineering, and artificial intelligence, by HSBC, EQT Ventures, Engagedly, Insperity,Singapore University of Technology and Design, EverythingBenefits, Teradata, Columbia University, Dresner Advisory Services, Dstillery, Forrester Research, eBay, and Landing AI), 5 company updates (by Facebook UK, Primary Engineer, Fivetran, dbt, Fishtown Analytics, Autodesk, Altair, Ellexus, Exterro, Stout, causaLens, CLS), and 1 capital raise announcement (by Ahana). The Data Value Factory — This Week in Data Preparation. September 2020 Image by Tumisu from Pixabay. Articles There’s no data science unicorn — building a data team at HSBC. Rahul Boteju, Global Head of Data Analytics at HSBC, was speaking this week at the Big Data LDN event, where he shed some light on what it takes to build an effective data science team that can scale. The many roles of big data in HR. Erik van Vulpen writes for the Academy to Innovate HR that even though HR data might lack volume and be largely static, it has enough variety and value “to generate valuable insights into the workforce” through business intelligence and HR analytics. Big data in HR could make hiring more equitable and improve diversity by reducing bias, Zoe Jervier Hewitt of EQT Ventures tells Protocol. “Big data with Natural Language Processing can help analyze the feedback, project reviews and overall talent profile data to build skill profiles of employees within the organization in near-real time, which can be used as a tool for workforce planning,” Srikant Chellappa of Engagedly tells Forbes. “When applied to recruiting, employers can utilize big data to better predict hiring needs, while improving their quality of hire and employee retention,” John Feldmann of Insperity tells Forbes. Regarding retention, “[w]ith the help of big data technology, algorithms can flag employees at risk of leaving by interpreting their online activity, profile updates, employment history, job performance, and payroll data,” Vikash Kumar writes for AIIM — The Association for Intelligent Information Management. HR professionals have “always relied on gut instincts using very descriptive data,” but they have the potential to make more objective decisions by incorporating big data, Jaclyn Lee, chief human resources officer at the Singapore University of Technology and Design, tells Human Resources Director. Having such insights from big data analytics and artificial intelligence is even more important, since “[w]ith the unemployment rate being as low as it is, HR leadership does not have much leeway to ‘get it right,’ ” Rachel Lyubovitzky of EverythingBenefits tells Forbes. HR also could introduce new data-focused positions, such as data detective, Richard Binder writes in Benefits Pro. The CDO’s Role in Leading Data-Driven Transformation. “Now is the time for organizations to rethink — and elevate — the role of the Chief Data Officer, providing them the authority and ability to take risks in order to embrace new opportunities that drive increased innovation. We’re at a critical stage where data must be placed at the forefront of the enterprise to create digital and data transformations, and CDOs will be crucial in making that mission of growth a reality.” writes Dr. Yasmeen Ahmad, VP of Global Enterprise Analytics at Teradata. 
Key steps in the feature engineering process. “Really, the feature engineering process is turning the data that you have into the most effective and usable version of what’s going to get at the question you want to answer,” said Hannah Pullen-Blasnik, a graduate research fellow at Columbia University and former senior data scientist at Digitas North America, a global marketing and technology agency. According to Brian Lett, research director at Dresner Advisory Services, feature engineering is a balance of art and science. “I think [feature engineering] is important because a lot of times we can get bogged down in information that’s not helpful for the problem at hand,” said Gilad Barash, vice president of analytics at Dstillery, a custom audience marketing consultancy based in New York. Mike Gualtieri, an analyst at Forrester Research, said feature engineering is the most important part of the machine learning process because it can make or break an algorithm’s accuracy. Artificial Intelligence Is Ready For Prime Time, But Needs Full Executive Support. As part of a massive operation with so much experience with AI, Mazen Rawashdeh, CTO of eBay, has plenty to say about the current state of enterprise AI. Artificial Intelligence Advances Food Safety. “Similar to a human, AI is very good at dealing with a lot of variations in whatever’s being looked at,” says Quinn Killough, senior business development manager for Landing AI, a company that provides end-to-end AI platforms for manufacturing. Company updates Facebook UK and Primary Engineer focus on data to tackle climate change. Eoghan Griffin, EMEA sustainability manager for Facebook, said: “Facebook understands the urgency of climate change and is determined to be part of the solution.” Dr Susan Scurlock MBE, CEO and Founder of Primary Engineer explains, “STATWARS is a programme designed to help young people make decisions based on data and using the data to make changes in their daily lives to have a positive impact on Climate Change. Chris Rochester, UK Director of Primary Engineer adds, “The opportunity for schools and the relationship with Facebook is hugely exciting. Fivetran Integrates With dbt for Automatic Data Transformations. “The importance of data analytics continues to be paramount for growing companies, and the exponential growth in data at our fingertips only means there is more to harness, store, and analyze to make revenue-impacting business decisions,” said Martin Casado, general partner at Andreessen Horowitz, an investor in both companies. “By integrating dbt with Fivetran, we are giving customers the ability to manage all their data pipelines within the Fivetran toolset,” said George Fraser, CEO and co-founder of Fivetran. This integration will make it easier for more data analysts to start using dbt, adopt the dbt viewpoint, and join the vibrant dbt community,” said Tristan Handy, CEO of Fishtown Analytics. “dbt allows us to set up repeatable data transformations. We can schedule jobs to create data tables for us that surface in downstream tables,” said Evin Anderson, data engineering manager at Autodesk. Why Altair Acquired Analytics Firm Ellexus. “Altair proceeds to extend its reach and capacities for HPC environments to support critical modern workloads including for data analytics, AI and advanced driver-assistance systems (ADAS),” said James Scapa, Altair’s chief executive officer and founder. 
Altair will also deepen its technical skillset as Dr Rosemary Francis, Ellexus founder and the chief executive officer will join its team to help Altair best utilise the technology. Exterro Announces Launch of Groundbreaking Data Source Discovery. “For many years we’ve heard our clients describe the countless hours it takes between in-house professionals, outside counsel, and IT to scope discovery,” said Chief Marketing Officer Bill Piwonka. “The burden on Legal and IT teams to effectively identify and preserve new sources of information within their organization has increased tremendously over the past few years, with the current work-from-home environment only exacerbating the issue,” says Ross Dubinsky, Director, Legal Management Consulting for global advisory firm Stout. causaLens Launches Causal AI Platform. “Businesses investing in the current form of machine learning (ML), including AutoML, have just been paying to automate a process that fits curves to data without an understanding of the real world. They are effectively driving forward by looking in the rear-view mirror,” explains causaLens CEO Darko Matovski. “The causaLens platform has enabled us to discover additional value in our data,” said Masami Johnstone, Head of Information Services at CLS, whose products help clients navigate the changing Foreign Exchange marketplace. Capital raise announcements Ahana Expands Seed Funding to $4.8 Million with Additional Funding Led by Lux Capital. Joining Cofounder and CEO Steven Mih; Cofounder and Chief Product Officer Dipti Borkar; and Cofounder and CTO David Simmen, the Ahana team now includes: Vivek Bharathan, Cofounder, Principal Software Engineer and Presto Contributor, Ashish Tadose, Cofounder, Principal Software Engineer and Presto Contributor, George Wang, Cofounder, Principal Software Engineer and Presto Contributor. “With data spread across many sources, particularly data lakes, a federated SQL approach is replacing the traditional data warehouse model and Presto is poised to become the new SQL query engine of choice for running across data stores,” said Brandon Reeves, Principal, Lux Capital. A week’s worth of manual data preparation in minutes. Thank you for reading our weekly post with news items from the data preparation market. Have you tried Data Preparer?
https://medium.com/the-data-value-factory/this-week-in-data-preparation-september-28-2020-e6e9af5807be
['Nikolaos Konstantinou']
2020-10-03 22:25:45.660000+00:00
['Analytics', 'AI', 'Data Transformation', 'HR', 'Data Science']
No-Hit Wonders: 20 Iconic Acts, Zero Top 40 Singles
No-Hit Wonders: 20 Iconic Acts, Zero Top 40 Singles Let us now show some respect for music’s most-surprising superstar shut-outs. Photo: Caligvla at English Wikipedia Drake, Taylor Swift, and Bruno Mars make it look so easy, but racking up number ones is hard work. The Boss himself, Bruce Springsteen, has never topped Billboard’s Hot 100 with one of his singles, a dishonor he shares with James Brown, Creedence Clearwater Revival, ELO, and The Pointer Sisters. At least he has a nice collection of Top 10s — 12 of them — to show for his recorded efforts. Meanwhile, the groundbreaking likes of Joni Mitchell, Led Zeppelin, and Bonnie Raitt have but one Top 10 apiece (“Help Me,” “Whole Lotta Love,” and “Something to Talk About,” respectively). The late, great Etta James didn’t have that many. Her chart failures included the pop standard “At Last,” which didn’t get past number 47. It’s even tougher for alternative rockers to grab a sweet spot in the U.S. Top 40. Iggy Pop, Roxy Music, Siouxsie & the Banshees, and White Stripes have each taken but a single trip there (via “Candy,” “Love Is the Drug,” “Kiss Them for Me,” and “Icky Thump,” respectively). That’s one more than the following superstars, all of whom have struck out repeatedly on the American singles scene. 1. Björk She may be too eccentric to fit comfortably into any mainstream niche, but the Icelandic diva certainly qualifies as one of music’s 20 most influential women of the last three decades. If only her U.S. hit list matched her trailblazer status. For several years, from the mid-’90s to the turn of the century, Björk was the Beyoncé of modern rock. Fellow off-center talents like k.d. lang, Joan Armatrading (see below), and Thom Yorke adored her. She even nabbed a starring role in director Lars von Trier’s 2000 film Dancer in the Dark, earning a Golden Globe nod, a Best Original Song Oscar nomination, and the Cannes Film Festival’s Best Actress prize. Her albums won her critical acclaim and a slavish cult following. Several even went gold and platinum. But neither as frontwoman of The Sugarcubes nor as a solo act has Björk ever managed to creep into the Hot 100’s Top 40. In fact, she’s only charted twice, with 1993’s “Big Time Sensuality (number 88) and 2007’s “Earth Intruders” (number 84), which still stands as her biggest U.S. hit. 2. Blur The ’90s British invasion didn’t catch on quite like the one in the ’60s or the new-wave one that launched the ’80s, but Blur vs. Oasis was still one of the decade’s big match-ups. They were Generation X and Y’s Beatles vs. The Stones, with Blur cast as the alternately poppier and more experimental fab four, and Oasis as harder-rock heirs to Mick Jagger and company. In the U.S. arena, the knockout went to… Oasis. Their albums sold better, and they went all the way to number eight on the Hot 100 with “Wonderwall” in 1995. Meanwhile, Blur logged a string of big UK hits, but only two crept into the bottom half of the Hot 100. Blur frontman Damon Albarn would have to create the cartoon band Gorillaz to land his only U.S. Top 40 single to date, 2005’s “Feel Good Inc,” which felt fantastic at number 14. 3. Bob Marley He’s one of the most celebrated artists in the history of music, right up there with fellow gone-to-soon legends like Elvis, Lennon, and Marvin. But like straight-up reggae, Marley never really caught on in the U.S. mainstream during his lifetime. After his 1981 death, he finally scored the U.S. smash that had eluded him in life. 
His 1984 Legend compilation, which has launched many a frat party in the decades since its release, is one of the best-selling albums ever, having well surpassed diamond status (10 million copies sold) in America. But only one of his singles, 1976’s “Roots, Rock, Reggae,” managed to chart, peaking at number 51. His legacy here remains bigger than any of his singles. Later reggae acts that managed to score U.S. number ones, like UB40 and Maxi Priest, wouldn’t exist without Marley’s influence. Neither, of course, would his son Ziggy, who sneaked into the U.S. Top 40 at number 39 with “Tomorrow People” in 1988, and his grandson Skip, who became the first Marley to go Top 10 when his 2017 Katy Perry collaboration “Chained to the Rhythm” locked onto number four. Marley did live to enjoy two huge Top 40 triumphs, if only by association. Eric Clapton took his rock version of Marley’s “I Shot the Sheriff” to number one in 1974, and Stevie Wonder’s 1980 number-five single “Master Blaster (Jammin’)” was a musical tribute to Jamaica’s greatest native son. 4. Grace Jones Her status as an enduring gay icon will have to do. The woman behind post-disco classics like “Pull Up to the Bumper,” “My Jamaican Guy,” and “Slave to the Rhythm,” none of which charted on the Hot 100, only made the U.S. hit list three times, never going higher than number 69. That’s where her 1986 single “I’m Not Perfect (But I’m Perfect for You)” peaked. Shocking, right? Would there even be a Rihanna without the antecedent of Grace Jones? She also can take credit for several hip hop hits. Her 1983 single “My Jamaican Guy,” which Jones wrote solo, has been sampled by a number of rappers and R&B artists, including LL Cool J, who used it for the musical backdrop of his 1996 single “Doin’ It” and watched it soar all the way to number 9 on the Hot 100. Despite never making the Top 40 on her own, Jones’s voice did enter the Top 10 once. That’s her speaking during the bridge of “Election Day” by the Duran Duran spin-off Arcadia. The 1985 single made it to number six, but alas, her cameo went uncredited on the single’s cover. 5. Grandmaster Flash and the Furious Five Musical pioneers rarely get the chart love they deserve. They lay the foundation for the success of a genre and often watch those who follow achieve greater commercial success. Grandmaster Flash and the Furious Five were contemporaries of The Sugarhill Gang, who took rap into the American Top 40 for the first time with 1979’s “Rapper’s Delight.” Unfortunately, the sextet never managed to go quite that far. They enjoyed moderate success on Billboard’s R&B singles chart, but only 1982’s “The Message” made an appearance on the Hot 100, rising to number 62. Presaging the politicized rap of Run-D.M.C., Public Enemy, and NWA, all of whom would hit the Top 40 at least once, “The Message” came at a time when message music went out of style — especially on the Hot 100. With only two studio albums to their name, they still made a lasting impression. In 2007, Grandmaster Flash and the Furious Five became the first hip hop group ever to be inducted into the Rock & Roll Hall of Fame. 6. Joan Armatrading Love makes no sense and neither does this: One of the most talented British singer-songwriters of all time has hit the Hot 100 just once, and she did it with an uncharacteristically rocking single. “Drop the Pilot” flew to number 78 in 1983, introducing American pop fans to the Saint Kitts-born Brit who had already been releasing albums for 11 years. 
Despite her lack of chart clout, Armatrading managed to attract big-name fans. Scottish singer Sheena Easton covered her 1976 UK Top 10 “Love and Affection” on 1984’s A Private Heaven, and Mandy Moore sent “Pilot” back into flight on 2003’s Coverage. Armatrading had another devotee in the late Hollywood director Herbert Ross (Footloose, Steel Magnolias). He included her 1977 Show Some Emotion album track “Willow” on the soundtrack for 1995’s Boys on the Side, his well-received final film. 7. KRS-One When R.E.M. released “Radio Song” as the fourth single from their massive 1991 Out of Time album, it looked like they would help rapper KRS-One finally do what he’d never do with his then-group Boogie Down Productions: score a Hot 100 hit. The single ended up climbing to number 28 in the UK and number 5 in Ireland, but it missed the Hot 100, extending KRS-One’s pop-chart shut-out. He’d commence a successful solo career the following year after Boogie Down Productions split, and go all the way to number three on Billboard’s Top 200 album chart with 1997’s I Got Next. His solo singles, though, failed to make as much of an impact. Three of them hit the Hot 100, with the biggest, “MC’s Act Like They Don’t Know,” topping out at 57. He remains a regular attraction on other people’s records, but the “radio song” he and R.E.M. damned in 1991 eludes him still. 8. Leonard Cohen Neil Young aside, Cohen might be the closest thing Canada has ever had to its own Bob Dylan. His songs have been covered by the best, and his composition “Hallelujah” is a rock standard. But his Hot 100 chart fortunes here never matched those of his considerably less-celebrated fellow Canuck singer-songwriter Gordon Lightfoot. Go figure. Cohen’s lone Hot 100 appearance was a posthumous one: When he died in 2016, interest in his back catalog sent “Hallelujah” to number 59. Fortunately, he did live long enough to see his best-known song triumph on the chart. After performing it live at the Hope for Haiti Now earthquake-relief telethon in 2010, Justin Timberlake, Matt Morris, and Charlie Sexton, made “Hallelujah” a U.S. Top 40 hit for the first time. Their version went all the way to number 13. Hallelujah, indeed. 9. Morrissey Here’s where things get really weird: The man with one of the most formidable discographies in the history of alternative rock calls “The More You Ignore Me, the Closer I Get,” which reached number 46 in 1994, his biggest U.S. hit. It’s a decent tune, but not exactly prime Morrissey. Shockingly, that’s the only song in Morrissey’s canon of mope-rock classics, with and without The Smiths, ever to chart on the Hot 100. It boldly went where The Smiths “This Charming Man” and “How Soon Is Now” and Morrissey’s own “Suedehead” and “Every Day Is Like Sunday” could never go. Adding insult to unfathomable, The Smiths guitarist Johnny Marr upstaged Morrissey on the Hot 100 after the band’s 1987 split. Joining New Order’s Bernard Sumner and Pet Shop Boy’s Neil Tennant for the synth-pop supergroup Electronic, Marr enjoyed a brief stay in the U.S. Top 40 when “Getting Away with It” hit number 38 in 1990. 10. Tom Waits His gravelly singing style is not really the stuff that Top 40 hits are made of, but one would expect a legend like Waits to have hit the Hot 100 at least once. He’s scored off-the-charts cool cred over the decades, but he’s never managed to score on the charts with any of his own singles. 
Waits is not completely hitless, though, thanks to Rod Stewart, who rode his “Downtown Train” all the way to number three in 1990. Former Scandal frontwoman Patty Smyth had previously taken it to number 95 in 1987. Two years earlier, British blue-eyed soul singer Paul Young included a cover of Waits’s “Soldier’s Things” on his The Secret of Association album, which went to number one in the UK and reached the U.S. Top 20. Over on Billboard’s Top 200 album chart, Waits has fared much better in his own right. Several of his albums have gone gold, and five reached the Top 40. His 16th and most-recent studio album, 2011’s Bad As Me, became his late, late-breaking first Top 10 success, going all the way to number six 28 years after his 1973 debut. Looks like Waits’s wait was finally worth it. 11. The Velvet Underground The band that is arguably second only to The Beatles in terms of overall influence over the decades never graced Billboard’s Hot 100. That’s right. Not one of the American band’s songs, not “I’m Waiting for the Man,” not “Femme Fatale,” not “Sweet Jane,” not anything, ever scored on the Top 40 singles scene on either side of the Atlantic. They never even made it into the upper half of Billboard’s Top 200 album chart with any of the four LPs they recorded and released during their three key years of activity (1967 to 1970). It took them until 1985 to get there, with the compilation VU, which climbed to number 85. Frontman Lou Reed did considerably better after going solo in 1970, reaching number 16 in the U.S. and number 10 in the UK with 1972’s “Walk on the Wild Side.” Never underestimate the power of “colored girls” going “Doot, di-doot, di-doot…” Dishonorable mentions 12. Iron Maiden No Hot 100 appearances 13. Judas Priest Biggest Hot 100 hit: “You’ve Got Another Thing Comin’,” number 67 14. Loretta Lynn Biggest Hot 100 hit: “After the Fire Is Gone,” with Conway Twitty, number 56 (But don’t weep for the queen of country music, who enjoyed a steady string of Top 10 country hits between 1962 and 1979.) 15. Megadeth Biggest Hot 100 hit: “Symphony of Destruction,” number 71 16. Peter Murphy Biggest Hot 100 hit: “Cuts You Up,” number 55 17. Pixies No Hot 100 appearances 18. Robbie Williams Biggest Hot 100 hit: “Angels,” number 53 19. Sonic Youth No Hot 100 appearances 20. Traffic Biggest Hot 100 hit: “Empty Pages,” number 74
https://jeremyhelligar.medium.com/no-hit-wonders-20-iconic-acts-zero-top-40-singles-18bf769d830b
['Jeremy Helligar']
2018-05-23 16:44:29.490000+00:00
['Morrissey', 'Bob Marley', 'Bjork', 'Velvet Underground', 'Music']
Alternative Way to Perform OR Query in Cloud Firestore
If you are using Cloud Firestore for your project's database, you may have tried performing an OR query but soon realized that there's no function for that. I encountered the same problem. As a matter of fact, many of us have encountered the same problem, and here's the proof. At the time of writing this article, Google hasn't provided OR, IN, or NOT IN operators for WHERE queries. Therefore, the only approach for now, though unpleasant, is to run several queries in a loop and then merge the query results on the client side. I will show you how to achieve this in Java. For JavaScript, use Lim Shang Yi's approach here. Cloud Firestore So here in my database, I have a books collection, and each document has three fields: author, genre, and title. In this example, I want to retrieve books from more than one genre. So here we go: This actually does the job, but it's not very efficient because it uses more resources on the client side. I hope Google will provide a more efficient way of doing this soon. By the way, I used the result to populate this RecyclerView Books App …and here's the source code. I would love your feedback! Thanks for reading!
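The Java snippet embedded in the original post isn't reproduced above, so here is a rough sketch of the same idea in Python using the google-cloud-firestore client: run one query per genre and merge the results on the client side, deduplicating by document ID. The books collection and field names follow the article; the function name and the deduplication step are my own assumptions, not the author's code.

# Sketch: emulate an OR query by running one query per value and merging client-side.
from google.cloud import firestore

def books_in_genres(db, genres):
    merged = {}  # keyed by document ID to avoid duplicates across queries
    for genre in genres:
        query = db.collection('books').where('genre', '==', genre)
        for doc in query.stream():
            merged[doc.id] = doc.to_dict()
    return list(merged.values())

# Usage:
# db = firestore.Client()
# fantasy_or_thriller = books_in_genres(db, ['Fantasy', 'Thriller'])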
https://medium.com/swlh/alternative-way-to-perform-or-query-in-cloud-firestore-d4cccf43dbbd
['Mendhie Emmanuel']
2019-07-17 11:07:47.218000+00:00
['JavaScript', 'Java', 'Firebase', 'AndroidDev']
Anisotropic, Dynamic, Spectral and Multiscale Filters Defined on Graphs
Sperduti & Starita, 1997: "Until now neural networks have been used for classifying unstructured patterns and sequences. However, standard neural networks and statistical methods are usually believed to be inadequate when dealing with complex structures because of their feature-based approach." Since 1997, the body of work on learning from graphs has grown so much, and in so many diverse directions, that it is very hard to keep track without some smart automated system. I believe we are converging on methods based on neural networks (based on our formula (2) explained in the first part of my tutorial), or some combination of neural networks and other methods.

Graph neural layer's formula (2) from the first part of my tutorial, which we will also need in this part. Keep in mind that if we need to compute a specific loss for the output features, or if we need to stack these layers, we apply some activation such as ReLU or Softmax.

To recap the notation we used in the first part, we have some undirected graph G with N nodes. Each node in this graph has a C-dimensional feature vector, and the features of all nodes are represented as an N×C dimensional matrix X⁽ˡ⁾. In a typical graph network, such as GCN (Kipf & Welling, ICLR, 2017), we feed these features X⁽ˡ⁾ to a graph neural layer with C×F dimensional trainable weights W⁽ˡ⁾, so that the output of this layer is an N×F matrix X⁽ˡ⁺¹⁾ encoding updated (and hopefully better in some sense) node features. 𝓐 is an N×N matrix, where the entry 𝓐ᵢⱼ indicates if node i is connected (adjacent) to node j. This matrix is called an adjacency matrix. I use 𝓐 instead of plain A to highlight that this matrix can be normalized in a way that facilitates feature propagation in a deep network. For the purpose of this tutorial, we can assume that 𝓐=A, i.e. each i-th row of the matrix product 𝓐X⁽ˡ⁾ will contain the sum of the features of node i's neighbors.

In the rest of this part of the tutorial, I'll briefly explain works of my choice, shown in bold boxes in the overview graph. I recommend Bronstein et al.'s review for a more comprehensive and formal analysis. Note that even though I dive into some technical details of spectral graph convolution below, many recent works (e.g., GIN in Xu et al., ICLR, 2019) are built without spectral convolution and show great results in some tasks. However, knowing how spectral convolution works is still helpful for understanding and avoiding potential problems with other methods.

1. Spectral graph convolution
Bruna et al., 2014, ICLR 2014

I explain spectral graph convolution in detail in another post of mine. I'll briefly summarize it here for the purpose of this part of the tutorial. A formal definition of spectral graph convolution, which is very similar to the convolution theorem in signal/image processing, can be written as:

X * W_spectral = V ( VᵀX ⊙ VᵀW_spectral )    (3)

Spectral graph convolution, where ⊙ means element-wise multiplication.

Here V are the eigenvectors and Λ the eigenvalues of the graph Laplacian L, which can be found by eigen-decomposition L=VΛVᵀ, and W_spectral are the filters. Throughout this tutorial I'm going to assume the "symmetric normalized Laplacian". It is computed based only on the adjacency matrix A of a graph, which can be done in a few lines of Python code as follows:

# Computing the graph Laplacian
# A is an adjacency matrix
import numpy as np
N = A.shape[0]  # number of nodes in a graph
D = np.sum(A, 0)  # node degrees
D_hat = np.diag((D + 1e-5)**(-0.5))  # normalized node degrees
L = np.identity(N) - np.dot(D_hat, A).dot(D_hat)  # Laplacian

Here, we assume that A is symmetric, i.e.
A = Aᵀ and our graph is undirected; otherwise node degrees are not well-defined and some assumptions must be made to compute the Laplacian. In the context of computer vision and machine learning, the graph Laplacian defines how node features will be updated if we stack several graph neural layers in the form of formula (2).

So, given the graph Laplacian L, node features X and filters W_spectral, spectral convolution on graphs looks very simple in Python:

# Spectral convolution on graphs
# X is an N×1 matrix of 1-dimensional node features
# L is an N×N graph Laplacian computed above
# W_spectral are N×F weights (filters) that we want to train
from scipy.sparse.linalg import eigsh  # assumes L to be symmetric
Λ, V = eigsh(L, k=20, which='SM')  # eigen-decomposition (i.e. find Λ, V)
X_hat = V.T.dot(X)  # 20×1 node features in the "spectral" domain
W_hat = V.T.dot(W_spectral)  # 20×F filters in the "spectral" domain
Y = V.dot(X_hat * W_hat)  # N×F result of convolution

Here we assume that our node features X⁽ˡ⁾ are 1-dimensional, e.g. MNIST pixels, but the approach can be extended to the C-dimensional case: we just need to repeat this convolution for each channel and then sum over C, as in signal/image convolution.

Formula (3) is essentially the same as spectral convolution of signals on regular grids using the Fourier Transform, and so it creates a few problems for machine learning: the dimensionality of the trainable weights (filters) W_spectral depends on the number of nodes N in a graph, and W_spectral also depends on the graph structure encoded in the eigenvectors V. These issues prevent scaling to datasets with large graphs of variable structure.

To solve the first issue, Bruna et al. proposed to smooth filters in the spectral domain, which makes them more local in the spatial domain according to spectral theory. The idea is that you can represent our filter W_spectral from formula (3) as a sum of K predefined functions, such as splines, and instead of learning the N values of W, we learn the K coefficients α of this sum:

W_spectral ≈ Σₖ αₖ fₖ,  k = 1, …, K    (4)

We can approximate our N-dimensional filter W_spectral as a finite sum of K functions f, such as the splines shown below. So, instead of learning N values of W_spectral, we can learn K coefficients (alpha) of those functions; this becomes efficient when K << N.

While the dimensionality of fₖ does depend on the number of nodes N, these functions are fixed, so we don't learn them. The only things we learn are the coefficients α, and so W_spectral is no longer dependent on N. To make the approximation in formula (4) reasonable, we want K<<N to reduce the number of trainable parameters from N to K and, more importantly, to make it independent of N, so that our GNN can digest graphs of any size. While this solves the first issue, the smoothing method does not address the second one.

2. Chebyshev graph convolution
Defferrard et al., NeurIPS, 2016

The main drawback of spectral convolution and its smooth version above is that it still requires eigen-decomposition of an N×N dimensional graph Laplacian L, which creates two main problems:

🙁 The complexity of eigen-decomposition is huge, O(N³). Moreover, in the case of large graphs, keeping the graph Laplacian in a dense format in RAM is infeasible. One solution is to use sparse matrices and find eigenvectors using scipy.sparse.linalg.eigs in Python. Additionally, you may preprocess all training graphs on a dedicated server with a lot of RAM and CPU cores.
In many applications, your test graphs can also be preprocessed in advance, but if you have a constant influx of new large graphs, eigen-decomposition will make you sad. 🙁 Another problem is that the model you train ends up being closely related to the eigenvectors V of the graph. This can be a big problem if your training and test graphs have very different structures (numbers of nodes and edges). Otherwise, if all graphs are very similar, it is less of a problem. Moreover, if you use some smoothing of filters in the frequency domain like splines discussed above, then your filters become more localized and the problem of adapting to new graphs seems to be even less noticeable. However, the models will still be quite limited. Now, what does Chebyshev graph convolution have to do with all that? It turns out that it solves both problems at the same time! 😃 That is, it avoids computing costly eigen-decomposition and the filters are no longer “attached” to eigenvectors (yet they still are functions of eigenvalues Λ). Moreover, it has a very useful parameter, usually denoted as K having a similar intuition as K in our formula (4) above, determining the locality of filters. Informally: for K=1, we feed just node features X⁽ˡ⁾ to our GNN; for K=2, we feed X⁽ˡ⁾ and 𝓐X⁽ˡ⁾; for K=3, we feed X⁽ˡ⁾, 𝓐X⁽ˡ⁾ and 𝓐²X⁽ˡ⁾; and so forth for larger K (I hope you’ve noticed the pattern). See more accurate and formal definition in Defferrard et al. and my code below, plus additional analysis is given in (Knyazev et al., NeurIPS-W, 2018). Due to the power property of adjacency matrices, when we perform 𝓐²X⁽ˡ⁾ we actually average (or sum depending on how 𝓐 is normalized) over 2-hop neighbors, and analogously for any n in 𝓐ⁿX⁽ˡ⁾ as illustrated below, where we average over n-hop neighbors. Chebyshev convolution for K=3 for node 1 (dark blue). Circled nodes denote the nodes affecting feature representation of node 1. The [,] operator denotes concatenation over the feature dimension. W⁽ˡ⁾ are 3C×F dimensional weights. Note that to satisfy the orthogonality of the Chebyshev basis, 𝓐 assumes no loops in the graph, so that in each i-th row of matrix product 𝓐X⁽ˡ⁾ we will have features of the neighbors of node i, but not the features of node i itself. Features of node i will be fed separately as a matrix X⁽ˡ⁾. If K equals the number of nodes N, the Chebyshev convolution closely approximates a spectral convolution, so that the receptive field of filters will be the entire graph. But, as in the case of convolutional networks, we don’t want our filters to be as big as the input images for a number of reasons that I already discussed, so in practice, K takes reasonably small values. In my experience, this is one of the most powerful GNNs, achieving great results in a very wide range of graph tasks. The main downside is the necessity to loop over K in the forward/backward pass (since Chebyshev polynomials are recursive, so it’s not possible to parallelize them), which slows down the model. Same as with Splines discussed above, instead of training filters, we train coefficients, but this time, of the Chebyshev polynomial. Chebyshev basis used to approximate convolution in the spectral domain. 
To generate the Chebyshev basis, you can use the following Python code:

# Set K to some integer > 0, like 4 or 5 in our plots above
# Set n_points to a number of points on a curve (we set it to 100)
import numpy as np

def chebyshev_basis(K, n_points):
    x = np.linspace(-1, 1, n_points)
    T = np.zeros((K, len(x)))
    T[0, :] = 1
    T[1, :] = x
    for n in range(1, K - 1):
        T[n + 1, :] = 2 * x * T[n, :] - T[n - 1, :]  # recursive computation
    return T

The full code to generate the spline and Chebyshev bases is in my github repo. To illustrate how a Chebyshev filter can look on an irregular grid, I follow the experiment from Bruna et al. again and sample 400 random points from the MNIST grid, in the same way as I did to show the eigenvectors of the graph Laplacian. I trained a Chebyshev graph convolution model on the MNIST images sampled from these 400 locations (the same irregular grid is used for all images), and one of the filters for K=1 and K=20 is visualized below.

A single Chebyshev filter (K=3 on the left and K=20 on the right) trained on MNIST and applied at different locations (shown as a red pixel) on an irregular grid with 400 points. Compared to the filters of standard ConvNets, GNN filters have different shapes depending on the node at which they are applied, because each node has a different neighborhood structure.

3. GCN
Kipf & Welling, ICLR, 2017

As you may have noticed, increasing K in the Chebyshev convolution increases the total number of trainable parameters. For example, for K=2, our weights W⁽ˡ⁾ will be 2C×F instead of just C×F. This is because we concatenate the features X⁽ˡ⁾ and 𝓐X⁽ˡ⁾ into a single N×2C matrix. More training parameters means the model is more difficult to train and more data must be labeled for training. Graph datasets are often extremely small. Whereas in computer vision MNIST is considered a tiny dataset, because images are just 28×28 dimensional and there are only 60k training images, in terms of graph networks MNIST is quite large: each graph has N=784 nodes and 60k is a large number of training graphs. In contrast to computer vision tasks, many graph datasets have only around 20–100 nodes and 200–1000 training examples. These graphs can represent certain small molecules, and labeling chemical/biological data is usually more expensive than labeling images. Therefore, training Chebyshev convolution models can lead to severe overfitting of the training set (i.e. the model will have a training loss close to 0 yet a large validation or test error).

So, the GCN of Kipf & Welling essentially "merged" the matrices of node features X⁽ˡ⁾ and 𝓐X⁽ˡ⁾ into a single N×C matrix. As a result, the model has two times fewer parameters to train compared to Chebyshev convolution with K=2, yet it has the same receptive field of 1 hop. The main trick involves adding "self-loops" to the graph by adding an identity matrix I to 𝓐 and normalizing it in a particular way, so that each i-th row of the matrix product 𝓐X⁽ˡ⁾ now contains the features of the neighbors of node i as well as the features of node i itself. This model seems to be a standard baseline choice, well suited to many applications due to its light weight, good performance and scalability to larger graphs.

3.1. GCN vs Chebyshev layer

The difference between GCN and the Chebyshev convolution is illustrated below. The code follows the same structure as in the first part of my tutorial, where I compared a classical NN and a GNN. One of the main steps both in GCN and in the Chebyshev convolution is the computation of the rescaled graph Laplacian L.
This rescaling is done to keep the eigenvalues in the range [-1,1] and facilitate training (this might not be a very important step in practice, as the weights can adapt during training). In GCN, self-loops are added to the graph by adding an identity matrix to the adjacency matrix before computing the Laplacian, as discussed above. The main difference between the two methods is that in the Chebyshev convolution we recursively loop over K to capture features in the K-hop neighborhood. We can stack such GCN or Chebyshev layers, interleaved with nonlinearities, to build a Graph Neural Network.
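As a recap of the GCN recipe described above, here is a minimal NumPy sketch of a single GCN layer: add self-loops, symmetrically normalize the adjacency matrix, then propagate and transform the node features as in formula (2). This is my own illustration of the published method, not code from the tutorial.

# One GCN layer (Kipf & Welling): X_out = ReLU(A_norm @ X @ W),
# where A_norm = D^(-1/2) (A + I) D^(-1/2) adds self-loops before normalization.
import numpy as np

def gcn_layer(A, X, W):
    # A: N×N adjacency matrix, X: N×C node features, W: C×F trainable weights
    N = A.shape[0]
    A_hat = A + np.eye(N)                      # add self-loops
    D_hat = A_hat.sum(axis=1)                  # node degrees (with self-loops, so > 0)
    D_inv_sqrt = np.diag(D_hat ** -0.5)        # D^(-1/2)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(0, A_norm @ X @ W)       # propagate neighbors, transform, ReLU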
https://towardsdatascience.com/tutorial-on-graph-neural-networks-for-computer-vision-and-beyond-part-2-be6d71d70f49
['Boris Knyazev']
2019-12-20 14:16:42.295000+00:00
['Machine Learning', 'Computer Vision', 'Towards Data Science', 'Pytorch', 'Graph Neural Networks']
How Ready Are You For Long-Term Changes to Our Food Supply Chain?
I’ve been reading a series of books. They were written in the 1990s and look like they were printed on someone’s home printer and hand bound on a wire spiral. Stories and Recipes of the Great Depression of the 1930’s is a five-book series, written by Rita Van Amber and edited by her daughter, Janet Van Amber Paske. The books read like what I believe they probably were — a labor of love. The books are full of stories remembered by people who lived through the Great Depression. Most have to do with food — although one of my favorites is a woman in her 70s remembering her parents buying a new wig for an old doll and her mother sewing the doll a new dress. It was the best Christmas gift she’d ever received. The books are twice dated. The stories are of a time so long gone and so awful that they’re difficult to really conceive of. Things like mixing lard into peanut butter to stretch it or feeling grateful when a home was purchased just before the stock market crashed in 1929, because there was no more money in the bank to lose. And they are told in the 1980s, 1990s, and very early 2000s, which is of course still dated to a 2020 reader. But I’m enjoying them. Also, I’m terrified by them. Because I went to the butcher last week and meat had doubled in price since I was last there two weeks earlier. And Walmart is still rationing canned goods. And I just bought half a hog, for Gods sake. (I meant to buy a quarter hog, but upped it to a half when I saw how fast the price of meat is rising.) Not for some altruistic locavore reason, which might have been true just a few months ago, but because I’m really afraid that we’re going to have a supply chain problem pretty soon and I want to be able to feed my family. I spend most days watching the news while I work. That’s new for me, because I don’t have a TV in my office. But I do in my bedroom, and that’s where I’ve been working since mid-March. Over Memorial Day Weekend, it was surreal seeing people actively refusing to maintain social distancing — crowding into beaches and restaurants and house parties without masks or adequate spaces between them. I wonder if Memorial Day weekend will be the time that history remembers as the moment when we could have kept this pandemic manageable, but didn’t. If my grandchildren will learn about Memorial Day weekend 2020 the same way I’ve learned about October 29, 1929 as the day the stock market crashed and the kick off of the Great Depression. This is all so intensely surreal. My business is holding on so far. And I’ve had a good couple of years, so I have some cushion between my family and disaster. This is the first time in my life that’s been true, so I’m especially grateful. I know that more nearly 40 million Americans are out of work. Probably even more than that, because not everyone who is out of work has been able to get on Unemployment. But it doesn’t quite feel real yet. So far the government is shoring those people up, and by extension shoring me up. But what happens in July, when those $600 per week extra payments end? What happens if scientists are right and there’s a major resurgence of the Coronavirus this fall or winter? I wonder what people thought, in early 1930, just a couple of months into what would become an eleven-year depression. Did they feel a sense of unreality, like I do? Did it seem like the whole thing would blow over quickly? Could they have spent those first months preparing for what was coming, if only they’d seen it on the horizon? 
So far, it’s the idea of food supply problems that has felt the most real to me. Probably because I can see it so easily. Pennsylvania, where I live, has a particularly high level of Coronavirus in meat packing plants. For the first time in my life, I’ve gone to the grocery store in the last couple of months and not been able to buy exactly what I wanted, when I wanted it. My personal response to this has been two fold. I find myself really wanting (and needing) to learn. How can I garden for more than fun? I don’t have much space, but it might make a difference. Is raising rabbits worth checking into? I don’t think I could eat a rabbit I raised. At least not yet. I’d go vegetarian first. But if I can’t raise or get my hands on enough vegetable protein to feed my family? Yeah. I might think differently. Is it time to get passed my fear of my pressure canner? Probably. So — I’m reading a lot. Like those books about the Great Depression. And I’ve joined a couple of Facebook groups full of people who know a lot more about this stuff than I do. I’m learning. And I’m acting on what I’m learning. I feel compelled to do something. For the last month, I’ve spent every Friday working on my family’s food supply. Canning things. Planning meals. Creating a shopping list. I can’t unsee what I’ve seen about the way that workers are treated in meat packing plants. I can’t not know that people are dying so that Americans can buy cheap meat at Walmart. So, my family has bowed out of that entire system. I found a local woman to sell me eggs from her backyard chickens. And like I said, I bought half of a hog from a very small-scale local farmer, who also happens to be a woman. She also sells beef, chicken, and turkey. My second Misfits Market box will be here tomorrow. (Use the code COOKWME-GX3FSM to get 25 percent off your first box, if you want to give them a try.) Tomorrow is Friday. I’m going to pickle the vegetables that are left over from last week’s Misfits Market box — cauliflower, carrots — with some peppers that I need to use up. I also have a bunch of serranos I’m going to ferment to make pepper sauce with. And my daughter and I are going to make tamales for the freezer. We’re stocking up, a little bit at a time, for the possibility that there might be a year where things are dicey while America figures out it’s new food reality. I have no idea whether what’s going on now will become something as terrible and long-lasting as the Great Depression. I know that more than 40 million Americans have applied for unemployment — and that doesn’t include the people who were unable to get through or have otherwise been excluded from unemployment. I know that the food supply chain issue is real and could get scary. I know that Americans are tired of being locked down and that safety measures are being politicized. The President of the United States mocked a reporter a couple of days ago for being ‘politically correct’ for refusing to remove his mask during a press briefing. I know that many, many Americans are much further removed from their food sources than their great-grandparents were in the 1930s. It seems to me that we are much less equipped to create our own food sources (via gardening and raising food animals) than people were during that generation. We are less prepared for the hard, manual labor of food production, even on a home-scale. I’m afraid that we’re just at the beginning of something that could be a longer emergency. Learning and acting on what we learn is essential right now.
https://shauntagrimes.medium.com/how-ready-are-you-for-long-term-changes-to-our-food-supply-chain-c93ebd25bd68
['Shaunta Grimes']
2020-05-28 14:40:16.317000+00:00
['Covid 19', 'Food', 'Health', 'Life', 'Family']
Product Life Cycle and Software Engineering Management
Engineering Management Product Life Cycle and Software Engineering Management How to stay on top of delivery, people management, and system ownership at each stage of the product life cycle Photo by Annie Spratt on Unsplash From a software engineering perspective the product life cycle can be split into six stages: Development, Introduction, Growth, Maturity, Decline, and Abandonment. Each stage has different goals and challenges, so engineering managers have to shift their focus accordingly to keep providing the best results for their organisation: At the Development stage, the goal is to build and deliver the product prototype into the hands of customers. At the Introduction stage — to find product/market fit. At the Growth stage — to improve the product offering and accelerate growth. At the Maturity stage — to optimise operating costs and keep on improving the product to defend its position in the market. At the Decline stage — to minimise operating costs. At the Abandonment stage — to sunset the product efficiently. Not only the organisation's goals, but also the size of the engineering team and the complexity of the product's technical systems change from stage to stage, making the engineering managers' job more difficult. This article explores how software engineering managers can stay on top of delivery, people management, and technical systems ownership at each stage of the product life cycle. Development Stage For an organisation, the goal of this stage is to build a product prototype and get it into the hands of customers. Hence delivery leadership should be the main focus for the engineering managers at the Development stage. Managers should use every opportunity to ship the prototype sooner, so that the Product team can get feedback from real customers. At this stage the product isn't bringing any revenue. Instead, the organisation is spending its resources on building it. The sooner the prototype is built, the less this stage is going to cost to founders or investors. Technical system ownership responsibilities need little attention at this stage. There would be few systems to own and maintain. Technical debt wouldn't matter for a product that hasn't been shipped yet. Every technical aspect of the product that can be outsourced to save development time should be outsourced. For example, if a product needs search capabilities, Engineering may choose to use a service like Algolia or use a managed Elastic instance, rather than setting up their own Elastic installation. If the product needs user login, then Engineering may choose a service like Auth0, instead of building their own login and user management functionality. People management responsibilities don't take much time at this stage either. The engineering team would be small and the stage would last between several weeks and several months. That means there won't be a need to do much hiring, promotion, and training. The key thing is to ensure that engineers are staying motivated and focused on delivery. Introduction Stage The goal of the Introduction Stage is to find the product/market fit before the organisation runs out of budget, so delivery leadership remains the priority for engineering managers.
However, people management and technical system ownership responsibilities are becoming more important compared to the previous stage. The product may pivot several times during this stage. To support that, engineering managers should optimise delivery process for agility. In other words, every system that the developers build should be flexible enough to be replaced, or repurposed, or thrown away. That would allow product managers to test their hypotheses quicker, find the product/market fit sooner, and move to the Growth stage. Because of frequent pivoting and failing experiments, engineers may start losing their confidence in the organisation’s ability to succeed and the team morale may decline. To prevent that, engineering managers need to ensure that the team members understand the challenges of this stage, have realistic expectations, and are working towards the organisation goal. Growth Stage This is the most challenging stage in the product life cycle for engineering managers as they have to scale delivery, system ownership, and people management processes and practices to keep supporting their organisations efficiently. What is more, each of those aspects starts competing with each other for attention. Engineering teams typically reach the Growth stage running an overgrown product prototype with plenty of technical debt accrued at the previous two stages. As a result, engineers have to deal with scalability, reliability, maintainability, and performance issues at the same time. One approach is to start paying off relevant technical debt at the very beginning of the Growth stage by dedicating 10–20% of overall engineering capacity to that. In other words, if the engineering team is 10 people, then at any time 1–2 developers should be working on paying out the tech debt. If the team is 30 engineers, that would be 4–6 people. Typical product users also change during this stage. They no longer are “innovators” and “early adopters” who tolerate minor glitches in the product. They start to come from the “early majority” cohort who value product reliability in addition to its “newness”. That means bug-fixing and minor improvements start competing with strategic initiatives in terms of delivery. As a result, to progress strategic initiatives and at the same time promptly respond to bug reports and random requests from Design, Product, and Marketing, engineering managers may choose to reserve some capacity for that type of work too. For example, 20–30% for bug-fixing and minor improvements, and 50–70% for initiatives. During the growth stage, the number of developers in the team increases as well. For bigger products, the engineering team eventually splits into smaller teams with narrow focus. Each team needs a lead, so the management structure in the Engineering has to become more complicated with two and sometimes three levels of management. At the Development and Introduction stages, the organisation mostly needs experienced developers who can quickly build the prototype and then rapidly iterate on it. From the Growth, stage hiring juniors starts making a lot of sense. There would be lots of tasks requiring little experience. New hires, regardless of their experience, would need on-boarding, upskilling, mentoring, more or less clear paths for career growth and promotion framework. All aspects of people management would become important in a growing engineering team. During the Growth stage the entire organisation radically changes. 
As a result, the original members of the engineering team may want to leave because the new environment would require skills that are different from those that were useful at early stages. If such people leave, the team may lose crucial technical knowledge and expertise. Managers must support such people through the change and help them find their place in the new reality to keep them with the organisation for longer. Maturity Stage At the Maturity stage the product growth starts slowing down. Strategic product development continues, however, optimising operating costs and increasing overall efficiency becomes increasingly important. Delivery leadership is still important at this stage. However, the engineering team would experience less pressure from the Product and less urgency to ship new functionality. Product quality is getting more important because new customers now represent the “late majority” cohort who value quality and stability of proven technology and products, rather than newness. Because of that, the engineering team has to allocate more time for various minor improvements and bug fixes. One way to deal with such tasks is to forward them to a dedicated team of developers. Some developers enjoy working on small well-defined tasks, especially when those tasks benefit the customers and the product. Less pressure from Product allows engineers to pay more attention to system maintenance. The engineering team may have an opportunity to run large maintenance initiatives, like migrations to new architectures and platforms, paying out lots of technical debt without the risk of dropping the ball on delivering product initiatives. Availability and reliability get increasingly important. Forming a Site Reliability team might be a way of dealing with it in bigger products. Engineering team growth slows down at the Maturity stage. As a result, the environment and culture become more stable. Engineers get fewer opportunities in people management but lots of opportunities in technical leadership as this is finally the time to do lots of things right from an engineering perspective. Engineers and teams within the Engineering department continue to specialise. Developers start spending lots of time in meetings to align efforts with other teams and non-technical departments. That impacts their productivity. Engineering managers have to be mindful of that and ensure developers have enough uninterrupted time to spend on writing code to remain productive. The Maturity stage can be the “golden age” for the engineering team. Its members get an opportunity to solve complex problems in a successful and still growing product, try new tools and technologies, new ways of working, there is time to invest into training and mentoring as well. Decline Stage A product reaches the Decline stage when its customer base and revenue start to shrink despite the Product team efforts to restart growth. At a certain point there would be no product initiatives and most work coming from Product or Support teams would be requests for minor improvements or bug reports. That means that delivery leadership would no longer be among the top priorities. Technical system ownership responsibilities are taking more time as the organisation optimises operating costs of the declining product, including the costs of running its technical systems. Security, performance, availability, and reliability remain as important as before. If the Site Reliability team does not exist by this stage, it may be worth creating it. 
Or, if the entire engineering team is too small for forming a dedicated team, most developers should get familiar with SRE responsibilities as keeping the product running smoothly and preventing incidents would be crucial for the organisation. At the Decline stage, the Engineering team typically needs fewer engineers than at previous stages. There would be little to no hiring and some people may transfer to other departments or leave the organisation. Before people leave the team, it is crucial to document their knowledge about the product in system diagrams, readmes, developer documentation, incident playbooks, etc. A declining product may stay at this stage and generate revenue for years and remaining engineers would need that knowledge to maintain the product during that time. It is important for engineering managers to set expectations for their teams regarding the type of work during that stage. For example, that there would be fewer opportunities to use the latest technologies. One consequence of that is that developers may feel that they are lagging behind the industry. To deal with it, managers may have to create opportunities for their teams to try new tools and tech. Staff retention would probably be the top people management priority. To motivate developers to stay longer with the team, engineering managers may want to keep improving the work environment and negotiating more perks for their team members. For example, extra annual leave, great equipment, conference tickets, generous training budget, etc. Awesome environment, flexibility, and supportive management can be extra reasons for some engineers to stay with the team and the organisation for longer. Abandonment Stage Products eventually reach their end of life. Their markets may disappear, customers may go to competitors, the organisation may decide to focus on more profitable products or may launch a more advanced product that would provide a way better experience. There is not much delivery and system ownership work at this stage. If the product gets shut down, the only goal for engineering managers is to find a cost-effective way to do that and archive the data and source code. If the product gets merged into another, then engineering managers need to propose an efficient (in terms of time, money, customer disruption, etc) technical strategy for that and execute it. As for the people management responsibilities at this stage, they can be boiled down to helping engineers find a new place within the organisation if that is possible, or outside, if the entire organisation gets shut down.
https://medium.com/datadriveninvestor/the-product-life-cycle-through-a-software-engineering-management-lens-bc6d005b02ce
['Andrei Gridnev']
2020-11-24 12:18:02.653000+00:00
['Leadership', 'Software Development', 'Startup', 'Engineering Mangement', 'Product Management']
Introducing the Pinterest chat extension and bot for Messenger
Hayder Casey, Pinterest engineering manager, Growth More than 200 million people use Pinterest every month to find ideas to try, from recipes to gifts to decor for the home. Family and friends are often a big part of these plans, and that’s why nearly 1M Pins are shared to Facebook Messenger each week. Today we’re making it easier for Pinners to collaborate with others on Messenger by launching a new chat extension and bot. Pinterest chat extension for Messenger We’re rolling out a Pinterest chat extension to make it simpler to share Pins and collaborate with family and friends directly from Messenger. Now Pinners can easily share ideas without ever having to leave the Messenger conversation. And starting today, Pins shared from our mobile apps to Messenger will link to our chat extension using a newly launched SDK from Messenger. With an improved sharing experience, now the full Pin’s image will be shown in Messenger once shared. If you tap on a Pin in Messenger, you’ll see a richer, more integrated experience through the chat extension which makes responding to ideas, sharing new Pins and accessing Pinterest Search and Related Pins quicker and easier than ever. We designed the experience with the most popular ways people share Pins to Messenger in mind. We scoped the requirements to: Performance–Users will be on the go, so it needs to load fast. Viewing–Show the full image and make it simple to visit the article when available. Ease of use–Enable people to browse and search related content on Pinterest and easily share it back to the conversation. We built the extension on top of our mobile web platform in React. This was an efficient route to leverage existing infrastructure. We removed all the chrome and banners to take advantage of the available space to show content. Pinterest bot for Messenger We’re also rolling out a Pinterest bot to bring the power of Pinterest Search to Messenger, and help Pinners find recipes, products, style inspiration and other ideas. To get started, chat the bot and select a topic like “food,” “home” or “DIY” to get recommendations from our dataset of +100B ideas. You can also search for ideas to try, from cocktail recipes to holiday outfits, right from the Pinterest bot. This is an early exploration into AI interactions for Pinterest, allowing users to engage with the product in a more natural and conversational way. Although this is a first step, we’re excited about the potential for future iterations. The Pinterest chat extension will be rolling out to Pinners using the English language version of our app on iPhone and Android over the coming days, and the bot will be available across all platforms, mobile and web. Be sure to update your Pinterest app to version 6.40 on iOS and 6.45 on Android.
https://medium.com/pinterest-engineering/introducing-the-pinterest-chat-extension-and-bot-for-messenger-a88ff9d77041
['Pinterest Engineering']
2017-12-05 18:00:02.576000+00:00
['Platform', 'Pinterest', 'Facebook', 'Developer', 'Bots']
5 ways to make a unstoppable business
a lot of people ask me at the conferences, what makes a unstoppable business? how do i keep businessing late into the night when my gums hurt and i’ve had enough business for the day?? as a leading thought leader, i have 5 secrets about business that will blow the lid off the whole damn thing 1. people first off, what is a business? it’s people. think about it. it’s you and me, rubbing our hands together over data-sheets, igniting the sparks of a global econo-fire. that’s the blod and guts of the system okay? that’s why diversity is so important, and why i constantly surround myself with all different kinds of men, tall men, fat men, men w/ confusing hats, etc. you want as many men in the boardroom as possible, so not a single man excluded 2. just graph it! one mistake i notice many young founders make, is that they don’t measure things. this is why they always fail. measure everything. all aspects of my life are measured. even how much i’m measuring things. i know exactly how many things i have, and how many things i don’t have, at all times, forever. do you? 3. ideas, but more of them you cant have a business without an idea. that’s why ideas are so rare, right? WRONG. trick question. ideas are all around us. right now, you are inhaling 1,000,000s of potential ideas. ideas are just “electric words” and you need to monetize all of them, starting yesterday 4. take a breather somtimes, alright? phew! 5. forget the handshake handshakes are relics of the past (more on___t_this in a future post_s). so stop shaking hands already! you’re wasting time wagging another person’s fleshmeat when you could be businessing! instead of shaking hands, look new clients directly in the eye and whisper, “i am now inside you” 5.5 push the limits to the max (…and beyond) boundaries are artificial barriers that keep you from excelling. an empire can’t expand beyond the castle walls its build for its elf. that’s why i always “push the limits” to the MAX. example: you thought there were only going to be 5 secrets on this list, but count how many now. you can’t get ahead unless you’re willing to go further, leave the limits behind in the star dust of the stratosphere ,,,,, b/c you deserve it NEVER STOP >> ~Live as if you were to die tomorrow; learn as if you were to live forever~ — gandhi (this article originally appeared on linkedin)
https://medium.com/slackjaw/5-ways-to-make-a-unstoppable-business-b569a5320cba
['Joe Veix']
2015-10-26 18:19:22.419000+00:00
['Entrepreneurship', 'Self Improvement', 'Business']
Introduction to Tesseract OCR
Hi everyone, my name is Bismo, and I work as a Backend Engineer at Zeals, where I mostly take care of microservices. In this article, I want to share my experience having fun with an engine. What engine? Let's jump into it!! First of all, have you ever had to move text from your documents into an editable text format? Doing it manually would take a lot of time and effort. We need something to make the process efficient. Here's an article to solve that problem: playing with OCR!! Introduction Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene-photo (for example the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example: from a television broadcast). [source] To better understand how OCR works, see the process diagram in the following picture. From the end user's side, the OCR process is very simple: just feed in the image and get back editable text. Picture 1. How OCR Works Library There are various OCR tools, not only paid services (Google, Amazon, Azure, etc.) but also open source libraries, one of which is Tesseract. In this playground, we will run several experiments using the Tesseract engine to extract text from text-based images in multiple scenarios. But wait, what is Tesseract? Sounds like an object from the Avengers movies :) Tesseract is an optical character recognition engine for various operating systems. It is free software, released under the Apache License. Originally developed by Hewlett-Packard as proprietary software in the 1980s, it was released as open source in 2005 and development has been sponsored by Google since 2006. [source] Picture 2. OCR Process Flow Tesseract has 37.4k stars and 6.9k forks (28 Nov 2020) on GitHub and is still actively maintained. That's a good reason to give this engine a try. Installation Follow the official Tesseract GitHub page to install the package on your system. Once you have installed the package successfully, you will be able to run the tesseract command in your terminal (I'm using a Mac). $ tesseract Usage: tesseract --help | --help-extra | --version tesseract --list-langs tesseract imagename outputbase [options...] [configfile...] OCR options: -l LANG[+LANG] Specify language(s) used for OCR. NOTE: These options must occur before any configfile. Single options: --help Show this help message. --help-extra Show extra help for advanced users. --version Show version information. --list-langs List available languages for tesseract engine. In this article, my Tesseract version is 4.1.1. You can check the version with the tesseract --version command. $ tesseract --version tesseract 4.1.1 leptonica-1.80.0 libgif 5.2.1 : libjpeg 9d : libpng 1.6.37 : libtiff 4.1.0 : zlib 1.2.11 : libwebp 1.1.0 : libopenjp2 2.3.1 Found AVX2 Found AVX Found FMA Found SSE Supported File Based on the information I found and my personal experience testing each format, these are the file types the Tesseract engine can read: JPG PNG GIF PNM TIFF Unfortunately, the Tesseract engine can't read PDF files. I tested a PDF file and got an error like the message below. Error in pixReadStream: Pdf reading is not supported Error in pixRead: pix not read For PDFs, we need to convert the file to one of the supported formats above before extracting text with Tesseract.
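If you want to script that PDF-to-image step in Python, a minimal sketch could look like the following, assuming the pdf2image and pytesseract packages (plus Poppler and the Tesseract binary) are installed; "document.pdf" is just a placeholder path.
from pdf2image import convert_from_path
import pytesseract

# rasterise each PDF page into an image, then OCR every page and join the results
pages = convert_from_path("document.pdf", dpi=300)  # "document.pdf" is a hypothetical file
text = "\n".join(pytesseract.image_to_string(page) for page in pages)
print(text)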
Testing Now that our Tesseract environment is ready, let's test it with this simple Lorem ipsum text image. As we can see, the image quality is good and clear, so this case shouldn't give us any trouble when extracting editable text. Picture 3. Lorem Ipsum Example Extracting the text using Tesseract is quite simple. We just need to run tesseract IMAGE_PATH OUTPUT. Here is an explanation of the command. tesseract : Main command. IMAGE_PATH : The location of the input image. OUTPUT : The output parameter. We can use stdout to print the output in the terminal, or /path/to/txt to write a txt file. The stdout output will look like this. $ tesseract lorem.jpg stdout Warning: Invalid resolution 0 dpi. Using 70 instead. Estimating resolution as 376 Lorem ipsum dignis eium fugit aspelluptat eati odis net exeri rectem lia venihil icipsapid qui dolupient quam aceatemque repedi tem lantiae provid quia sitatia temqui- buste voluptatquo comnit fugiat invenit fugia quo ditias ipi- tatat erspel id utesecuptur solorio. Hari veria dis nis et millic tota comnimet inctor sum aut laboressedis deribus illore non et fugitiosam, soluptat in eatur? Endignatem el et ex endicim re occullu ptatem laut audipita num fugit adis delenti cum sus aut iduntur arumqui blam eos molorum quissimint, nis ut aut adisci odic te as everspe ditati accum alit rempel iumque nobis repudamus. ...... Lupieniendis aut volorerio. Daectot aectestium latem volenimus ut velis alibus ulliciis aceaquam derovid elitatet que vel minctume iusam dolupti venis am fugit etus vellit re viducid quiatquiant volestisti torehen tiorestem antis militas nes del ilicaturibus et et est harum, ipsant, natem quos es ipsa velit est, es re volupta temolorum este explant. Pore vid est, audam facia voluptiae pos ut que nullo- ria core nihilita istio tem quiscia volore nulliae corpos eatur Generating a txt file prints a similar message in the terminal, and in addition we find the txt file at the path we defined. $ tesseract lorem.jpg /Users/momo/Desktop/lorem Tesseract Open Source OCR Engine v4.1.1 with Leptonica Warning: Invalid resolution 0 dpi. Using 70 instead. Estimating resolution as 376 Picture 4 Lorem Ipsum Extracting Txt Output As we can see, the text extracted from the image is perfectly correct!! Then, what about an image scanned from a real document? I have the document in the following picture. There is some noise in the image, so let's check whether the extracted text is still accurate. Picture 5 Example Scanned Document $ tesseract toefl_score.jpg stdout Warning: Invalid resolution 0 dpi. Using 70 instead. Estimating resolution as 435 Centre for Language Development Institute of Educational Development and Quality Assurance Yogyakarta State University ProTEFL Score Report No. 1553.b/M/P2B-LPPMP.UN Y/ViJ/2015 Wow, out of 171 characters it missed only around 3, so we got a 98% correct result!! That is understandable, because in the picture above the UNY/VII isn't very clear and is ambiguous to the engine. Non-Latin Scripts Another concern when extracting text is non-Latin scripts. Most of the time we write text in the Latin alphabet, but what about languages that rarely use it, for example Japanese, Korean and Chinese? According to its documentation, Tesseract supports extracting text with a language option. First, we need to check the list of languages we have with the command tesseract --list-langs.
$ tesseract --list-langs List of available languages (3): eng osd snum If a language is not in the list, we need to add it. As an example, we will try to extract Japanese text. Because I'm a Mac user, I will use brew to add more languages: brew install tesseract-lang . After the installation finishes, I should be able to see jpn in tesseract --list-langs . Let's run the non-Latin script experiment!! These are simple Japanese characters and we will try to extract them. Picture 6 Japanese Characters If we just use the standard command from the previous experiment, we get the wrong result. $ tesseract arigatou.jpg stdout Warning: Invalid resolution 0 dpi. Using 70 instead. Estimating resolution as 556 Detected 9 diacritics HYUPESCCWOES How do we fix it? The command is actually quite similar; we just need to add the language parameter at the end: tesseract arigatou.jpg stdout -l jpn . And we successfully get the Japanese characters!! $ tesseract arigatou.jpg stdout -l jpn Warning: Invalid resolution 0 dpi. Using 70 instead. Estimating resolution as 556 Detected 9 diacritics ありがとうございます Next case: what if the text mixes Latin and non-Latin scripts? Here is an example. Picture 7 Latin and Non Latin Characters Sure, we can still handle it by mixing the languages as well!! Just append the other language with + like this: tesseract arigatou.jpg stdout -l jpn+eng $ tesseract arigatou.jpg stdout -l jpn+eng Warning: Invalid resolution 0 dpi. Using 70 instead. Estimating resolution as 379 Detected 9 diacritics ありがとうございます arigatou gozaimasu: thank you Result I ran several experiments with images in various conditions to see how well the Tesseract engine works, and ended up with this result matrix. Sorry, I can't attach the images for privacy reasons. From the results, the temporary conclusion is that the output really depends on the condition of the scanned document, and also on the style of the document. Speed Then how fast is Tesseract's execution time? We will run various cases to find out what makes it faster or slower. For these runs, I use the trap command to show the time for each command. $ trap 'echo -e " Started at: $(date) "' DEBUG $ pwd Started at: Wed Dec 2 10:13:19 WIB 2020 /Users/momo As we can see, a timestamp is prepended to each command line. We need to modify the command (a little bit tricky) to know exactly how long it takes. $ trap 'echo -e " Started at: $(date) "' DEBUG $ tesseract arigatou.jpg stdout -l jpn && echo "finished" Started at: Wed Dec 2 10:33:36 WIB 2020 Warning: Invalid resolution 0 dpi. Using 70 instead. Estimating resolution as 556 Detected 9 diacritics ありがとうございます Started at: Wed Dec 2 10:33:36 WIB 2020 finished From the output above, we can see that the execution time is <1s, because both the first and second commands started at the same time. Finally, after several speed tests, we get the following result. These may not be 100% conclusive factors, but image size and character type clearly affect the time. For example, the 1st and 2nd tries use the same image at different sizes, and the bigger size takes more time than the smaller one. In another case, with the same character count and almost equal size, Japanese characters (a non-Latin script) take around 6x the time of Latin script. FYI, I used the following MacBook specification when doing the tests.
MacBook Pro (13-inch, 2017, Two Thunderbolt 3 ports) Processor 2,3 GHz Dual-Core Intel Core i5 Memory 8 GB 2133 MHz LPDDR3 macOS Catalina Version 10.15.7 (19H15) Programming Language Package How do you use the Tesseract engine from a specific programming language? There are actually many libraries in multiple languages that wrap Tesseract, for example pytesseract for Python. We can pick the library that matches the programming language we use to simplify the implementation (see the short sketch right after the conclusion below). Conclusion After having fun with Tesseract OCR, I can say that the engine is amazing!! Here is the list of points I find interesting about Tesseract: Open Source. Easy to use. Good extraction results. Supports multiple languages (Latin & non-Latin). If you are facing an issue and think OCR is the solution, Tesseract would be nice to try! I hope this article is useful for you, thank you!!
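Here is the Python sketch referenced in the Programming Language Package section above. It assumes the pytesseract and Pillow packages are installed on top of the Tesseract binary, and it reuses the sample image names from this article; treat it as a sketch, not the only way to wire things up.
import pytesseract
from PIL import Image

# plain English text, equivalent to "tesseract lorem.jpg stdout"
print(pytesseract.image_to_string(Image.open("lorem.jpg")))

# mixed Japanese and English, equivalent to "tesseract arigatou.jpg stdout -l jpn+eng"
print(pytesseract.image_to_string(Image.open("arigatou.jpg"), lang="jpn+eng"))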
https://medium.com/zeals-tech-blog/introduction-to-tesseract-ocr-84d3eff6f9df
['Bismo Baruno']
2020-12-05 05:03:13.789000+00:00
['Tesseract', 'Google', 'Ocr', 'Software Engineer', 'Open Source']
How Does Your Google Assistant Really Work?
A technical rundown of how your Google Home device works. Wake Word: “OK Google” A wake word is a very small scale algorithm that activates a device when spoken. It can also be referred to as a “trigger word,” or “wake up word.” The wake word used to activate your Google assistant is “OK Google.” When the device hears the wake word, it begins recording, and you will see four circles on the top, this means the device is activated. The microphones also determine the direction the word comes from, so it can focus in that direction. Photo by Luis Cortés on Unsplash Cloud Computing Wake word technology runs on Cloud. And no, not the clouds you see in the sky on a rainy day. Cloud Computing is used to refer to servers that are accessed over the Internet, and the software and databases that run on those servers. Cloud servers are located in data centers all over the world. When using Cloud Computing, computers can store information in another place, instead of on the computer. That saves space and allows for more information to be stored. However, the idea of integrating cloud into virtual assistants took fairly long. One problem that had to be solved was that the device had to respond quickly when called. The system couldn’t stream what its microphone heard to a cloud service continuously; that would result in lag and would slow down wake word recognition, enough to impact the user experience. Though, these problems have been resolved with the help of ever-growing technology. Intense Training Yes, your Google Assistant had to go through some intense training before you guys became best friends, but not in the way you think. The wake word is based on a Neural Network Algorithm. These Neural Networks (NN) need to be trained to work the way we want them too. It can be thought of as working out at the gym. When you work out, you’re training your muscles. The more training data that is fed into the NN, the more accurate the results are. However, this training data needs to be diverse. Going back to our gym example, if you only do push-ups, the muscles in your arms will become really strong, but the muscles in your leg won’t be. So, you will have a much harder time going for a long run, than doing 50 push-ups. The same can be applied to training Neural Networks. If the training data only consist of women speaking, even if its millions of them, there will be more errors when men try to activate the system. The same problem will occur when people with different accents try to activate the device. So, it is important that the training sets used to train the system are diverse. How does Google come up with its answers? When you ask Google a question, it records your question on the device and uses the internet and cloud computing to search for your question, to find potential answers. Your words and the tone of your question or request are analyzed by an algorithm, which is then matched with a command that the device thinks you asked. In essence, the device is saying, “I’m 90% sure you said this.” Of course, the algorithm isn’t going to be 100% sure. This is the most common reason why you don’t get the answer you were looking for. Alongside the algorithm, the main device connected to your Google Home (usually a phone) is trying to see if it can process your command locally through wi-fi or Bluetooth. For example, if you ask your Google Home to turn on the lights, your phone will take care of that command. 
However, for more complicated commands, such as “what does ‘bye’ mean in French,” your Google device will need to connect to the server to answer your question. Is your Google Home device listening to everything you say? Ah, that infamous question! Remember the concept of a wake word? Your Google device is only activated and only starts recording when you say “OK Google.” Yes, your device is constantly listening for the wake word, but don’t worry, it doesn’t understand anything until the wake word is heard. So, the short answer is no, your device isn’t listening to everything you’re saying; at least, it’s not understanding it…
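To make the wake-word gating described above a bit more concrete, here is a minimal Python sketch. It assumes the sounddevice package for audio capture, and detect_wake_word() and send_to_cloud() are hypothetical stand-ins for an on-device keyword-spotting model and the assistant's cloud service; this is not Google's actual implementation.
import sounddevice as sd

SAMPLE_RATE = 16000  # capture one-second mono frames at 16 kHz

def detect_wake_word(frame) -> bool:
    # hypothetical: a small on-device neural network would score each audio frame here
    return False

def send_to_cloud(frame) -> str:
    # hypothetical: only audio captured after activation is streamed for full recognition
    return "..."

while True:  # the device loops forever, listening for the wake word
    frame = sd.rec(SAMPLE_RATE, samplerate=SAMPLE_RATE, channels=1)
    sd.wait()
    if detect_wake_word(frame):        # cheap check that runs locally all the time
        print(send_to_cloud(frame))    # expensive cloud call, made only after activation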
https://medium.com/swlh/how-does-your-google-assistant-really-work-88c64d5dd38d
['Mansi Katarey']
2020-12-15 02:00:53.885000+00:00
['Algorithms', 'Artificial Intelligence', 'Neural Networks', 'Google Assistant', 'Machine Learning']
The No-Nonsense Guide to Deep Work
Willpower is Not Enough for Deep Work The first changes that I made are the obvious ones. But sheer willpower will not keep you focused for long. You also have to think about the environment. Where you work and what surrounds you makes a big difference. This might be more difficult for some than others, but it’s an essential part of deep work. A few things you should focus on Noise Unless you are working in a quiet cabin in the middle of the woods, or your room is soundproofed, you need to think about removing noise from your environment. Whether it’s screaming birds outside your window or kids in your house, you need to invest in good headphones — ideally, noise-canceling ones. I can’t stress enough what difference it will make to your focus. What you listen to is up to you, but I would recommend something that you can listen to in the background, without thinking about it. The view Yes, the view. You don’t need a picturesque landscape for deep work. What you need is a view that doesn’t provide distractions. You should face a wall. Or even a window, as long as there isn’t too much going on behind it. One thing that you don’t want is to face a room where things are happening — where the TV is on, or someone is constantly walking, or your cat is playing. Comfort How you sit and look at the screen matters. It might not matter when you check your emails in the evening on your sofa, but it matters when you need to focus. You don’t want to change your position every ten minutes. Even if you can sit on your sofa with a laptop, it’s not good for your back and neck muscles. You need a desk and a comfortable chair. I’m not going to talk about ergonomics here, but you should be aware of the right setup.
https://medium.com/better-programming/the-no-nonsense-guide-to-deep-work-7b980d7801d8
['Petr Zaparka']
2020-07-22 12:15:19.818000+00:00
['Programming', 'Deep Work', 'Software Development', 'Remote Working', 'Productivity']
Convolutional Neural Networks
Researchers came up with the concept of CNN or Convolutional Neural Network while working on image processing algorithms. Traditional fully connected networks were kind of a black box — that took in all of the inputs and passed through each value to a dense network that followed into a one hot output. That seemed to work with small set of inputs. But, when we work on a image of 1024x768 pixels, we have an input of 3x1024x768 = 2359296 numbers (RGB values per pixel). A dense multi layer neural network that consumes an input vector of 2359296 numbers would have at least 2359296 weights per neuron in the first layer itself — 2MB of weights per neuron of the first layer. That would be crazy! For the processor as well as the RAM. Back in 1990’s and early 2000’s, this was almost impossible. That led researchers wondering if there is a better way of doing this job. The first and foremost task in any image processing (recognition or manipulation) is typically detecting the edges and texture. This is followed by identifying and working on the real objects. If we agree on this, it is obvious to note that detecting the texture and edges really does not depend on the entire image. One needs to look at the pixels around a given pixel to identify an edge or a texture. Moreover, the algorithm (whatever it is), for identifying edges or the texture should be the same across the image. We cannot have a different algorithm for the center of the image or any corner or side. The concept of detecting edge or texture has to be the same. We don’t need to learn a new set of parameters for every pixel of the image. This understanding led to the convolutional neural networks. The first layer of the network is made of small chunk of neurons that scan across the image — processing a few pixels at a time. Typically these are squares of 9 or 16 or 25 pixels. CNN reduces the computation very efficiently. The small “filter/kernel” slides along the image, working on small blocks at a time. The processing required across the image is quite similar and hence this works very well. If you are interested in a detailed study of the subject, check out this paper by Matthew D. Zeiler and Rob Fergus Although it was introduced for image processing, over the years, CNN has found application in many other domains. An Example Now that we have an idea of the basic concepts of CNN, let us get a feel of how the numbers work. As we saw, edge detection is the primary task in any image processing problem. Let us see how CNN can be used to solve an edge detection problem. On left is a bitmap of a 16x16 monochrome image. Each value in the matrix represents the luminosity of the corresponding pixel. As we can see, this is a simple grey image with a square block in the center. When we try to convolve it with the 3x3 filter (in the center), we get a matrix of 14x14 (on the right). The filter we chose is such that it highlights the edges in the image. We can see in the matrix on the right, the values corresponding to the edges in the original image are high (positive or negative). This is a simple edge detection filter. Researchers have identified many different filters that can identify and highlight various different aspects of an image. In a typical CNN model development, we let the network learn and discover these filters for itself. Important Concepts Having seen a top level view of CNN, let us take another step forward. Here are some of the important concepts that we should know before we go further into using CNN. 
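Before getting into those concepts, here is a minimal NumPy sketch of the edge-detection example described above: a 3x3 filter sliding over a 16x16 image and producing a 14x14 output. The filter values and the synthetic image are a common illustrative choice, not the exact numbers from the original figure.
import numpy as np

def convolve2d_valid(image, kernel):
    # slide the f x f kernel over the n x n image; output is (n - f + 1) x (n - f + 1)
    n, f = image.shape[0], kernel.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
    return out

# a 16x16 grey image with a brighter square block in the centre
img = np.full((16, 16), 10.0)
img[4:12, 4:12] = 100.0

# a simple vertical edge-detection filter; high absolute values in the output mark edges
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

edges = convolve2d_valid(img, edge_filter)
print(edges.shape)  # (14, 14)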
Padding One visible problem with the Convolution Filter is that each step reduces the “information” by reducing the matrix size — shrinking output. Essentially, if the original matrix is N x N, and the filter is F x F, the resulting matrix would be (N — F + 1) x (N — F + 1). This is because the pixels on the edges are used less than the pixels in the middle of the image. If we pad the image by (F — 1)/2 pixels on all sides, the size of N x N will be preserved. Thus we have two types of convolutions, Valid Convolution and Same Convolution. Valid essentially means no padding. So each Convolution results in reduction in the size. Same Convolution uses padding such that the size of the matrix is preserved. In computer vision, F is usually odd. So this works well. Odd F helps retain symmetry of the image and also allows for a center pixel that helps in various algorithms to apply a uniform bias. Thus, 3x3, 5x5, 7x7 filters are quite common. We also have 1x1 filters. Strided Convolution The convolution we discussed above is continuous in the sense that it sweeps the pixels continuously. We can also do it in strides — by skipping s pixels when moving the convolution filter across the image. Thus, if we have n x n image and f x f filter and we convolve with a stride s and padding p, size of the output is: ((n + 2p -f)/s + 1) x ((n + 2p -f)/s + 1) Of course, if this is not an integer, we would have to chop down or push up. Convolution v/s Cross Correlation Cross Correlation is essentially convolution with the matrix flipped over the bottom-top diagonal. Flipping adds the Associativity to the operation. But in image processing, we do not flip it. Convolution on RGB images Now we have an n x n x 3 image and we convolve it with f x f x 3 filter. Thus we have a height, width and number of channels in any image and its filter. At any time, the number of channels in the image is same as the number of channels in the filter. The output of this convolution has width and height of (n — f + 1) and 1 channel. Multiple Filters A 3 channel image convolved with a three channel filter gives us a single channel output. But we are not restricted to just one filter. We can have multiple filters — each of which results in a new layer of the output. Thus, the number of channels in the input should be the same as the number of channels in each filter. And the number of filters is the same as the number of channels in the output. Thus, we start with 3 channel image and end up with multiple channels in the output. Each of these output channel represents some particular aspect of the image that is picked up by the corresponding filter. Hence it is also called a feature rather than a channel. In a real deep network, we also add a bias and a non linear activation function like RelU. Pooling Layers Pooling is essentially combining values into one value. We could have average pooling, max pooling, min pooling, etc. Thus a nxn input with pooling of fxf will generate (n/f)x(n/f) output. It has no parameters to learn. Max Pooling CNN Architectures Typical small or medium size CNN models follow some basic principles. A Typical CNN Architecture (Source Wikimedia) Alternate convolution and pooling layers Gradually decreasing frame size and increasing frame count, Flat and fully connected layers towards the end RelU activation for all hidden layers, followed by a softmax for the final layer A prominent concept in CNN architectures is that the alternate layers change the information content to sparse and dense one after the other. 
This helps separate the individual pieces of the information. One can think of this as someone playing with a cotton ball. If we pull and push the threads again and again, we naturally separate the individual threads. Similarly, a CNN can separate individual components in the image. Things get more and more complex as we move over to large and very large networks. Researchers have provided us with more concrete architectures that we can use here. AlexNet, GoogLeNet and VGGNet are a few of these. Implementation Typically, implementation of a CNN model starts with data analysis and cleanup, followed by choosing a network model that we can start with. We provide the architecture in terms of the layout of the network: the number and size of layers and their connectivity. Then we allow the network to learn the rest for itself. We can then tweak the hyperparameters to generate a model that is good enough for our purpose. Let us check out a simple example of how a convolutional network would work. In a previous blog, we had a look at building the MNIST model with a fully connected neural network. You can check it out if you want to have a detailed view of how TensorFlow and Keras can be used to build a deep model. Let us now look at doing the same job with a Convolutional Network. Import the Modules We start by importing the required modules. Get the Data The next step is to get the data. For academic purposes, we use the data set built into the Keras module — the MNIST data set. In real life, this would require a lot more processing. For now, let us proceed with this. Thus, we have the train and test data loaded. We reshape the data to make it more suitable for the convolutional networks. Essentially, we reshape it to a 4D array that has 60000 (number of records) entries of size 28x28x1 (each image has size 28x28). This makes it easy to build the Convolutional layer in Keras. If we wanted a dense neural network, we would reshape the data into 60000x784 — a 1D record per training image. But CNNs are different. Remember that the concept of convolution is 2D — so there is no point flattening it into a single dimensional array. We also change the labels into a categorical one-hot array instead of numeric classification. And finally, we normalize the image data to reduce the possibility of vanishing gradients. Build the Model The Keras library provides us with a ready-to-use API to build the model we want. We begin by creating an instance of the Sequential model. We then add individual layers into the model. The first layer is a convolution layer that processes the input image of 28x28. We define the kernel size as 3 and create 32 such kernels — to create an output of 32 frames — of size 26x26 (28 - 3 + 1 = 26). This is followed by a max pooling layer of 2x2. This reduces the dimensions from 26x26 to 13x13. We used max pooling because we know that the essence of the problem is based on edges — and we know that edges show up as high values in a convolution. This is followed by another convolution layer with a kernel size of 3x3, which generates 24 output frames. The size of each frame is 11x11 (13 - 3 + 1 = 11). It is again followed by a max pooling layer. Finally, we flatten this data and feed it to a dense layer that has outputs corresponding to the 10 required values. Train the Model Finally, we train the model with the data we have. Five epochs are enough to get a reasonably accurate model. Summary The model above has only 9*32 + 9*24 = 504 kernel values to learn (counting just the 3x3 filter patterns, and ignoring channel depth, biases, and the final dense layer). This is amazing.
A fully connected network would require 784 weights per neuron in the first layer itself! Thus we get a huge saving in processing power, along with a reduced risk of overfitting. Note that in the process, we used what we know about the problem. We used what we know and then trained the model to discover the rest. A black box approach of using a fully connected or randomly sparse network would never get us such accuracy at this cost. By using what we know, we limit the machine to the known. Training the network from scratch can potentially open unknown avenues. However, it is best to leave that to academic researchers. If we want to create something that can be used today, we should have a good blend of knowledge and discovery. Convolutional Neural Networks help us achieve that.
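The code blocks from the original post did not survive this copy, so here is a minimal Keras sketch that follows the architecture described above (32 filters of 3x3, 2x2 max pooling, 24 filters of 3x3, another pooling layer, then flatten and a 10-way softmax). The activation functions and optimizer are reasonable assumptions rather than the author's exact settings.
from tensorflow import keras
from tensorflow.keras import layers

# load MNIST, reshape to 60000 x 28 x 28 x 1, one-hot the labels, normalise pixel values
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

model = keras.Sequential([
    layers.Conv2D(32, kernel_size=3, activation="relu", input_shape=(28, 28, 1)),  # 26x26x32
    layers.MaxPooling2D(pool_size=2),                                              # 13x13x32
    layers.Conv2D(24, kernel_size=3, activation="relu"),                           # 11x11x24
    layers.MaxPooling2D(pool_size=2),                                              # 5x5x24
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))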
https://towardsdatascience.com/convolutional-neural-networks-e5a6745b2810
['Vikas Solegaonkar']
2019-04-15 15:33:14.372000+00:00
['Machine Learning', 'Computer Vision', 'Deep Learning', 'Convolutional Network']
Instacart Data Science Interview Questions
Interview Process The interview process is pretty straightforward. It starts with a data challenge, followed by a technical phone interview. After you pass these two, there is a round of technical and culture fit interviews on-site. The interviews are short and targeted and give you good insight into the job and the teams you’ll work with at Instacart. Important Reading Data Science Related Interview Questions When an item isn’t available, what algorithm should we use to replace it? How would you staff the team based on delivery data? What other products or revenue opportunities will arise from Instacart’s data? Write a script to format data in a text file. Estimate the demand and supply. How might you have optimized parameters for this model differently? How would you tune a random forest? Given an OLTP system which tracks the sales of items with order processing, returns and shipping, create a data warehouse model to find gross sales, net sales and gross sales by product. Given a movie database, identify whether a movie has a well-defined genre. How should we solve our supply/demand problems at Instacart? Reflecting on the Questions The data science team at Instacart publishes articles regularly on the Instacart Engineering blog. At Instacart, data drives product decisions, and that is reflected in their questions. The questions aim to find out how well you will meld with the existing team and whether you can think in terms of the problems they are trying to solve. The scale of the items they catalog is huge. It is interesting to see something as everyday as groceries examined through the lens of data science. A knack for solving problems related to logistics and scale can surely land you a job with the largest grocery catalog in the world!
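As an illustration of what the random forest tuning question above is probing for, here is a minimal scikit-learn sketch; the synthetic data, parameter grid, and scoring metric are illustrative assumptions, not Instacart's expected answer.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# stand-in data; in an interview you would discuss the real features and target instead
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],        # more trees: usually better, but slower
    "max_depth": [None, 10, 20],       # controls over/underfitting
    "min_samples_leaf": [1, 5],        # larger leaves smooth the model
}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3, scoring="f1")
search.fit(X, y)
print(search.best_params_, search.best_score_)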
https://medium.com/acing-ai/instacart-data-science-interview-questions-e8d89bea1a34
['Vimarsh Karbhari']
2020-02-26 05:09:07.097000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Interview', 'Data']
Caution! Scams Targeting Freelancers
Scammers who want to steal your identity It has become quite a common occurrence that scammers without the necessary skills impersonate reputable freelancers to obtain projects from unsuspecting clients. The scammers steal reputable freelancers’ résumés (from the freelancers’ websites or job portals where full résumés can be uploaded), and edit just a few contact details such as the email address. Unsuspecting clients order services from the fake professionals under the assumption they are getting a quality product after checking out the impersonated freelancers’ solid online profiles. The projects are then performed poorly by the scammers or outsourced to reputable freelancers under a fake client name with no intention of ever paying them (see ‘Scammers who want to steal your work’ below). The client pays the scammers the invoiced amount, and by the time the client notices that the result is unusable, the scammers have long disappeared. The client ends up complaining to the real freelancer who allegedly provided the product or service. Ultimately, both the client and the real freelancer are the victims in this case. The client has lost money, and the real freelancer, whose résumé was stolen, has lost their reputation. Some clues to watch out for: Sometimes, scammers don’t even bother to change the legitimate author’s name in the document properties, the real address or even the photo in the résumés they steal, but change only the email address. What’s more, the language used in their messages is usually uneducated and contains many typos and grammatical errors.
https://medium.com/the-innovation/caution-scams-targeting-freelancers-c6e9473f03b3
['Kahli Bree Adams']
2020-07-04 16:50:05.903000+00:00
['Business', 'Startup', 'Work', 'Advice', 'Freelancing']
17 Python Interview Questions and Answers
17 Python Interview Questions and Answers Get familiar with some common Python interview questions so they don’t catch you off guard in your next interview Photo by KOBU Agency on Unsplash The tech industry is growing like never before. Every now and then, we see new software products released in the market. So no matter whether you’re fresh on the scene or an experienced Python developer, there are always opportunities waiting for you. The only requirement is that you have to convince your potential employer of your skills. This can be done by appearing in Python programming interviews. But you’ve got to prepare yourself; otherwise, someone else might get the job. You can either try Python programming challenges or simply review the frequently asked Python interview questions and answers. Today, I’m gonna share my personal experience of Python interviews with you. I’ll list the questions they asked me, along with their possible solutions. So it’ll be the ultimate guide to getting hired as a Python programmer.
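As a taste of the kind of question such lists cover, here is one classic example with a short snippet; it is an illustrative addition, not necessarily one of the article's 17 questions: why does a mutable default argument behave unexpectedly?
def append_item(item, bucket=[]):
    # classic pitfall: the default list is created once and shared across calls
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] rather than [2], which surprises many candidates

# the usual fix is to default to None and create a fresh list inside the function
def append_item_fixed(item, bucket=None):
    bucket = [] if bucket is None else bucket
    bucket.append(item)
    return bucket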
https://medium.com/better-programming/17-python-interview-questions-and-answers-ab438c92866c
['Juan Cruz Martinez']
2020-10-22 15:57:44.653000+00:00
['Python', 'Software Development', 'Coding Interview', 'Interview', 'Programming']
Apply: Virtual #SDGs4Universities Symposium — By ICCDI AFRICA
The United Nations' Sustainable Development Goals, launched in 2016, are an agenda that requires all sectors of society to participate actively in bringing the target Goals to fruition, exploring the many challenges in Economic, Social and Environmental development while proffering integrated solutions to achieve the mandate. #SDGs4Universities It is with this in mind that ICCDI launched its SDG4Universities project to educate more young people about the SDGs. The SDGs4Universities Symposium is a project curated by ICCDI to disseminate knowledge of the Sustainable Development Goals to various tertiary institutions across the country and Africa. This initiative aims to localise the knowledge and application of the United Nations' Sustainable Development Goals among university students, as well as position students to solve some of the world's most pressing problems. Hence, since 2018, ICCDI Africa has physically hosted the SDG4Universities session six times, in various universities, reaching over 5,000 tertiary institution students across Nigeria. This year, the prolonged ASUU strike and the COVID-19 situation have made the organisation leverage "the new normal" by hosting the SDG4Universities Symposium virtually. Event Date: 25th — 27th November 2020 Time: 10am — 1pm Apply Here:
https://medium.com/climatewed/apply-virtual-sdgs4universities-symposium-by-iccdi-africa-26fda2e74726
['Iccdi Africa']
2020-11-20 06:04:41.213000+00:00
['Sdgs', 'University', 'Climate Change', 'Economics', 'Environment']
Why Men Should Respect Sexually Empowered Women
As a man among men, I get the privilege of navigating my day without much trouble. I can wear whatever I want, do whatever I want, and be the man I want to be as long as it's not illegal. (Ignoring the fact that I'm black. That's a topic for another day). If I want to take off my shirt and reveal my body to the world, you won't hear much of a fuss. In fact, I'll either receive praise, some hilarious commentary, or maybe even a meme or two if I'm lucky. If I want to brag about my sexual conquests, then by golly I can do just that. It will make for a great story when I meet up with the fellas later. It's going to boost my social status and win me points with the guys. If I want to go out and have crazy one-night stands, then brag to the world about it, more power to me! Too bad women can't enjoy life the same way without some sexist asshole ruining everything. Unfortunately, women just don't have this same luxury of sexual empowerment. For a woman in today's man-oriented society, it's all too easy to be disrespected by the people in her life. When women express too much confidence in their sexuality, there's always someone trying to bring them down. Someone who is so insecure about their self-worth that they feel the need to threaten violence or slut-shame a woman just to boost their own value. And that's messed up. More than anyone else, women have the right to sexual empowerment, and what's more, they deserve it. Women Deserve Respect, Sexual or Otherwise In a society where there is so much emphasis on appeasing the gaze of a man, why do we hold women to such an impossibly high standard? Society puts a premium on women toning down their sexual confidence yet simultaneously expects them to cater to every single guy who just so happens to look in their direction. When women don't cater to the expectations of society, their very lives are literally at risk. When a woman doesn't cater to the sexual preference of men or even women, she risks everything. Her reputation is at risk because people will go far out of their way to degrade her. From slut-shaming to outright defamation, nothing is off the table. Or worse. We have reached the point where violence has become a normalized response to sexually confident women. When sinister threats of assault and violence have become as normal as breathing, we have a problem. It never ceases to shock me when my female peers share their stories of the many threats they have received. The death threats, the forceful silencing, and promises of harm for simply embracing their confidence. By standing up for themselves and embracing their sexual power, women risk everything. And that's pretty messed up. Society Should Feel Grateful For Sexually Empowered Women As a male, I know this isn't a popular opinion, but I'm just going to say it. Modern culture needs to respect and appreciate that women are embracing their sexuality. In general, just existing as a woman is a hard job. From the challenges of motherhood to the social bias surrounding femininity and career ambitions, I can imagine that it can be pretty challenging. Women, like men, are sexual creatures too — and shouldn't feel shamed for embracing a natural side of themselves. These same women contribute much to society, regardless of sexual lifestyle. They do the hard work of child-rearing while us guys are off having our usual dick-measuring contest. They are there for men, despite being taken for granted by previous men. They constantly show up and give their all to those who may not return the favor.
No matter what form her sexual confidence may take, there is no justification for violence. Whether it's her own day-to-day confidence out in society or her work as a stripper, nothing justifies a death threat just because she's not catering to the expectations of others. If a woman gives her time and attention to you, it's because you earned her respect. Anything less than that isn't affection — but fear and insecurity. When someone deliberately takes time out of their day to shame someone else, that speaks volumes about their level of insecurity. That the lifestyle of another intimidates you that much reveals not only a lack of maturity but a lack of confidence on your part. In a society that so often preaches "love your neighbor", it's astounding how quickly we pick and choose who is worthy of that love and respect. If a woman doesn't display enough of her sexual confidence, she's a prude and unworthy of attention. If she displays too much of her sexual confidence, she's a slut and deserves to be shamed. And if any of that sexual confidence doesn't cater to your attention or preference, then she's degraded regardless. It's ridiculous. Women were not put on this Earth to please someone else. Their sexual confidence, and the empowerment they draw from it, is up to them. Whether or not you earned the right to enjoy that confidence is up to you. More Than Hips And Thighs Sexual empowerment is more than just a display of physicality — it's the sexual extension of every aspect of a person. Emotions. Intelligence. Power. Status. Career. Spirituality. Psychology. And so much more. When people think of sex, we naturally tend to focus on the obvious stuff. Yeah, we get it — they have boobs and guys have a dick. But it's deeper than that. It's about letting women explore every part of themselves, and more often than not, men are included in that journey. And even if men are not, it's not about us. Women are not a piece of real estate or a fucking stock to be treated as a long-term financial asset with a fluctuating value. They're human too and desire to explore every part of that human psyche. That includes the sexual side as well. Conclusion Sexual empowerment isn't just a guy thing — it's for everyone. Sexual lifestyles are not an absolute, but a multiple-choice response. A response that allows people to live their sexual lives in the ways that suit them best. I'm a guy, and I have no idea if any of this means anything coming from me. However, what I do know is that women deserve better than threats and shaming. As men, we should recognize that our counterparts contribute too much for us to show them that sort of disrespect or, worse, to not allow them to be more. Sexual empowerment for women is empowerment for all of us. Men seek confident women anyway — so let her be confident. When it comes down to it, it's not about who has more power — it's about mutual respect.
https://medium.com/sexography/sympathy-for-sexually-empowered-women-73f81afaecf
['Dayon Cotton']
2020-11-04 17:03:41.441000+00:00
['Sexuality', 'Sex', 'Women', 'Equality', 'Society']
WebSockets on Demand With AWS Lambda, Serverless Framework, and Go
Setup of the Components in the Cloud Now that we understand what the clients can do and what the server is expected to do, we can start building the infrastructure in the cloud. As we said, we are going to use the Serverless framework to do this. The configuration of the API Gateway and the Lambda function(s) is pretty simple. We start by linking the Lambda function(s) to the types of event they are asked to manage. This is the Serverless YAML configuration that defines such links (a representative sketch of this configuration appears at the end of this section). Serverless configuration yaml In the above snippet, we define a service, a provider, and a Lambda function to be executed when the WebSocket events reach the server. The route property links an event to its function, meaning that when the event occurs, the function is triggered. The handler property points to the code that the Lambda function will execute, which in this case is bin/handleRequest, a compiled Go executable. In this example, the same function (implemented by bin/handleRequest) manages the connect, disconnect, and default events. We could have defined different functions to manage different events, but we opted for a single function for the sake of simplicity and to allow some forms of optimisation which we will discuss later. Let's go back to the events. We know what connect and disconnect are. But what is default? When a client sends a message to the server, the meaning (i.e., the semantics) of the message is embedded in the content of its payload. For instance, if the message carries a JSON payload, then the JSON could have a property action to identify the logic the message is supposed to trigger. We could then configure AWS API Gateway to react to messages with different semantics with different Lambda functions. In other words, we could attach different Lambda functions to the different values of the action field (if we follow the example above). If no match is found, then the system falls back to the default event and the Lambda function linked to it.
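For readers who want a concrete picture of the configuration described above, here is a minimal sketch of what such a serverless.yml typically looks like. The service and function names are illustrative assumptions rather than the article's actual file, and the runtime line reflects the Go runtime commonly used with the Serverless framework at the time of writing:

service: websockets-on-demand          # illustrative service name

provider:
  name: aws
  runtime: go1.x
  # Optional: route messages on a JSON "action" field, as discussed above.
  websocketsApiRouteSelectionExpression: $request.body.action

functions:
  handleRequest:                       # illustrative function name
    handler: bin/handleRequest         # the compiled Go executable mentioned in the text
    events:
      - websocket:
          route: $connect              # a client opens the WebSocket
      - websocket:
          route: $disconnect           # a client closes the WebSocket
      - websocket:
          route: $default              # any message not matched by another route

Deploying a configuration of this shape with the serverless deploy command creates the API Gateway WebSocket API and wires all three routes to the single Lambda function, matching the single-handler design the article opts for.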
https://medium.com/better-programming/websockets-on-demand-with-aws-lambda-serverless-framework-and-go-616bd7ff11c9
['Enrico Piccinin']
2020-08-13 22:46:01.352000+00:00
['Programming', 'AWS', 'API', 'Go', 'Serverless']
Our Choices Make the Internet S(i)uck
Our Choices Make the Internet S(i)uck Bow before the ‘Order of Online Commentary’ The Order of Online Commentary Why is it with the greatest information resource at the hands of Humans everywhere, the thing we share the most are pictures of our cats, our dogs, and ourselves doing things? Meaningless things. Things better forgotten and indeed will be because for 97% of your existence, being a blank slate to those memories is ultimately your fate; outside of a few potent smells or key elements, your mind will recreate a reasonable facsimile of the event, even if you have pictures… A facsimile you would be hard-pressed to be sure actually happened unless you happened to film the entire event. Most of what we do in life, isn’t particularly memorable. Is this the secret of the complete lie that is social media? Is this why people spend so much time creating the illusion of perfection they try and present to the world on social media? In a time of death and suffering, where people are overdosing on pain and fentanyl, dressed as opiates for the masses; where loneliness is poisoning people with the potency of fifteen cigarettes a day and nary a puff; where suicide among police, arbiters of summary justice and those soldiers who dispense death from above, or from below are taking their own lives proving power doesn’t always mean freedom. We are a culture obsessed with youth and beauty and dying before we can live long enough to recognize just how useless both of those things truly are. By the time we realize we should be living life instead of taking pictures of it, it’s half over and have missed the best experiences of our lives because we were filming it, not living it. Why does the Internet instead of promoting useful intellectual discourse dissolve into debates about Jordan Peterson and his pedantic, man-baby ranting or the madness of Ayn Rand and the politicians who lack the intellectual capacity to recognize they need to sit down and shut the fuck up? How is it that naked thirst pictures whether they be artistic or just crudity writ large, draw more attention than anything else? What are we hungry for that these pictures can possibly feed? Connection? Love? Sex? Meaningful understanding of our tax returns? You can’t get any of those from a picture, no matter how long you stare at it. Dark humor is the next most supported thing. It allows us to laugh at ourselves or other people without feeling cruel or mean. Dark humor is the stuff you might not say in front of your grandmother, but you heard your mom saying it to a friend on the phone when you were a kid. Dark humor is nostalgic humor when people could say what they wanted without fear of being told they were being politically incorrect. I didn’t say it was good humor. I said it was dark humor. The stuff coursing through the veins of every one of us at one time or another. Even Jesus was heard to say “Did you hear the one about the Roman?” Don’t know any dark humor? Liar. Everyone knows a ribald limerick or a joke where three guys walk into a bar… Dark humor is the secret lubrication of the Old Boys Club, the back room gossip, the occasional barbershop quip, the nail salon cattiness. Everyone knows it exists. We politely pretend and mutually agree we don’t engage in it. My Daddy Once Told Me… Aphorisms are popular on the Internet. They are predigested thought-balloons sent up to show you have read something, once. Maybe you believe in it, maybe you don’t but they are the shortest form of acceptable knowledge transmission on the Internet. 
Rumored to be the stupidest, though. Rendered insignificant to the scholarly, because they are often taken out of context removing their complete value or assigning a different value to what was said. Left can be come right. Up can become down. Cigars can become… well, cigars. Ask the Conservative Right to answer any question will require they take a snippet of wisdom from some famous person they know nothing about and wrap up their latest bullshit nugget and parade it before the public as if it was a prized animal friend/slave and like magic they become the new sages, their alternate facts indefensible, their sagacity enshrined in video, their sanity questioned back stage, at least until the next pundit destroys them with research at 8:00. Most people will only transmit the thoughts of other famous people. A few will dare to create new ideas (which generally speaking aren’t new, just modernized for the audience of the day). Occasionally a hash-tag is created which embodies this thought process and becomes a viral success. For all but the most virile, viral hash-tags die off and recur like a herpes outbreak whenever a new generation discovers they weren’t the first to think of these things and transmit them… Music videos used to be the rage, making YouTube one of the most popular places on the Internet. Before hate-rage videos, product reactions, television show reactions, whispering women, oh and secret porn became the order of the day there. Now YouTube is a Wild West of moderators trying to stay ahead of the daily decapitations, people eating shit (seriously) or sucking down Tide pods. Or setting themselves on fire — or murdering their neighbors. Videos which show only the darkest aspects of the Human experience. I guess one day, when an AI is trolling through the unedited archives of Youtube, it will see the unaltered humanity in all its gory colors and various shades of insanity. But its primary takeaway will likely be: Humans are murderous and upon occasion eat cleaning products. It’s probably for the best they’re extinct now. Tabloid Journalism with a Side of Hate Sauce, please The cleanest cesspool you can enter online is the one which pretends to be news but isn’t. In an effort to fill 24/7/365, what was once news is now what we used to call tabloid journalism. The stuff you read while you were in line at the supermarket. “Alien baby arrives in my living room via beam of light,” swears drunken starlet. “Famed movie star eats two dozen raw eggs a day in a bizarre new diet. Unexpectedly exposed to salmonella by his diet, he dies wearing a bulldog mask and paws, during a naked cuddle party with a local furry group.” “Musician has his stomach pumped and found with 20 cc’s of semen found within. He remembers nothing of the incident due to high levels of ketamine later detected in his system.” Today, this stuff is found in the news feed of all but the most respectable journalism. ProPublica might be the only place left where I can find news, that’s actually news. Watching the Hate Brigade of Blonds on FOX or the Racist Avengers like Tucker Carlson pointing out that diversity isn’t an asset, and that ravaging hordes of zombies (er… Mexicans) will be coming toward the border and only 45 can save us. Yeah. News. Gotta love it. Does this mean there aren’t good news services out there? No. 
They are simply drowned out by the sensationalist media which, in their search for the elusive metrics to convince advertisers they are relevant, these companies will put up ANYTHING if it will get them a click or two million. Being paid by advertisers to promote hatred. Thus the road to society’s personal hell is paved through Madison Avenue. Rants from the Sidelines of Life Not quite the low man on the totem pole comes the personal polemic, the rants, the screaming the invective of a few dedicated souls who pour out their hearts or whatever passes for one, in rants about those people they hate over there that they have never met. Rants about men or women and why they don’t date anymore because all Human beings are trash, in their enlightened opinion. An opinion they want you to share, of course. So you can all be miserable hating, not having, talking about, secretly envying or lusting after something you cannot or should not or are not BOLD ENOUGH to admit you want. You hate that leather daddy who walks around dressed resplendently in his outfit? No, you don’t. You secretly envy the fucking balls it takes to put on that outfit with the ass out and strut around town like its the thing to do. You want to be him but your parents sent you to Yale and told you to be a entrepreneur and you take out your frustration on your subordinates and secretly see a dominatrix with the billions you squeeze out of the fools who buy your products. Yeah. Wouldn’t it just be greener to admit you’re a freak, put on some leather and go with it? The personal polemics I see on the Internet vary widely from the smartly done, intellectual discourse on our sociopolitical climate, to discussions about racial dynamics and how we can overcome the thing that isn’t a thing called race. When I discover these smartly written things, I covet them, I follow those people because such works are balms to my soul. They make me believe in humanity again. You think you are seeing the worst the Internet has to offer. But you are wrong. You could be hearing about it too. Imagine canned radio. Available at any time, a mystery dish whose pedigree, ingredients, themes, plots, schemes, histories, both foreign and domestic remain in a superimposed state, neither alive nor dead. The podcasts you aren’t listening to could reveal the best or the worst of the Human experience, but you won’t ever know. Most people don’t have time to listen to them. They are too busy looking at cat pictures, which are easy to digest and low in calories. Frankly, one of the things I hate most about podcasts is there is no objective way to know they are going to be good except to spend the time listening to them. This means like every other media source, people are making it, archiving it, with the possibility that after the day it airs and for a couple of months afterwards, it may never be listened to again. Without a transcript, everything stored within its bits of data lie on the shores of impossibility, available but unlikely to ever reach anyone except the most dedicated. Or the clinically insane with nothing else to do. The truth of the Internet is shocking. The absolute truth is this: Humans suck. 
Without moderation, without someone willing to absorb, delete, restrict what COULD be on the Internet, including snuff films, murders, beheading, racist rants which would cause you to explode if you were exposed to them, human trafficking caught on camera, the Amazon warehouse workers weeping in corners, and any number of other atrocities, you would drown in the horrors which comprise the festering spirit of a species in its final days. A species struggling to come to grips with its better nature, but unable to let go of what it doesn’t have, doesn’t know it wants, can’t recognize it needs, won’t allow others to be themselves, and an incessant desire to control things that have nothing to do with them at all. The Order of Online Commentary is just an observation I made, recognizing people click on the things they do because they want a release from what they know is lingering online, just out of the corner of their eye. Their own innate fear of being obsolete, useless, and just one more bit of mindless traffic on the World Wide Web. It doesn’t have to be that way. We all have the power to change this. Share new things which inspire you. Which reveal the world to you in new ways. Share the love you have for people who write about their struggle and are uplifted by it. Jackie Summers always inspires me, and I secretly covet his cat pictures as well. We have the power to make the Internet better. The Dark Web will still exist. Terrible things will still be happening. But how great would it be if we all just stopped sharing Tucker Carlson and his Brigand of Hateful Blonds on the Internet. If we shared ideas which gave us hope rather than despair. What if we wrote about the things which heartened us in these dark days. Wrote about those events which meant we weren’t the worst thing ever cooked up on the petri dish that is the Earth? Change this Order of Online Commentary by sharing things which show us at our finest. This doesn’t mean we have to ignore bad news. It means we need to temper it with facts. With solutions, with rigorous debate. With intellectual capacities honed by reasoning, not by emotions stoked by over-privileged bigots who bought their way to power. Let’s remake the Internet into a lasting legacy of the species, not the final networked gasp of an organism in despair. Amplify the awesome. Share something inspiring and meaningful online every day. Thaddeus Howze works as a writer and editor for two magazines, the Good Men Project, a social men’s magazine as well as for Krypton Radio, a sci-fi enthusiast media station and website. He is also the Cognitive Dissident, living in desolation, a disillusioned, and despondent essayist who has lost all hope in the improvement of the human species. But, somehow, despite it all, he still remains defiantly hopeful humanity may still escape the Sword of Damocles. He is also a freelance journalist for Polygon.com and Panel & Frame magazine. Thaddeus is the co-founder of Futura Science Fiction Magazine and one of the founding members of the Afrosurreal Writers Workshop in Oakland.
https://ebonstorm.medium.com/our-choices-make-the-internet-s-i-uck-4d35059c93e5
['Thaddeus Howze']
2019-01-20 02:38:53.012000+00:00
['Politics', 'Society', 'Rant', 'Social Media', 'Tech']
Best Workout Songs: 20 Tracks To Help You Get In Shape
Louis Chilton It’s never been easier to listen to music while working out. Wireless headphones and online streaming have given exercisers a world of musical possibilities at their fingertips. But what are the best workout songs for an intense gym session? While everyone has their own personal preference for music that will get their muscles pumping, there are a few qualities that all good workout songs have in common: a great beat, a quick tempo and a catchy, energising hook. From sugary pop bangers to slick hip-hop hits, we’ve picked what we think are the best workout songs of the last few decades to listen to when hitting the gym. Listen to the best workout song on Spotify, and scroll down for our 20 best workout songs. Best Workout Songs: 20 Tracks To Help Get You Back In Shape 20: Nelly Furtado: ‘Promiscuous’ When Loose was released in 2006, the album quickly became Canadian pop singer Nelly Furtado’s biggest success to date, and ‘Promiscuous’ her first US №1 single. Superstar producer Timbaland provides guest vocals as one half of a couple engaged in a seductive back-and-forth. The uptempo, powerfully danceable track also topped the charts in New Zealand, Denmark and Furtado’s home country. 19: Avicii: ‘Wake Me Up’ Swedish DJ Avicii took the world by storm with this uplifting 2013 single, which opened his debut album, True. With rich, sonorous vocals from American soul singer Aloe Blacc, ‘Wake Me Up’ was a №1 hit in 22 countries. The track’s idiosyncratic mix of house music, dance-pop and folk music helped make it a nightclub staple, and earns it a place among the best workout songs. 18: blink-182: ‘All The Small Things’ Few bands inhabit the specific turn-of-the-century musical landscape as distinctly as blink-182, and their 1999 hit ‘All The Small Things’, from their breakthrough album, Enema Of The State, remains popular to this day. Written by founding member Tom DeLonge (along with singer and bassist Mark Hoppus), the catchy pop-punk anthem features lyrics about his former wife, Jenna Jenkins. 17: Maroon 5: ‘Moves Like Jagger’ The charismatic, sexually-charged dance moves of The Rolling Stones’ frontman, Mick Jagger, inspired the lyrics to this infectious dance-pop hit. Released by California-based pop band Maroon 5, with additional vocals by none other than Christina Aguilara, ‘Moves Like Jagger’ was performed for the first time on TV talent show The Voice in June 2011 and went on to become one of the highest-selling singles of all time. Maroon 5’s 2010 studio album, Hands All Over, was reissued in 2011 to include the track. 16: 50 Cent: ‘In Da Club’ 50 Cent’s snappy hip-hop banger appeared on his debut studio album, Get Rich Or Die Tryin’. Featuring effortlessly great production from genre maestro Dr Dre, paired with 50 Cent’s lyrics, ‘In Da Club’ is a timeless track that became the rapper’s first single to peak at №1 on the Billboard Hot 100. 15: Spice Girls: ‘Wannabe’ Epitomising the concept of “girl power” which came to define the image of English pop sensation Spice Girls, ‘Wannabe’ was the group’s dynamic first single. Topping the Billboard Hot 100 for four consecutive weeks, the dance-pop song became the best-selling single by a girl group in history, and remains a perennial favourite for a generation of pop fans. 14: Iggy Azalea: ‘Fancy’ Australian star Iggy Azalea became one of the biggest female rappers of all time following the release of this 2014 single, which features Charli XCX singing the chorus. 
The track has spawned several prominent cover versions, including those by The Killers, Kasabian and Ed Sheeran. None quite compare to the stylishly produced original, however, which was included on Azalea’s debut album, The New Classic. 13: Miley Cyrus: ‘Party In The USA’ Back when Miley Cyrus released ‘Party In the USA’ (described by the singer as an “all-American” song), she was still best known for her leading role in The Disney Channel’s Hannah Montana. This upbeat, electrically sunny single helped establish Cyrus as a credible popstar in her own right and remains one of her most popular tracks to this day. It even inspired a parody by popular comedy singer “Weird Al” Yankovic, entitled ‘Party In The CIA’. 12. Taio Cruz: ‘Dynamite’ With straightforward, repetitive lyrics (“I came to dance, dance, dance, dance”), the appeal of this best-selling single lies in its emphatic, danceable production, impeccably catchy chorus and slickly Auto-Tuned vocals. Produced by Benny Blanco and Dr Luke, ‘Dynamite’ is credited to five different songwriters — Cruz, singer-songwriter Bonnie McKee, Swedish hitmaker Max Martin, plus its two producers. 11: Backstreet Boys: ‘Everybody (Backstreet’s Back)’ After the successful release of their self-titled debut international album, Backstreet Boys announced their return with this uptempo hit. ‘Everybody (Backstreet’s Back)’ was included on their sophomore record and was penned by the Swedish songwriters Max Martin and Denniz PoP. The track has become a signature tune for Backstreet Boys, who are currently the best-selling boy band of all time. 10: Far East Movement: ‘Like A G6’ Breaking new ground for Asian-American music in the US (they were the first such group to achieve a №1 hit), Far East Movement teamed up with California-based hip-hop producers The Cataracs and singer Dev for this energising electro-house track. With undeniably simple, repetitive lyrics, the lasting popularity of ‘Like A G6’ hinges on its central riff — a real earworm. 9: Katy Perry: ‘I Kissed A Girl’ The hit 2008 single by Katy Perry sits among the best workout songs thanks to its thumpingly catchy chorus and slick pop production — but there’s more to it than that. The lyrics, which excitedly describe a same-sex romance, are credited with instigating a greater acceptance of LGBTQ+ themes in mainstream pop music. ‘I Kissed A Girl’ paved the way for a generation of contemporary artists to celebrate sexual diversity through song. 8: Gwen Stefani: ‘Hollaback Girl’ Gwen Stefani’s best-selling 2005 single ‘Hollaback Girl’, from her debut solo album, Love.Angel.Music.Baby., is far from your typical pop hit. Mimicking the style of a cheerleader, the track boasts simple production and a memorable, chanted chorus and drum beat. It was written as a retort to comments made by Courtney Love: “I’m not interested in being Gwen Stefani. She’s the cheerleader, and I’m out in the smoker’s shed.” 7: Flo Rida (featuring T-Pain): ‘Low’ Mail On Sunday, the debut album from Florida-based hip-hop artist Flo Rida, contained this breakthrough single, which boasts impressive (Auto-Tuned) vocals from fellow-rapper T-Pain, and crowd-pleasing club hip-hop rhythms. The track, released in 2007, became the most-downloaded single of the 00s, and was later remixed with added contributions from frequent Enrique Iglesias collaborator Pitbull. 
6: The Black Eyed Peas: ‘Pump It’ One of the singles from The Black Eyed Peas’ 2005 album, Monkey Business, ‘Pump It’ incorporates the memorable riff from Dick Dale’s 1962 recording of the Ottoman-rooted folk song ‘Misirlou’ (also famously used near the beginning of Quentin Tarantino’s 1994 movie, Pulp Fiction). The resulting mash-up, produced by will.i.am, is the perfect soundtrack for high-octane exercise. 5: Lady Gaga: ‘Just Dance’ In just over a decade, Stefani Germanotta, aka Lady Gaga, has gone from a relative unknown to a fully-fledged pop icon, with nine Grammys, over 27 million album sales and an Academy Award to her name. Her aptly titled debut single, co-written with Akon, gave everyone a taste of what was to come when it was released in 2008, with a contagious dance-pop beat that filled dancefloors the world over. 4: Rihanna: ‘Pon De Replay’ Rihanna’s debut single, ‘Pon De Replay’, was a fresh, fluid mixture of stylistic influences, from reggae to pop and R&B, with elements of dancehall. The singer, who hails from Barbados, included the track on her debut album, Music Of The Sun, which set her on the path to becoming a household name. The title translates to “play it again” in Bajan, one of the official Barbadian languages. 3: Nicki Minaj: ‘Starships’ With ‘Starships’, Nicki Minaj solidified her crossover from underground mixtape star to fully fledged mainstream hitmaker. The irrepressibly catchy track, produced by RedOne, Carl Falk and Rami Yacoub, was included on Minaj’s 2012 album, Pink Friday: Roman Reloaded. The accompanying music video was widely acclaimed and sees the rapper partying on a beach, perfectly capturing the song’s free-spirited vibes. 2: Eminem: ‘Lose Yourself’ Written for the Eminem-starring movie 8 Mile, ‘Lose Yourself’ quickly became an anthem, and one of the most successful hip-hop tracks of all time. Its churning, energising beat is elevated by Eminem’s lyrical dexterity — the track is a perfect showcase for the linguistic mastery and potent vocal delivery that earned pop provocateur Marshall Mathers his place in hip-hop history. 1: Kanye West: ‘Stronger’ Kanye West is no bad rapper by any means, but it’s his prowess as an innovating producer that made him the stuff of legend. ‘Stronger’, one of the hit singles included on his third album, Graduation, is a pop masterpiece: a brilliant, pulsating reworking of Daft Punk’s ‘Harder, Better, Faster, Stronger’. Its influence on the direction of both pop and hip-hop music in subsequent years cannot be understated — and it’s the perfect motivational song to get your blood and muscles pumping. Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/best-workout-songs-20-tracks-to-help-you-get-in-shape-98017747a4a1
['Udiscover Music']
2020-01-07 10:36:48.998000+00:00
['Lists', 'Lifestyle', 'Exercise', 'Culture', 'Music']
10 Habits to Increase Your Productivity While Working Remotely
10 Habits to Increase Your Productivity While Working Remotely You should have a set time to work, a defined workspace, and a daily routine, even though you're at home Photo by Nelly Antoniadou on Unsplash Ever since work from home began, my productivity had its ups and downs until I found the right way to do it. Since then, it has been at a constant high. During the initial days, all of us must have had mixed feelings. I first thought it would be fun for a while to wake up five minutes before the first meeting and spend all day in pajamas. And I thought avoiding traffic and travel was the best thing that could ever happen to me. But later on, when burnout kicked me pretty hard, I decided to not let it get to me. I thought it was just me, but later, when I talked to my colleagues who live with their parents, or the ones who have kids at home or a noisy neighbor who loves to yell all the time, I realized what the real deal was. Being prone to getting distracted or procrastinating has also become extremely common, as we are all at home in our own space. We're the masters of our own time now, as there's no one around us that we're scared will judge us. I decided to figure out what was actually going wrong. Following these few simple habits will give you enough focus and concentrated time to work. I learned these habits the hard way because, honestly, if remote working is not planned, there are at least a thousand ways it can go seriously wrong, sucking up your time and energy.
https://medium.com/better-programming/10-habits-to-increase-your-productivity-while-working-remotely-5c21f7a466be
['Harsha Vardhan']
2020-11-23 17:09:30.266000+00:00
['Programming', 'Self Improvement', 'Productivity', 'Software Development', 'Web Development']
Analyzing a Time Series Real Estate Dataset with GridDB and Java | GridDB: Open Source Time Series Database for IoT
Analyzing a Time Series Real Estate Dataset with GridDB and Java | GridDB: Open Source Time Series Database for IoT Israel Imru

Dataset and Environment Setup
In this article we will discuss how to analyze and ingest a time series dataset with GridDB and Java. The data we will be analyzing is an open dataset that contains real estate property sales details. You can download the dataset from this link. First of all, let's take a look at the structure of the dataset; you can get a proper idea of it by referring to the following table. The reason we have used GridDB in this implementation is that it has unique features that make it ideal for time series data. This article describes time series data and GridDB clearly. Before starting the implementation, you need to set up the GridDB server with a Java client. If you haven't set it up yet, you can follow the quick start guide shown here. In this article, we will not be focusing on how to connect, get a GridStore and store data. We will be mainly focusing on analyzing the dataset. Now let's move to the implementation.

Declare column names and get a GridStore instance
First we need to declare the attributes of the dataset as a static inner class as follows.

static class Sales {
    @RowKey Date salesdate;
    int MA;
    String type;
    int bedrooms;
}

According to the dataset, there should be four attributes. Here we have used the same names as the column names for readability. The @RowKey annotation is used to identify the row key of the container. Next, it is required to get the GridStore instance and create a time series. To get a GridStore, set a number of properties such as the notification address, notification port, user name and password (a minimal connection sketch appears at the end of this section).

Read the dataset and store it
Now, the data should be read from the dataset and preprocessed in order to store it in the database. In our dataset, the date is in the format "dd/MM/yyyy", but the date should be stored as a timestamp. Since this dataset doesn't contain any time of day, let's set the time to "00:00". The other column values can be stored without any change.

File csvFile = new File("ma_lga_12345.csv");
Scanner sc = new Scanner(csvFile);
String header = sc.nextLine();                  // skip the CSV header line
while (sc.hasNextLine()) {
    String scData = sc.nextLine();
    String[] dataList = scData.split(",");
    String salesdate = dataList[0];
    String MA = dataList[1];
    String type = dataList[2];
    String bedrooms = dataList[3];
    Sales sales = new Sales();
    sales.salesdate = convertDateToTimeStamp(salesdate);
    sales.MA = Integer.parseInt(MA);
    sales.type = type;
    sales.bedrooms = Integer.parseInt(bedrooms);
    ts.append(sales);                           // append the row to the time series
}

The code above reads the CSV file line by line, extracts the relevant fields, creates a Sales object and appends it to the database. The 'convertDateToTimeStamp' method is used to convert the sales date in the dataset to a timestamp as follows.

static Date convertDateToTimeStamp(String salesdate) throws ParseException {
    String OLD_FORMAT = "dd/MM/yyyy";
    String NEW_FORMAT = "yyyy/MM/dd";
    SimpleDateFormat format = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
    SimpleDateFormat sdf = new SimpleDateFormat(OLD_FORMAT);
    Date d = sdf.parse(salesdate);
    sdf.applyPattern(NEW_FORMAT);
    String newDateString = sdf.format(d);
    String datetimes = newDateString + " 00:00:00";   // no time of day in the dataset, so use midnight
    Date dates = format.parse(datetimes);
    long dt = dates.getTime();
    return new Date(dt);
}

We have used the SimpleDateFormat class in Java to convert the date into a timestamp. By now we have stored all the data we need in the database.
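Since the snippets in this article use a ts variable without showing where it comes from, here is a minimal connection sketch, assuming a default multicast setup; the notification address, port, cluster name and credentials below are placeholder assumptions that must match your own GridDB installation:

import java.util.Properties;
import com.toshiba.mwcloud.gs.GridStore;
import com.toshiba.mwcloud.gs.GridStoreFactory;
import com.toshiba.mwcloud.gs.TimeSeries;

// Connection properties; all values here are placeholders for a default multicast setup.
Properties props = new Properties();
props.setProperty("notificationAddress", "239.0.0.1");
props.setProperty("notificationPort", "31999");
props.setProperty("clusterName", "defaultCluster");
props.setProperty("user", "admin");
props.setProperty("password", "admin");

// Get the store and create (or open) the time series container.
// The container name matches the TQL queries used later (sales01).
GridStore store = GridStoreFactory.getInstance().getGridStore(props);
TimeSeries<Sales> ts = store.putTimeSeries("sales01", Sales.class);

With the store and the ts time series in hand, the append, query and aggregate calls shown in this article can be used as written.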
Analyzing data
From this point, we can turn our attention to analyzing the dataset we just prepared. The data analysis techniques applied here to real estate sales data can be generalized to several other problem scenarios as well.

Retrieving data in a given time range
First, let's see how we can extract a specified range of time series elements. As an example, let's take 4 months as the time range. So what we need to do is extract the data from four months before the current date up to the current timestamp. Read this code and understand the implementation of the logic as well as the syntax.

Date now = TimestampUtils.current();
Date before = TimestampUtils.add(now, -4, TimeUnit.MONTH);
RowSet<Sales> rs = ts.query(before, now).fetch();
while (rs.hasNext()) {
    Sales sales = rs.next();
    System.out.println(
        "Sales Date=" + TimestampUtils.format(sales.salesdate)
        + " MA=" + sales.MA
        + " Type=" + sales.type
        + " Bedrooms=" + sales.bedrooms);
}

In the first line of this code, we take the current timestamp; for that, we use the TimestampUtils.current() method. Then we need to subtract four months from the current time, and the TimestampUtils.add method can be used for that. If you want to add a particular amount of time to the current time instead, you only need to remove the "-" sign in front of the amount in the call. If you need to change the time unit, you simply mention it as TimeUnit.MONTH, TimeUnit.HOUR or any other time unit, according to the range you need to add or subtract. Now we have two timestamps: the first one is the current time, and the second one is the timestamp four months back. Querying the data in this time range is quite easy.

RowSet<Sales> rs = ts.query(before, now).fetch();

This gives you all the rows of data within the specified time range. You may get more than one row as the output of the above code, so you should read each data row one by one as implemented in the code. Once you get a row and extract its values into variables, you can either print them or apply any other operation to the data.

Query the database
Now, let's discuss how to write a query which includes multiple conditions on attributes, much like SQL queries, and extract data. Assume that a user needs to get the first 20 unit-type house records, in decreasing order of "MA" value, whose MA is smaller than $50,000. First let's see how we can write the query for this scenario.

select * from sales01 where type='unit' and MA < 50000 order by MA desc limit 20

If you are familiar with SQL, you may know that this is how we would write SQL queries for this scenario. It's important to mention that although this query and the syntax of most other queries are similar to SQL, there are some differences between the query language used in GridDB (TQL) and SQL. Let's see how to get an output from the query we just wrote.

Query<Sales> query = ts.query("select * from sales01"
    + " where type='unit' and MA < 50000 order by MA desc limit 20");
RowSet<Sales> res = query.fetch();

The code above shows you how to get the result of any query. Similar to the previous example, you need to check whether there are any result rows remaining and apply the relevant operations to the extracted data.
We are going to compute the average "MA" value over a four-month window: from two months before the given date to two months after it. Assume that "salesTime" is a timestamp given by the user.

Date start = TimestampUtils.add(salesTime, -2, TimeUnit.MONTH);
Date end = TimestampUtils.add(salesTime, 2, TimeUnit.MONTH);
AggregationResult avg = ts.aggregate(start, end, "MA", Aggregation.AVERAGE);
System.out.println("avg=" + avg.getDouble());

As we discussed previously, the starting time and ending time can be obtained with the TimestampUtils.add method. Now we need to get the average value of "MA" within the specified time range. For that, we can use the following aggregation call.

AggregationResult avg = ts.aggregate(start, end, "MA", Aggregation.AVERAGE);

Once we have the average value in the 'avg' variable, we can apply suitable operations to it, such as storing the value in the database, passing it to another function as a parameter, or simply printing the average to standard output.

Conclusion
Great, that's pretty much it. In this article we discussed some simple methods for analyzing real estate sales data in Java. We used GridDB for manipulating this time series dataset. Try to get more familiar with these technologies, add more complex methods to analyze the data and improve the functionality further.
https://medium.com/griddb/analyzing-a-time-series-real-estate-dataset-with-griddb-and-java-griddb-open-source-time-series-7fe0b3341ea8
['Israel Imru']
2020-11-24 19:46:12.556000+00:00
['Java', 'Database', 'Data Analysis', 'Time Serie', 'Griddb']
How Teaching Kids CS Made Me a Better Programmer
How Teaching Kids CS Made Me a Better Programmer If you can explain something to a 6-year-old, then you truly understand it Photo by the author (made using Canva). “The mediocre teacher tells. The good teacher explains. The superior teacher demonstrates. The great teacher inspires.“ — William Arthur Ward If you want to really learn something, try teaching it to someone else because only then will you be sure that you fully understand it. That statement is entirely true. It’s even more accurate when you try to teach some relatively complex topics to six-, seven-, or eight-year-olds. Then, you don’t only need to understand the topic fully. You need to attempt to simplify it enough that kids will understand it without making it sound small and unimportant. I have been teaching kids computer science for five years now and have been through more than 50 students. I can say for sure that my computer science and programming knowledge is now far superior to what it was five years ago. Teaching kids helped me grow as a person and improve my technical skills in a way I never thought about when I first started teaching. Here are five ways that teaching CS to kids helped me improve as a programmer.
https://medium.com/better-programming/how-teaching-kids-cs-made-me-a-better-programmer-37030dd2d3e4
['Sara A. Metwalli']
2020-08-04 14:07:09.361000+00:00
['Programming', 'Software Engineering', 'Software Development', 'Computer Science', 'Education']
Ella Fitzgerald’s Best Live Album Says A Lot About Her
Ella in Berlin: Mack the Knife is easily one of the singer’s most well known albums, and arguably one of her best. It was recorded live in Berlin’s Deutschlandhalle in February of 1960, while she and her jazz combo were on tour in Europe. At this point in her career, Ella was on fire. She had been performing for nearly three decades, and had improved immensely since her rocky beginning in the industry. She was no longer the shy teenager looking for her big break in a talent contest — being discovered by the Chick Webb orchestra had allowed her to polish her appearance and stage presence. By the time Ella in Berlin was recorded, she had been working long enough to reach the point of equilibrium that professional musicians strive for in their careers: experienced enough to be collected and confident in front of an audience, but not so overworked that exhaustion or age could take a toll on any musical capabilities. She had worked with all types of ensembles and musicians, from large orchestras to intimate jazz combos, and had already recorded dozens of albums, including her very popular Songbook series that focused on different composers’ works. Her Europe tour was the best place to record a new live album — not only did it showcase her world renown and her consistency singing on stage instead of in a recording studio, but it also captured the audience’s exhilarated response to her talent. The original LP release of Ella in Berlin was only nine tracks long. The CD re-release in the 1990s included two additional songs from the show that had been left out, as well as two others (“Love for Sale” and “Just One of Those Things”) from a concert a few years earlier that were mistakenly thought to be from the 1960 performance. The final set list features a lot of tried-and-true classics that she had been known to sing many times before — George Gershwin’s “Summertime”, for example, was a staple for her. But although better-known versions might exist elsewhere on her studio albums, the tunes seem almost like hidden gems when they appear on a live album. From the slight rasp in her voice in one of the choruses of “Love is Here To Stay” to her breathless laughter as the audience applauds for “Too Darn Hot”, the live performance aspect adds a raw and powerful dimension to her singing that is not otherwise heard in a more sterile studio environment. One of the most famous recordings from that night in Berlin is her rendition of “Mack the Knife” — and not just because it was featured in the title of the album. For context, it’s important to know that “Mack the Knife” was originally a ballad called “Die Moritat von Mackie Messer”, written for the satirical German opera Die Dreigroschenoper. For a vocalist, the song certainly is an undertaking — even after being translated into English and performed as a jazz tune, a typical arrangement of the piece usually features about five different key changes, a number of unique verses, and no repeated chorus. Crooners like Frank Sinatra and Bobby Darin had performed it with great success, but as Ella starts to introduce the song to the audience in the recording, she notes that it hadn’t been sung by a woman yet. Right before she begins to sing, she also makes another noteworthy comment: “Since [the song]’s so popular, we’d like to try and do it for you,” she says coyly. “We hope we remember all of the words.” This seemingly-offhanded remark essentially becomes a moment of foreshadowing as the song unfolds. 
Apparently, the band knew when Ella called the tune out that she didn’t know the words very well — they had not properly rehearsed it with her, but she requested it anyways while they were out on stage, and they dutifully followed her lead. To her credit, she successfully makes it through the first two verses without issue. On the third, however, she begins to falter, singing somewhat more hesitantly and slightly behind the chord changes. By the time the fourth verse rolls around, she is completely lost, but she continues to sing along with the melody: “Oh what’s the next chorus, To this song, now? This is the one, now, I don’t know…” After a little floundering, she does manage to remember a few more of the correct lyrics, but she soon abandons them entirely in favor of playfully making up her own. In a later interview about the Berlin performance, her bassist Wilfred Middlebrooks said that in that moment, he knew right away what she would do next. “When Ella got lost in a swing number, she usually fell back on her Louis imitation, which was a sure fire a crowd pleaser. And sure enough, about that time [she forgot the words], here came Louis.” The quick combination of goofy new lyrics and a spot-on Louis Armstrong impression kills. What could have been a rather embarrassing moment is instead met with applause and further admiration — and in a strange sort of way, she is perfectly in her element. The ability to improvise at a moment’s notice is a core principle of jazz as a genre, and she demonstrates her mastery of it to her listeners not only in her solos, but in moments like this. By employing a tactic like her Louis impression, which she had carefully rehearsed to keep a performance on track, she protects herself from the spontaneous chaos of making mistake and gives herself a moment to recover. The song stays together, the audience is entertained, and Ella is victorious once again. As incredible a performance as “Mack the Knife” is, the next and final track on the album was truly the jewel in the crown of the Berlin show. With no introduction, Ella transitions from “Mack the Knife” directly into Morgan Lewis and Nancy Hamilton’s jazz standard “How High the Moon”. She only has time to finish a single verse before a sudden drum break electrifies the atmosphere and cranks up the pace. With the new tempo hovering around a breathless 300 beats per minute, she sings through one more chorus before beginning what would eventually become one of the most iconic solos in jazz history. In just under six minutes, she manages to quote Charlie Parker’s famous “Ornithology” solo, Harold Arlen’s “Stormy Weather”, her own famous version of “A-Tisket, A-Tasket”, Irving Berlin’s “Heat Wave”, a portion of the William Tell Overture bugle call, and Jerome Kern’s “Smoke Gets in Your Eyes” — which she jokingly changes to “Sweat Gets in My Eyes” — while still weaving in her own improvised material. Towards the end, the band drops out to let her sing alone with just the hi-hat for accompaniment, and their absence is barely noticeable. The solo is positively earth-shattering. She finishes on a high B flat — not because she has anything left to prove to her listeners, but simply because she can. The one-two punch effect that Ella creates by following “Mack the Knife” with “How High the Moon” is nothing short of spectacular. Any other artist might not have the stamina, after a nearly hour-long set, to deliver such an energetic finale, but Ella is a powerhouse. 
In spite of stage fright and her naturally quiet disposition, she proved herself in complete command of any stage, and the Deutschlandhalle stage was no exception. Her mistakes win Grammys; her victories bring the house down. The takeaway for both her audience in 1960 and her listeners today remains the same: Ella Fitzgerald is one of a kind.
https://utzig.medium.com/ella-fitzgeralds-best-live-album-says-a-lot-about-her-3ad2ba72308d
['Lisa Utzig']
2020-10-27 03:22:44.932000+00:00
['Singer', 'Jazz', 'Music', 'Ella Fitzgerald']
Create an Auto Saving React Input Component
A better UX without too much heavy lifting
When dealing with long forms (think medical forms, profiles, etc.), it's a huge UX improvement to allow fields to auto save as a user fills them out. Auto saving fields on blur sounds like a lot of extra work over a single submit button, eh? Not to worry, we are going to build one in 10 minutes using the Semantic UI React component library. If you want to skip this article entirely, here is the code sandbox: The field will save on blur

Our Base Text Field
Okay, before we can think about auto saving, let's just create our base Text Field component, or in other words, save hours by using the Input component from Semantic UI: Semantic UI has a bunch of icons we can pass in by name, which will make displaying our saving / saved / error states a lot easier. For context, here is the main application code: Only two things are noteworthy here: We created a mock save function, which is just a promise that resolves in 2 seconds with the new value. Every prop except onSave is a part of the Semantic UI API for Input components; nothing custom there. We will make use of onSave later.

What Do we Need to Track?
When saving individual fields, there is more to keep track of. Here are the most important questions: How do we do the actual saving? How do we indicate a field is saving? How do we indicate a successful / failed save?

How do we do the Actual Saving?
We probably want to save the field on blur. We can pass an async onSave function that is responsible for the details of "saving", but our component isn't really concerned with how data is saved, just the fact that it is saving. While it may seem like a lot of code, everything is pretty simple: We use useState to keep track of whether we are saving or not. We maintain a reference to the last entered value so we can compare it to the current value; if they are the same, there is no need to save. To answer our "how do we indicate a field is saving?" question, Semantic UI already has a loading indicator that we can leverage. We show it when the field has either been passed a "loading" prop manually, or we are currently saving the field. The problem with auto-saving fields is that user input can get overwritten after their data saves and the input field refreshes with the saved data. To avoid this, we simply disable the field when loading. This is where the magic happens. We attach an async handler to onBlur. In the handler we first check if the value has changed. If it has, we update our "saving" state to true and attempt to save if we were passed an onSave function. Once the onSave promise resolves, we update the last saved value and reset our saving state.

How do we Indicate a Successful / Failed Save?
So we may be able to show when a field is saving or not, but it's not enough. We want to show that a field has successfully saved (or failed to save). So this is pretty much our full component! Here are the additions we made: New state was introduced to keep track of when a field is saved and when there is a save error. For a better UX, we swap our icon for a green check mark when we successfully save, and a red warning when a save fails. Semantic UI gives us the flexibility to set our icon colors manually, so we change the color to correspond to the current save state. Now that we are dealing with more than field validation errors, we manually pass either our regular error prop (which is passed to our component from a parent) or our internally regulated saveError variable, if either exists.
Another UX improvement is making sure we remove the success icon when a user makes a change to the input. That way they understand the current changes aren't saved yet. When we hit an error calling onSave (which we assume returns a rejected promise if saving fails), we simply update our saveError state. You can be a lot more flexible here (like passing along the actual message from the API); I kept it simple for this example.

Moral of the Story
Obviously there are a bunch of areas where this component can be improved, but as far as getting something up and running fast goes, this can probably be done in 10–20 minutes thanks to Semantic UI React. You can use any other component library (Material UI, Ant, etc.). The point is that using a component library will probably save you hours of development in cases like these. We were able to show icons, loading indicators, disabled & error states, and more just by passing simple props. For reference, a condensed sketch of the component described above appears at the end of this piece.

Message From the Author
Hey you… Yeah you. I know times are tough. Bored? Stressed? Going nuts at home? Want to dive deeper into React stuff? Have buddies looking to learn something new about React & UI development? I've got plenty of articles for you and your pals. Feel free to share and follow because… well… there isn't much else for me to do besides check my Medium stats and binge La Casa de Papel.
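To tie the walkthrough together, here is a minimal sketch of an auto-saving input along the lines described above. It is an illustration built on the public Input and Icon components of semantic-ui-react rather than the author's exact sandbox code, and details such as the icon names and the shape of the error prop are assumptions:

import React, { useState, useRef } from "react";
import { Input, Icon } from "semantic-ui-react";

// Auto-saving text field: saves on blur via the async `onSave` prop,
// which is expected to reject (throw) when saving fails.
function AutoSaveTextField({ onSave, error, loading, ...inputProps }) {
  const [saving, setSaving] = useState(false);
  const [saved, setSaved] = useState(false);
  const [saveError, setSaveError] = useState(null);
  const lastSavedValue = useRef(inputProps.value);

  const handleBlur = async (event) => {
    const value = event.target.value;
    if (value === lastSavedValue.current || !onSave) return; // nothing new to save
    setSaving(true);
    setSaveError(null);
    try {
      await onSave(value);
      lastSavedValue.current = value;
      setSaved(true);
    } catch (e) {
      setSaveError("Could not save this field");
    } finally {
      setSaving(false);
    }
  };

  const handleChange = (event, data) => {
    setSaved(false); // current edits are not persisted yet, so drop the success icon
    if (inputProps.onChange) inputProps.onChange(event, data);
  };

  // Swap the icon to reflect the current save state.
  let icon;
  if (saved) icon = <Icon name="check" color="green" />;
  if (error || saveError) icon = <Icon name="warning circle" color="red" />;

  return (
    <Input
      {...inputProps}
      icon={icon}
      loading={loading || saving}
      disabled={loading || saving}   // avoid user input being overwritten mid-save
      error={Boolean(error || saveError)}
      onBlur={handleBlur}
      onChange={handleChange}
    />
  );
}

export default AutoSaveTextField;

A parent can then use it like a regular input, for example <AutoSaveTextField placeholder="First name" onSave={(value) => mockSave(value)} />, where mockSave stands in for the two-second mock save function described earlier in the article.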
https://medium.com/javascript-in-plain-english/create-an-auto-saving-react-input-component-in-10-minutes-2359d84dc29b
['Kris Guzman']
2020-04-27 16:04:25.132000+00:00
['JavaScript', 'Web Development', 'React', 'Technology', 'Programming']