How to Train Your Artist.
Well, I don’t want to torture you here anymore, so here comes the solution: When? The questioned painting was created 1 hour ago. Where? Here is its whereabouts (not a museum, but a folder on my hard disk). Who? And the author of this painting is — I bet you know it already, as a frequent reader of my blog — a Deep Learned Neural Network. Artificial Intelligence. Art-i-ficial, as in “making art”? Well, to be honest, the whole preparation work was done by _C0D32_ over at reddit. Here is what he did: he trained StyleGAN on around 24k images (from a dataset on Kaggle). And then he thankfully provided a Notebook on Google Colab — to try it out. Now it’s up to us — we can let AI generate ART. Changing various parameters in the notebook produces very different styles, images, atmospheres. Here, for example, we see an Impressionist landscape in spring. Trees are turning green and are reflected in the calm lake. Far behind, we can suspect a mountain range. The style is even Pointillist, with roundish strokes, colourful points of interest. And then… this graphic architectonic image. Is it ruins? Is it a new building aiming high? Something between Gothic arcs and a modernist skyscraper — and in some way very herbal. Urban flora? Florist urbanity? But why so grey? And where are all the people? Are we missing a crucial part of the story? And here you see a depiction of an aristocratic family — in the best style of Diego Velázquez. The faces are distorted in a Francis Bacon way. In the wall opening behind the group, a path leads to the next castle. And what about this Spanish torero, caught shortly before the bull runs him down? You can already see the bull’s hoofs behind the gracious, dandy-like figure of the pathetic Matador, who doesn’t yet know that his fate is about to change. OK, and now something Not Safe For Work may appear, so be prepared for the obscenity of an abstract painting. I don’t know what the characters of this painting are doing. Probably I’m overinterpreting it.
Something between Dalí, Max Ernst, Ballmer — bodies without indications. Skins. Grins. And here is a mysterious cabin in the woods. Something happens here. Are these people sitting out on the porch? Or rather bedclothes being ventilated outside, in the fresh morning air? As you see, a variety of styles, motifs and qualities is generated here. Learned from Art History. But unlike in an Art School, AI has to fend for itself. No historical background. No style insights. No academic intertextualities, no scholastic connections. AI has to fight through the vastness of art by itself, a lonesome Visitor catapulted into an Alien World and looking for orientation. What a great achievement! I’ll follow its way across World Art — in our FridAIs as well (now, after a two-week pause). And what’s the prize, you will ask? Follow this link and you will be able to produce your own AI-generated art! Just don’t forget to save it to your Google Drive.
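The knobs you can turn in such a notebook mostly amount to sampling and blending latent vectors before they are fed to the generator. As a rough, framework-free sketch (pure Python, no actual StyleGAN weights involved — the 512-dimensional latent size is the one StyleGAN uses, everything else is illustrative), here is the spherical interpolation trick behind those smooth morphs between two generated paintings:

```python
import math
import random

def slerp(z1, z2, t):
    """Spherical interpolation between two latent vectors.

    Walking t from 0 to 1 morphs one generated image smoothly into
    another when the latents are fed to a generator such as StyleGAN.
    """
    dot = sum(a * b for a, b in zip(z1, z2))
    n1 = math.sqrt(sum(a * a for a in z1))
    n2 = math.sqrt(sum(b * b for b in z2))
    # Angle between the two latents, clamped for numerical safety.
    omega = math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
    if math.isclose(omega, 0.0):
        return list(z1)
    s = math.sin(omega)
    return [
        (math.sin((1 - t) * omega) * a + math.sin(t * omega) * b) / s
        for a, b in zip(z1, z2)
    ]

random.seed(0)
z_a = [random.gauss(0, 1) for _ in range(512)]  # StyleGAN latents are 512-dim
z_b = [random.gauss(0, 1) for _ in range(512)]
midpoint = slerp(z_a, z_b, 0.5)  # "halfway" between two paintings
```

Each intermediate latent, run through the generator, yields one frame of the style-to-style transition.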
https://medium.com/merzazine/how-to-train-your-artist-cb8f188787b5
['Vlad Alex', 'Merzmensch']
2019-04-11 15:05:13.992000+00:00
['AI', 'Artificial Intelligence', 'Art History', 'Ai And Creativity', 'Art']
Shine
About haiku A haiku is a traditional form of Japanese poetry. It’s made up of 3 unrhymed lines with 17 syllables, 5–7–5. Haiku forces us to be creative and concise to convey the essence of a specific moment in only a few words. This efficiency with words can help us share messages more powerfully in other types of writing. It’s also a great way to kickstart a writing session. Common themes written in haiku are nature and human nature, though other themes can also be explored.
https://medium.com/house-of-haiku/shine-c4250ea1823f
['Cynthia Marinakos']
2020-07-07 11:01:01.280000+00:00
['Creativity', 'Poetry', 'Haiku', 'Nature', 'Writing']
Pulse: The Telegraph journey towards real-time analytics
Industry Outlook Technology enables publishers to measure the impact that a piece of content is having as soon as it becomes public. These days, reacting to this data is a vital part of promoting quality journalism in the sea of online articles competing for our attention. The real-time understanding of how a story is performing can significantly help to improve the customer experience on both our website and mobile apps. It’s important to know what our registrants and subscribers want to read and how we can deliver articles that are relevant to our audience. The Challenge Under this premise, in 2017 the data team was challenged to build a real-time dashboard, to display in the newsroom (pictured below), to show which articles were driving registrations and subscriptions. The first step was to identify a reliable data source on which we could build our analytics. The Telegraph’s entire website ran on Adobe Experience Manager and for this reason, we decided to consume the Adobe Livestream API in order to ingest behavioural information as soon as it was collected. Unfortunately, since no post-processing was applied to those data sets, filtering out the noise and retrieving only relevant records posed a challenge. The First Iteration We took an Agile approach and built a simple proof of concept (PoC) to establish that from this specific data source it was possible to extract meaningful analytics. We came up with the following design. A poller consumes the live stream of data and without any transformation writes record by record in Pub/Sub. Then a Dataflow real-time pipeline consumes the queue and filters out irrelevant records. The rest of the data are cleaned and enriched before being uploaded into Elasticsearch. One of the beauties of Dataflow is how clean the data transformation process looks once the code is deployed on Google Cloud. A flow diagram is automatically generated that shows the different logical steps implemented in the pipeline. 
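A hedged sketch of what the filter-and-enrich step of such a pipeline might do, expressed as plain Python functions (all field names here are hypothetical, not The Telegraph's actual schema); in the real system these would sit inside Dataflow transforms between the Pub/Sub read and the Elasticsearch write:

```python
from datetime import datetime, timezone

def is_relevant(record: dict) -> bool:
    """Filter out noise: keep only page-view hits that carry an article id."""
    return record.get("event_type") == "page_view" and bool(record.get("article_id"))

def enrich(record: dict) -> dict:
    """Clean and enrich a raw hit before it is indexed into Elasticsearch."""
    return {
        "article_id": record["article_id"],
        "section": record.get("section", "unknown"),
        "is_subscriber": record.get("user_tier") == "subscriber",
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }

# A tiny batch of raw hits, as they might arrive from the live stream.
raw = [
    {"event_type": "page_view", "article_id": "a1",
     "section": "news", "user_tier": "subscriber"},
    {"event_type": "heartbeat"},  # noise: wrong event type, no article id
]
clean = [enrich(r) for r in raw if is_relevant(r)]
```

The same two functions map directly onto a `Filter` and a `Map` step in a Beam/Dataflow pipeline, which is why the auto-generated flow diagram stays so readable.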
In this way, it becomes easy to identify bottlenecks and errors in your process. Also, all the generated logs are automatically available in Stackdriver, which monitors the application and raises alerts. In Elasticsearch only a rolling window of 8 days of data is kept, while the full history is available in real time in BigQuery, with the possibility of plugging DataStudio dashboards on top. There are multiple reasons why we decided to adopt Elasticsearch for this specific use case: knowledge of the technology (Elasticsearch had already been used successfully at The Telegraph in other solutions and we had in-house expertise); the possibility of using Kibana to quickly deliver a dashboard without involving any front-end developer; horizontal scalability; and low response time for the type of queries that we wanted to run. Less than a couple of months after we started the proof of concept, a basic Kibana dashboard was ready. Figures in the dashboard above are purely illustrative. The solution went live in September 2017 and, despite some limitations, it was really well received by the newsroom. The Second Iteration By the beginning of 2018, the product had already been tested for a few months and most of the stability issues intrinsically linked to real-time data processing were solved. Thanks to the high scalability of Pub/Sub and Dataflow, handling spikes in the stream of requests on our website had become trivial. We decided at that point to move further and build our own bespoke dashboard on top of the same backend system. A few months later, a second version of the dashboard with richer information was released. Figures in the dashboard above are purely illustrative. During this second iteration, we decided to remove Kibana and decouple the visualisation from the storage through an API developed in NodeJS and using GraphQL.
This was actually one of the first times that we played with GraphQL at The Telegraph, and it was a pleasant surprise, since it allowed much more flexibility. We moved away from a rigid contract with multiple endpoints in favour of a simpler approach with fewer endpoints and a clear schema, allowing us to extract and filter data from Elasticsearch in a cleaner way. Below is the updated design. Pulse After this second release, we decided to undertake a new challenge. It’s one thing to have a dashboard displayed on a big wall that doesn’t allow much interaction, but quite another to have a product that allows users to conduct real-time exploration of how our content is performing. The idea of “Pulse” was born. The PoC phase was officially terminated and we started to consider Pulse a product with a well-defined roadmap. A new team led by our Head of Data was created, with the right mix of UX designers, data engineers and frontend developers. We ran a few workshops with different business users to understand the needs and priorities of the newsroom. After a couple of weeks, the first designs were ready. Once these sessions were concluded, it was clear which metrics and dimensions were relevant to measure the performance of our articles. Luckily, from a backend point of view, the changes to the design were minimal, since we were starting from an already strong base, but most of the features requested would require us to massively extend the solution. Once we finished collecting the requirements and had a clear understanding of what we were trying to achieve, we updated the architecture as shown below. In this third phase, it was no longer possible to rely on a single data source to serve the data. Next to the Adobe live stream we added Chartbeat, the post-processed Adobe Hitlog, and the Unified Content Model (UCM, an article storage platform developed in house by The Telegraph Engineering team).
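The shape of such a GraphQL contract might look like the query below (a hypothetical schema invented for illustration, not The Telegraph's actual API): the client names exactly the fields it needs, and the resolver translates that into an Elasticsearch query, so no endpoint ever over-fetches.

```python
# A hypothetical GraphQL query for Pulse-style article metrics.
# Only the requested fields travel over the wire, unlike a fixed
# REST endpoint that returns the full document every time.
PULSE_QUERY = """
query TopArticles($section: String!, $limit: Int!) {
  articles(section: $section, limit: $limit) {
    id
    headline
    pageViews
    registrations
    subscriptions
  }
}
"""

# Variables are sent alongside the query in the POST body.
variables = {"section": "news", "limit": 10}
```

Adding a new metric to the dashboard then means extending one schema type rather than versioning a new endpoint, which is the flexibility gain described above.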
The new integration with Chartbeat was developed in order to offer metrics that were not possible to track through Adobe Analytics. An example of this is the average engaged time on a page for a specific audience. The post-processed Adobe Hitlog was added in order to offer a historical view of how our content was performing. Aside from the new data sources, further development work was necessary. The API used to serve the dashboard was rebuilt from scratch using Python and GraphQL, to conform with the stack of technologies that we normally use. A new Redis cache was introduced to improve the response time and offer a smooth experience to the end user. The real-time data pipeline that consumes Adobe live stream data was updated to include the new metrics and offer better data cleansing. The need to also classify our articles, through a set of tags, in near real time led to a hybrid design where real-time, near-real-time and batch data pipelines coexist. For this purpose, a tags data pipeline was developed. This specific pipeline runs every N minutes and, for each article published on the day, checks whether a set of conditions is satisfied in order to classify our content accordingly. The frontend was built from scratch as well. Since we didn’t have anything in place yet, our frontend team started from a blank canvas and in record time developed a responsive dashboard that lets our users explore the statistics of each article or section under a set of predefined filters. Figures in the dashboard above are purely illustrative. Pulse went live at the beginning of 2019 and is now among the tools constantly used by our journalists. What’s Next What will be the next step? This time we are definitely going big! In the coming weeks (from the time of writing), we will release Pulse XL to replace the old editorial dashboard.
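The tags pipeline described above can be sketched roughly like this (the thresholds, tag names and metric fields are invented for illustration; the real pipeline would load its rules from configuration and run on a schedule every N minutes):

```python
def classify(article: dict) -> list:
    """Assign near-real-time tags to an article based on simple rules.

    Each rule checks one condition over the article's current metrics;
    all thresholds here are illustrative placeholders.
    """
    tags = []
    if article.get("page_views", 0) >= 10_000:
        tags.append("trending")
    if article.get("registrations", 0) >= 50:
        tags.append("registration-driver")
    if article.get("avg_engaged_seconds", 0) >= 60:
        tags.append("high-engagement")
    return tags

# One scheduled run: classify every article published today.
todays_articles = [
    {"id": "a1", "page_views": 25_000, "registrations": 80, "avg_engaged_seconds": 45},
    {"id": "a2", "page_views": 900, "registrations": 2, "avg_engaged_seconds": 75},
]
tagged = {a["id"]: classify(a) for a in todays_articles}
```

Because the rules only need metrics aggregated over the last few minutes, this step can run as a cheap micro-batch next to the always-on streaming pipeline, which is what makes the hybrid design work.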
This will introduce a historical data view and geographic information in our main dashboard, and will also unify all our real-time dashboards under the same product. Regardless of whether you are on mobile, on desktop or in The Telegraph newsroom, Pulse will provide support with reliable figures for our strategy. Pulse has changed our newspaper’s attitude to data; we are placing more confidence and trust in the information captured about our content. Put simply, we have one of the best available pieces of technology for capturing and analysing the stories that we publish in real time. Pulse flags segments, such as engaged registered visitors, then prompts journalists on how to convert them to subscribers in real time. This will be customisable for every team across editorial to ensure all content is achieving its purpose and contributing to The Telegraph’s broader strategy. Stefano Solimito is a Principal Data Engineer at The Telegraph.
https://medium.com/the-telegraph-engineering/pulse-the-telegraph-journey-towards-real-time-analytics-cd08c1078fa6
['Stefano Solimito']
2019-03-14 11:51:54.817000+00:00
['Google Cloud Platform', 'Analytics', 'Data Engineering', 'Data Visualization', 'Big Data']
AWS StepFunctions: Fine Tuning Serverless Workflows Using the Result Selector
Introduction In this example, I will be talking about how you can improve your workflows when using Step Functions by customising the results returned from AWS Lambda at each step. AWS released a way to do this in August 2020 using the ResultSelector; read more about this release on the AWS blog here. 🎉 There is an alternative to using the result selector for getting around large data payloads: writing the data to a file in S3. This option would let you store in S3 the specific data needed for the workload, which could then be read directly in a Lambda function, mitigating the Step Functions payload limit. Firstly, I must admit that this article is not just for serverless applications, and I hope that you can use this example in any AWS environment you may have. However, I personally believe it lends itself nicely to a serverless solution for a number of reasons, as I’ll explain. Suppose you have a serverless application that consists of a set of AWS Lambda functions invoked by hitting various requests on AWS API Gateway or AWS AppSync endpoints. Wouldn’t it be nice to reuse the Lambda functions you have already created and configured inside your Step Functions workloads? Example System Architecture In this example, we have a ‘Get Post’ Lambda function that retrieves data from AWS DynamoDB, and we can reuse this same Lambda function inside our ‘Report Post Metrics’ workload. Perhaps we want to fetch the metadata on the post, evaluate the data and release it to another area in the service. Why Reuse the Same Lambda? 🤓 There are many reasons why you would want to use the same Lambda function inside the workload. For example, within a serverless application you have account limits that AWS enforces to guide you on best practices and design. One of these default limits currently caps the number of concurrent Lambdas running per account at 1,000.
This may seem like a lot; however, consider a large application with high throughput and Lambdas scaling out. Not only do you have such limits within your account; if you have multiple Lambdas retrieving the same data from the data source, you’ll end up with more inefficiency and extra functions and code to maintain. So, Why Do We Need the Result Selector? 💡 As we are reusing our ‘Get Post’ Lambda throughout the application wherever possible, let’s imagine that the object the Lambda returns contains all the post details, metadata, authorization access and much more. For the Step Function we only need the postId and all or some of the metadata on the object; all of the other information is redundant for this use case. Until recently, AWS Step Functions had a payload limit of 32,768 characters, so if you were getting multiple posts, each returning the entire data object, you could quite easily hit this character limit and receive the following error: ERROR: The state/task returned a result with a size exceeding the maximum number of characters service limit. Fortunately, AWS released another update in September 2020 to increase this limit to 256KB. Read more here. 🎉 Okay, so AWS have increased the limit… do we still need to worry about the data returned from the Lambda? Maybe not for a small number of posts, but if you’re running a workload over all your posts or another large data set, you may still see the error. Plus, it is good practice not to fetch or log any data you don’t need.
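A hedged sketch of what such a state definition might look like in Amazon States Language, expressed here as a Python dict (the function name, field names under Payload, and the state names are hypothetical): the ResultSelector keeps only postId and the metadata from the Lambda's full response before it is passed along the workflow.

```python
import json

# Hypothetical Step Functions task state reusing the 'Get Post' Lambda.
# The ResultSelector plucks only the fields the workflow needs from the
# Lambda invocation result, keeping the state payload small.
get_post_state = {
    "Get Post": {
        "Type": "Task",
        "Resource": "arn:aws:states:::lambda:invoke",
        "Parameters": {
            "FunctionName": "get-post",
            "Payload": {"postId.$": "$.postId"},
        },
        # Keep only what 'Report Post Metrics' actually needs.
        "ResultSelector": {
            "postId.$": "$.Payload.postId",
            "metadata.$": "$.Payload.metadata",
        },
        "ResultPath": "$.post",
        "Next": "Report Post Metrics",
    }
}

asl_fragment = json.dumps(get_post_state, indent=2)
```

Everything else the Lambda returned (post body, authorization details, and so on) is dropped at this step, so the state payload stays well under the character limit even when the workflow fans out over many posts.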
https://medium.com/swlh/aws-stepfunctions-fine-tuning-serverless-workflows-using-the-result-selector-d74b3de074
['Robert Bulmer']
2020-11-18 12:17:17.997000+00:00
['Serverless', 'AWS', 'Aws Step Functions', 'AWS Lambda', 'Cloud Computing']
How To Find Success By Publishing Regularly
How To Publish 5 Days A Week Photo by Jazmin Quaynor on Unsplash What I learned from my publishing schedule was the importance of a mixed bag of where you publish your posts. The key to making the most of this publishing schedule is to change up where you publish. It will need to be with big publications, small pubs, and your own personal pubs. It also doesn’t have to be long articles. Below is what I would suggest for your publishing week. №1 — Publish at least one newsletter and turn on the meter. Every Monday, I publish a newsletter with my publication The Dad Hammer Pub. After I hit the publish button, I jump back into the post setting and turn on the “meter” so it will make money. It honestly only makes a few cents each week, but it is a metered post and I think that really matters to whatever keeps me in front of people. №2 — Focus on 1–2 big publications each week. While it would be nice to be on big publications every single day, I don’t have the time during my week to scrutinize every single post I write to fit the new rigid standards of the big publication world thanks to Medium. So, now, I focus on one to two posts for the big pubs so I have the freedom to write what I want to write how I want to write it. №3 — Write for 1–2 smaller pubs weekly. When I say smaller pubs, these may not be all that small. Pubs like Illumination and the Innovation have been a great place to put my work up without the added restrictions to format and content that Medium has encouraged the big brother pubs to focus on. This way, I still get published with a wider audience, and I can connect with other writers and pubs. №4 — Create your own publication and grow it weekly. I have several publications of my own. However, I publish with them 2–3 times a week so I can have complete creative control, grow my own pub following, and enjoy writing without waiting for anyone else. 
It also helps me take advantage of the ‘publication’ trigger (not sure if this is real, but it seems better than posting without a publication). №5 — Stay fluid, don’t get rigid, and have fun. Look, writing on Medium shouldn’t be a super stressful, angry experience. Sure, there are lots of reasons to complain, and people do it all the time, but it should be a place where you can write and get read. And this practice helps you grow toward a business, a book, or whatever else you desire. So, have fun with it, and if you make a little money along the way, enjoy that too! Final Thoughts Whether you can write every day and publish five times a week doesn’t really matter, to be honest. What matters is getting 1% better every single day. I don’t want you to stick to this, burn out, and then never write again. I simply want to give you something to work with so you can thrive as a writer and enjoy your Medium experience. So, if you want to write more, write more. If you want to write and publish less, then do that. However, this is what I have found to make $100 a month, grow my following, and even add to my email list. Most of all, I want this for you too. Sure, it would be nice to make thousands of dollars here on Medium, but that isn’t everyone’s story. While it can be, I want to set you up for a better experience than the letdown so many feel when they don’t. Enjoy writing on Medium. Grow your skill and craft. And most of all, share your story and your message.
https://medium.com/the-innovation/how-to-find-success-by-publishing-regularly-6a1364311705
['Jack Heimbigner']
2020-12-02 20:42:55.087000+00:00
['Inspiration', 'Writing', 'Creativity', 'Art', 'Social Media']
Jetpack Compose: the future of Android UI
Learning Android Development Jetpack Compose: The Future of Android UI Android has come a long way and Jetpack Compose is there to grow Photo by Tomasz Frankowski on Unsplash Recently, I looked into how Android has progressed in how code interacts with the UI (View) over the years. Much has been happening in the last few years… and finally, it is landing with Jetpack Compose. Of course, the question arises — is this just hype, or is it here to stay? Should one start learning it, or just wait for the next real messiah? My view is, even if Jetpack Compose might not be the final way Android interacts with the UI, it will definitely be the path moving forward. Here’s why… Making XML work with code has never been perfect. Since day one, XML has been the way to define the UI of an app. Every iteration has tried to get rid of the infamous findViewById, starting with ButterKnife, then Kotlin Synthetics, then DataBinding, and lately ViewBinding. Things have improved over the years, but there’s still imperfection in the XML approach. UI logic works better in Kotlin than in XML, resulting in two separate UI entities that Google still tries hard to reconcile. There was even an attempt to move the Model from the code into the XML with the DataBinding feature. Conventional View update, where the model is in the code. DataBinding update, where the model is in the XML instead. Even then, too much boilerplate code is needed to support it, and moving more things into XML is not desirable. XML + code will never be perfect. Jetpack Compose, on the other hand, removes the need for XML and replaces it with complete Kotlin code for the UI. Now, with Jetpack Compose, the UI code and logic can stay together, cleanly separated from other parts of the logic. And recently, Chris Banes explored converting an app from XML to fully Jetpack Compose; it shows improvement in both APK file size and build time! 🤩 The Architecture Dilemma: Is Activity a View or Not?
When Android was first introduced, there was no architecture recommended by Google. Some people coded everything in the Activity, which is bad, as it turns the code into spaghetti. The community evolved, and MVP (Model-View-Presenter) became the more commonly practiced approach. Even then, it was very confusing what an Activity is. The Activity is viewed as the View, as it links directly to the View (XML) and sometimes contains the UI logic. The Activity is aware of the lifecycle and seems like the center of control of everything (while ideally, the Presenter should be more in control). To address this issue, in 2017, Google came up with Architecture Components. This was great, as Google now provides a framework it strongly supports: MVVM (Model-View-ViewModel). In it, one of the most important bits (other than the ViewModel) is LiveData and the lifecycle-aware components. This started to shift some of the lifecycle awareness from the Activity to the ViewModel. Nonetheless, it was still imperfect, as the Activity still held the key to State Restoration and Dependency Injection. Beginning in 2020, both the State Restoration and Dependency Injection issues mentioned above are handled through SavedStateHandle and Dagger Hilt. It is clear that Google is trying to move the core of development from the Activity to the ViewModel. Nonetheless, the Activity is still there to tie in with the XML UI, and some UI logic still resides within the Activity. In order to totally eliminate the need for the Activity to manage any UI code, and make it just a conduit, Jetpack Compose was introduced. All the UI code now resides within Composable functions. The ViewModel and Activity no longer need to access individual UI elements to update them; they just need to update the state variables, and the UI will automatically get recomposed. To understand how recomposition works, check out the below.
The Activity is now free from all UI logic, as well as from the need to access specific UI elements to update them. It just needs to be a conduit between the ViewModel and the Composable functions. Declarative UI: the industry trend across web and app Declarative UI is not a new thing that Android Jetpack Compose just introduced. In fact, native Android is coming in relatively late in the game. In the web world, React programming has long existed and is widely welcomed and popular. The app world has tried to imitate that success, with React Native introduced by Facebook and, more recently, Flutter by Google. These have reached some level of success, but are crippled at times by their attempt to support both the iOS and Android worlds in one go. Nonetheless, they are still relatively popular, and the pursuit to make them work in the app world is still ongoing. There are many web developers in the market who are already familiar with the declarative framework, which makes such a development paradigm easy to expand to a wider community. In the app world, for the iOS community, the Storyboard and InterfaceBuilder approach to UI programming has always been debated. The code generated is unreadable and hard to peer-review. There’s also a library called SnapKit that makes UI programming more feasible directly in Swift. Without hesitation, Apple introduced its own declarative UI, SwiftUI, to its community with iOS 13. It has now become production-worthy, and we do see the trend growing, with much excitement in the community. Google has been quick to respond as well, announcing Jetpack Compose well ahead of its readiness at Google IO 2019. Alpha was released, and Beta is now out. It looks clear that in 2021 it is heading toward being production-worthy. What I like about Jetpack Compose compared to SwiftUI is that it goes all the way down to basic drawing API control.
Using Jetpack Compose, I can program custom views such as a Drawing App, Simple Clock Animation, and even Flappy Bird Game. Flappy Bird by Jetpack Compose It looks really promising. It’s a proven working UI framework the industry has adopted in the web world, and the app world (both iOS and Android) is heading towards it. It is there to stay.
https://medium.com/mobile-app-development-publication/jetpack-compose-the-future-of-android-ui-e021dc3739e9
[]
2020-12-22 15:15:48.319000+00:00
['Android', 'Android App Development', 'Mobile App Development', 'Google', 'Programming']
The Impossible Burger Could Change the Meat Industry Forever
GOOD MARKETING Maybe this section should be called, “Really, Really Good Digital Communications,” but that’s just not pithy enough for a title. There’s no perfect method of digital marketing and there are no set templates to follow. Every brand makes mistakes and every brand will have strengths and weaknesses. But there is a stark difference between brands that use their media moments to uncreatively tell you to buy their product and those who use their voice to say something unique, human, and worth hearing. Impossible Foods has one of the most effective modern marketing teams I’ve yet seen. It starts with their branding. Impossible Foods Cover Photo This is their Facebook cover photo; simultaneously retro and modern. As a company taking an old, beloved practice (burger eating), and completely reinventing it, this modern-retro intertwine tells a story. It says, “here is where we have been, and here is where we are going.” Impossible Foods logo Impossible’s logo uses the same color palette, and has the same strengths. It’s unique, and it’s obvious. When I see it in the news or on my social media feed, I know from the color exactly who is speaking without having to read the lettering. Impossible Foods is one of the few companies I follow on Twitter, and that’s because their team has learned a lesson that many other brands have missed: If you want to market yourself on social media, you have to be more than just a product. Many-a-brand has fallen by the wayside thinking naïvely that social media is just another tool for pushing ads, company updates, and product announcements. Consumers quickly become bored with that content, and engagement — and therefore the audience reach — drops. But Impossible Foods imbues humanness and personality throughout their online presence, reiterating their raison d’etre in every social media post. They don’t try to shift the conversation or push the reader somewhere they aren’t interested in going. 
That quickly becomes grating to customers. Instead, Impossible goes with the flow, injecting themselves into the conversation that is already happening. For example, people like talking about celebrities and they like jokes, so when Conan O’Brien makes a joke about meat, Impossible is there with commentary. People also like to hear what celebrities think about a product, so when endorsements come in, Impossible is ready. Here’s how they responded to Jordan Peele. And then when Ellie Goulding quoted Peele’s tweet, Impossible was, yet again, ready with a clever response. Even retired IndyCar racer Danica Patrick had kind things to say about the burger. But it’s not just celebrities. The cute interactions that Impossible has with their everyday customers are endearing as well — and they’re examples of stellar social media marketing. And another one… At the risk of over-expounding on Impossible’s Twitter expertise, they also deserve credit for their more serious content. Impossible knows that their mission is not apolitical. They are radically upending a highly ingrained food system and they aren’t going to hide why. Here they are making noise about climate change. This messaging fits perfectly with their image as the sassy company that cares, and they execute it seamlessly. When longer form content is needed, Impossible Foods isn’t afraid to get into the weeds, and sometimes the ring, via their Medium account. In one post from Rachel Conrad, Impossible’s Chief Communications Officer, the company delved into details on accusations that they were using dangerous amounts of glyphosate in their product. They came out swinging at a group called “Mom’s Across America,” writing: “‘Moms Across America’ has escalated a year-long campaign against Impossible Foods to push its anti-vaccine, anti-GMO agenda to anyone gullible enough to listen. 
The group’s latest salvo is a pathetic ‘news release’ full of lies, anti-science rhetoric and ignorance of basic arithmetic.” Later in the statement, Impossible called the group “charlatans,” and accused the group of hucksterism. They combined a defensive tack with an offensive retort, a genre of messaging usually reserved for political candidates. While for many companies, the idea of using their communications staff to go beyond simply promoting a product is out of the ordinary, for Impossible, it seems par for the course. They titled one Medium post, “7 earth-forward companies we know and love.” In it, they gave shoutouts to companies like Patagonia, Tesla, TerraCycle, and even Beyond Meat, who they called, “one of our many allies in the fight for a more sustainable global food system.” But no digital communications effort is complete without a website. And Impossible Foods’ site is superb. It has the same modern-yet-retro look as the rest of their brand, and it exudes a clear message of love and pride for their product. But by far, my favorite part of the website is the public media kit. As a writer, this kit is a Godsend. It houses old press releases, mission statements, employee bios, and a gorgeous photo/video gallery. It also links to some high quality, comprehensive reports that detail the progress Impossible Foods has made in a particular year. Here’s one paragraph from the 2018 report intro: “We live on the best planet in the known universe — the only one known to support life. Our planet has air, water, breathtaking beauty and staggering biodiversity. It’s perfect. But fragile. We, and all future generations, depend on the integrity of all its diverse ecosystems to keep us alive.” That’s splendid writing. It neatly encapsulates the bigger picture of the problem and it is a perfect lead-in to the marketer’s right hook in the next paragraph: “We can’t take it for granted. 
We have to fight to protect Earth’s resources and life-sustaining biodiversity, even if that fight requires us to take on challenges that seem almost impossible today. That’s what we do every day at Impossible Foods.” Impossible Foods says, “Here is the problem. We are the solution.” Wham, bam, thank you, ma’am. Why did Impossible devote so many resources to this media kit? Maybe for investors, maybe for reporters, maybe so that lowly bloggers like me would fawn over it to their few but steadfast followers. Whatever the reason, it has helped this ~350-employee company develop a streamlined, organized, coordinated message that they aren’t flinching from. In sum, Impossible’s expert leveraging of social and digital media has given them not just a customer base, but a fan club.
https://medium.com/bigger-picture/the-impossible-burger-could-change-the-meat-industry-forever-fd781b160d1
['Ben Chapman']
2019-07-09 16:12:15.305000+00:00
['Health', 'Environment', 'Climate Change', 'News', 'Food']
10 Python Skills They Don’t Teach in Bootcamp
Data science bootcamps are a ton of fun, but they don’t have time to teach you everything. The bootcamp experience is like showing up at a theme park. (Except some of the strangers there will become your best friends.) When the ride kicks in, it demands total concentration. Between bouts of intensity, you’ll have the chance to take a breath — trading stories, recommendations, and ideas. Recapture the thrill of learning new things with this collection of 10 Python skills they don’t teach you in bootcamp.
https://towardsdatascience.com/10-python-skills-419e5e4c4d66
['Nicole Janeway Bills']
2020-11-27 08:58:52.230000+00:00
['Machine Learning', 'Data Science', 'Python', 'Software Engineering', 'Software Development']
3 Beautiful Songs by the National
All the Wine “I’m put together beautifully,” Matt Berninger sings on the first line of “All the Wine,” continuing with “big wet bottle in my fist, big wet rose in my teeth.” These are bold, hubristic words by design, making the song feel like a swaggering anthem right from the beginning. And if you keep reading the lyric sheet, you’ll find more lines that follow the same pattern. In fact, taking just the words by themselves, “All the Wine” seems like one of the most arrogant, brash songs ever written. The chorus, “All the wine is all for me” (repeated several times), seems to hammer that point home. But actually listening to the song a few times leaves one with a different conclusion. In reality, it’s sad, almost tragic. Because no matter how proud, how confident the words of the song sound, there’s a sense of melancholy to the music beneath them. Berninger also repeats many of the lines — especially the titular “all the wine is all for me” — several times. And while it’s possible to read this repetition as a brag, I think it makes more sense as someone trying to convince himself of something. The words sound desperate by the end. The reason I chose to write a few paragraphs about “All the Wine” is that it’s a truly beautiful song (a description I try to use sparingly). The sense of unease that permeates the verses, Berninger’s incredible vocal delivery, and the subtle guitar notes create an ensemble that, even several albums later, remains among the best in The National’s catalog. Rylan This song makes the list because of a driving piano line in the chorus that I find mesmerizing. It’s actually much older than the album it appears on — 2019’s I Am Easy to Find — and has been a fan favorite at shows for many years. In any case, it’s a really enjoyable song to listen to and has all of the hallmarks of The National’s style. 
“It was fun to have a song we played live and didn’t put out,” Berninger said in an interview with Pitchfork. “We could tell it was getting better. I kinda wish we could do that with all of our songs.” As someone who rarely goes to concerts anymore, I’m really glad this one finally made it onto an album, since I otherwise wouldn’t even know it exists. Like many of The National’s songs, “Rylan” has several lyrical layers. At its core though, this one seems to be about offering advice to a younger person. “Rylan, you should try to get some sun,” Berninger sings several times, “you remind me of everyone.” There are plenty of other good lyrical moments, but I find that “Rylan” as a song works not because its words are profound on their own, but because their marriage with the music just works when one listens to it. I’ll Still Destroy You One of the things that I appreciate the most about The National is that their music can be either background music or occupy my total attention. The melodies are smooth enough to help me concentrate or write, but the lyrics themselves are incredibly deep and poignant. That’s not exactly news to anyone even remotely familiar with the band. But it helps me set up why I want to write about “I’ll Still Destroy You”: it’s one of the band’s very best songs. The lyrics deal with anxiety, addiction, and parenthood (among other themes). And like so many other songs by Berninger, there’s a quiet sense of desperation that lies just beneath the surface of the music. It’s a similar feeling to the wild lyrical gesticulating of “All the Wine,” but delivered much more reflectively. Where “All the Wine” is fast-paced, “I’ll Still Destroy You” starts off slowly before building into a glorious crescendo at the end. I’ve been making playlists to listen to while I work on various projects or exercise, and I find myself coming back to this song in particular when I make them. 
The National is one of the few bands whose music can fit many different settings or moods, but this song is one of the most versatile (at least for my tastes). Just like the other two on this list, its true appeal comes from how the words and instruments fit together.
https://medium.com/the-coastline-is-quiet/3-beautiful-songs-by-the-national-a3324d57ba8d
['Thomas Jenkins']
2020-06-27 18:20:46.313000+00:00
['The National', 'Music', 'Songs', 'Culture', 'Writing']
Experts from Mercedes & Co. discuss their fears and dreams of the intersection of AI and neuroscience
If 2018 was the year that artificial intelligence exploded, and 2019 the year that AI became accessible to everyone, 2020 is shaping up to be the year that we become more mindful about how we use it. And at the forefront of this work is the discussion around the intersection of AI, neuroscience, and human behavior. Towards the end of last year, I was invited to take part in an event alongside World Summit AI, hosted by Mercedes EQ. I sat down with a selected group of experts afterwards (curated by DataSeries) where we discussed the implications of these topics, and what will come our way in 2020 and beyond. Of course, while much of our roundtable discussion centered around these topics as a whole, some of the discussion revolved around the car industry, and what possibilities exist for embracing neuroscience in the mobility field. But before we got to that particular sector, our talk centered around data privacy and security. AI and machine learning require large datasets to learn from, and as we collect more information from behavioral data, including measuring emotions from video, audio, and even brain-wave activity, there are inherent dangers and concerns regarding the storage and use of this highly personal information. “In the U.S. and Europe, there are macro effects that are affected by data privacy concerns which include democratic elections,” NASA Datanaut and CEO at CLC Advisors Cindy Chin told me. “Psychological operations — or psyops as it’s called in a military context — where computer algorithms can be used to predict behavioral patterns and skew results through a targeted campaign is dangerous. The world has already seen evidence of such occurrences in the U.K. and U.S. 
elections.” “It has already been proven that by tracking a player’s gaze within a Virtual Reality (VR) environment, one can not only measure accurately what elements or ads the user is really looking at, but the advertising can — as a result — perform significantly better than advertising placed in a VR environment without this information,” General Partner at OpenOcean, Tom Henriksson, told me. And eye-tracking is just the start. It’s almost table stakes for VR and AR applications in 2020. “Imagine a future where the machine knows what the user is actually looking at, and it can measure from the user’s pulse, or even from their brainwaves, how the person is feeling and reacting to the advertising,” Henriksson said. “It certainly brings us one step closer to advertising that is so dynamic, personalized, and well-targeted, that it will feel more like a valuable service than advertising. At the same time, the power of this technology, which at least for the pulse and gaze tracking part already exists, also asks for incredibly high standards for user control and data protection.” One thing is for sure: the more we talk about behavioral data, the more often the discussion of regulation rears its head. “Companies and individuals pay large amounts of money to obtain this data, and there are few checks and balances or levels of enforcement against negative use and infringement of privacy behaviors,” Chin said. “We have seen positive results in the use of collected datasets in the healthcare industry. Already, the prediction of accurate diagnoses that leads to proper clinical treatment is found in areas such as breast cancer and Alzheimer’s disease research. I would like to see a broader international ethics framework or charter where tech companies and governments demonstrate their commitment to the data privacy of global citizens. I would also like to see broader datasets that take into account inclusion and diversity. 
The inaccuracies in what is being created today in AI and machine learning are alarming.” That high level of data security is at the forefront of the mind of Steven Peters, Manager of Artificial Intelligence Research at Mercedes-Benz AG. “We’re known for our commitment to safety when it comes to designing and producing vehicles, and that can’t change when it comes to behavioral data,” Peters said. “Daimler published its AI Principles in September 2019, which guide the way to a human-centric approach to AI in both products and processes. To start a conversation about the future of AI and the developments that shape tomorrow, such as our electric brand Mercedes-Benz EQ, we created the EQ community. Collaboration and empathy for each other, as well as a focus on customer needs, is our way of thinking.” “I consider data literacy a primary need in our more and more data-driven society,” Anne Schwerk, Ph.D., Project Manager AI Health Intelligent Analytics at Massive Data, DFKI GmbH, concurs. “Everyone has to be able to understand data, its usage, and the analytics to a sufficiently high degree. Otherwise, our society will never be able to truly understand the consequences of AI and the digital world. Another very relevant aspect is the need for explainable systems that allow users to trace decisions and be in control of the machinery behind the fancy GUI and the personalized predictions.” There are many possible use cases for applying behavioral data in a mobility setting. Imagine, for example, that a vehicle is able to identify that the driver is feeling angry or anxious, and it then recommends calming music and adjusts the air conditioning to improve the driver’s mood. This AI and neuroscience-driven future is not far from becoming reality, so how can we use behavioral data in a sensitive, positive way to improve products and services for consumers in the future? 
“Anyone consuming media ‘for free’ would naturally like to experience more relevant and valuable advertising,” Henriksson said. “To trust technology companies and service providers to utilize such ultrapersonal data, where a machine might understand a person’s thoughts or desires before the person does, requires ironclad data protection and strong user control.” And in an age where trust in the biggest social networks and tech giants is at an all-time low, we need to be extremely clear about what data is being collected, how it is being used, where it is being stored, and what value we’re getting in return. “New solutions are needed where, for instance, people can decide on and filter both what impulses the systems are allowed to measure and which advertisers are allowed to access user data,” Henriksson said. “Further, user trust in technology companies needs to be completely revamped before this can go mainstream. Unlike in the case of Facebook today, we need to be able to 100 percent trust all companies handling our deeply personal data to adhere to the highest ethical standards, implement the best possible data security, and vigorously anonymize and protect the data of customers.” What should happen next in the use of neuroscience, behavioral data, AI, and machine learning, and where are we heading in the near future, especially when it comes to using this highly personal information in something as high-risk as mobility solutions? “In the area of mobility, location data is an excellent example of the opportunities ahead,” Henriksson said. “Tracking of users’ locations is currently hotly debated within the location technology industry and far beyond, as there are no formal rules or ethical standards on how this sensitive data should be used. 
Used correctly and with respect, location is a very powerful signal, which can enhance the digital profile of a user with information on what is going on in the real world.” Examples of how this type of data can be used for good are plenty, but we always come back to ensuring companies leverage this information in a respectful way. “Prediction is exciting and if used in the right way, with the right level of respect for user privacy and security, can add powerful utility,” co-founder and creative director of Plumen, Nicolas Roope, told me. “If someone is driving at speed, it’s a lot more useful to recognize that they’re about to go to sleep than recognizing the event as it happens. And of course, this happy driver who is alerted — and lives as a result — doesn’t want to subsequently be bombarded with energy drink ads because the car company has shared their data. It’s also nice not to share this event with the insurance company.” And how can that information be used outside of the world of advertising or vehicle control and safety, and what needs to happen next to make it possible? “Humans still spend at least 70 percent of our day in the real world, so having more information about this part of our lives definitely enables better services like, for example, ride-hailing, child tracking, and better-planned cities,” Henriksson said. “Recently, the CEO of location data company Foursquare suggested in a New York Times opinion piece that technology companies should adhere to standards similar to the Hippocratic oath of doctors and that Congress should regulate the location technology industry. Perhaps such actions need to be taken to ensure we can enjoy the great mobility solutions of tomorrow.”
https://medium.com/dataseries/mercedes-execs-and-more-discuss-their-fears-and-dreams-of-the-intersection-of-ai-and-neuroscience-1ddaaa5dc2fe
[]
2020-12-07 09:24:49.917000+00:00
['Neuroscience', 'Mobility', 'Artificial Intelligence', 'Behaviour', 'Data']
Have You Ever Faked Creativity (Be Honest)?
Have You Ever Faked Creativity (Be Honest)? We all know great ideas are elusive at times. Photo by Sharon McCutcheon on Unsplash Have you ever faked creativity? Go on, be honest. We all know creativity is elusive at times. You can set the right mood with music, lighting and comfortable seating. Yet despite all the coaxing, inspiration fails to strike. It’s maddening, right? Whether you’re a writer, artist or designer, there is a natural ebb and flow to any creative process. Sometimes you feel stuck or uninspired. Other times the ideas flow freely. I work in a creative industry. Every day I am expected to come up with ideas and insight on demand. Some days I am more creative than others. When I am less inspired, I dig deep. I rely on my technical training to push through blocks. Yet deep down I may not be satisfied with the quality of my output. Deep down I know when I am faking creativity. No one knows for sure how creativity works. Everyone knows about left-brain versus right-brain thinking, but this type of binary brain theory is considered overly simplistic and passé by neurological experts. Newer brain theory suggests creativity occurs when the brain forms connections between three different networks. When we daydream or brainstorm we use our brain’s default network. When we carry out complex problem solving we use our executive control network. The salience network detects environmental stimuli and switches between the other two networks. It’s the synchrony between these three large-scale brain networks that seems to be important for creativity — Roger Beaty, Neuroscientist In other words: people who think more flexibly and come up with creative ideas may be better able to engage regions of the brain that typically don’t work together. Is it possible to spark brain connectivity to boost our creativity? Now that is the multi-million-dollar question! We are scratching the surface of the possibilities of brain science. 
Will we determine whether brain network connectivity can be modified or improved? And if so, would that enable us to be more creative? Only time will tell. For now, when creativity stagnates, your best bet is to trust your process. My creative process, based on design thinking, is outlined in the article below. Thank you for reading!
https://luciko.medium.com/have-you-ever-faked-creativity-be-honest-6ec458b088fb
['Lucy King']
2019-01-23 23:11:19.696000+00:00
['Life Lessons', 'Self Improvement', 'Creativity', 'Ideas', 'Design']
What is Machine Listening? (Part 1)
What is Machine Listening? (Part 1) Listening is easy for humans, but still difficult for computers. One of the basic human senses When I get up in the morning while travelling, the thing that usually wakes me up is sunlight coming into the bedroom. But the very first moment I realise that I’m actually not at home is the sound of exotic birds chirping, along with the slightly different height of my pillow. Hearing is one of the five basic human senses. We use this amazing ability naturally in our daily lives, but we often forget its importance because we can’t actually see it. We communicate with other people by talking, and we feel and perceive this world through acoustic information along with other sensory data. Speech recognition is probably the most widely adopted sound recognition technology in industry, and it allows people to communicate with computers in a more natural way. It used to be very hard for computers to understand human speech, but the technology improved dramatically from around 2010, when modern deep learning appeared. Conventionally, it was based on rule-based methods using hand-crafted features designed by domain expert engineers. But with the power of advanced deep learning methods, many technologies have rapidly improved, and AI systems now start to see things and understand what people say to them. The listening ability of computers used to be (and largely still is) limited to speech recognition Computer vision, natural language processing, and speech recognition are all really important technologies for artificial intelligence. However, we are missing something important here: sound. Speech is sound, but there are literally millions of other sounds we hear every day, and machines still do not understand well what’s going on around them. Let me show some examples. It is easy to recognise the sound of rain. 
When you listen to it, you may think of taking an umbrella or closing the window. It is also obvious that this is the sound of footsteps. If we listen carefully, we can even tell that it is high heels, getting closer, walking at a normal pace. The bell of Big Ben (the Elizabeth Tower) at Westminster tells the time for everyone in town. Humans naturally listen to the world in everyday life, in order to think and to act. The examples above only reveal information such as the weather, a type of shoe, or the time, but that is just a tiny bit of the contextual information that sound carries. Machine listening is a research area that aims to build systems that can understand non-verbal information from audio. A formal definition from the machine listening research laboratory at Queen Mary, University of London is as follows: “Machine listening” is the use of signal processing and machine learning for making sense of natural / everyday sounds, and recorded music. Speech is only a small part of acoustic information The human voice contains linguistic information, but people can also guess various clues from a voice, such as the age, gender, emotion, and even health status of the speaker. Music is another type of audio that contains even more complex information, such as genre, mood, tempo, and pitch. Still, music and voice are an extremely small part of what we hear in our daily lives. We don’t even know how many sounds people can distinguish, and there are no clear boundaries between sounds. In the machine listening literature, all other sound is usually called environmental sound, and it is divided into two large groups of topics: acoustic scenes and acoustic events. The acoustic scene, as its name conveys, is location-related information such as a bus, park, library, cafe, or city centre. It is impossible to recognise a scene from very short audio, so researchers normally assume that at least 10 seconds of audio is required to estimate the scene. 
On the other hand, an acoustic event is a term normally used for shorter sounds that carry clues about surrounding events, such as glass breaking, a knock, a car horn, or a dog bark. It might be a very short sound, around 0.1 seconds, but it can also be quite long, like continuous water flow. It is probably easier to understand if we compare these topics with their more obviously visible counterparts in computer vision. Optical character recognition (OCR) in computer vision can be considered the counterpart of voice recognition in machine listening, as both deal with linguistic information. Facial recognition can be a counterpart of music search or speaker identification, because it identifies specific and unique targets. Lastly, object detection is the closest concept in computer vision to acoustic scene/event detection, because it aims to identify a huge number of targets that all come in different forms. 2017: A memorable year for machine listening Although the machine listening field has been actively researched for decades, it was still quite far from a level that could be widely applied to real-world applications. Only simple recognition of a limited number of sounds was possible, and its performance was unstable, just like old speech recognition systems. Even after modern neural network algorithms were introduced, the field, unlike others, struggled to outperform conventional approaches, as simply adopting the latest deep learning techniques was not a solution. But finally, researchers made a breakthrough, and deep learning approaches outperformed conventional methods in 2017. The experimental settings differ slightly, so strictly speaking the results cannot be directly compared, but 2017 used a more difficult setting. In the annual DCASE (Detection and Classification of Acoustic Scenes and Events) workshop, organised by the IEEE, the top scene-classification accuracy among submitted systems reached 92%, while it was only 76% in 2013. 
This result is particularly meaningful because the top 10 systems in 2016 were all conventional methods. And it was not only the scene classification task: the top systems in all the other tasks were also replaced by deep learning methods. I don’t think deep learning can simply solve every problem in the world; it is only one part of a system. But 2017 is still a memorable year, because the accumulated effort of researchers produced a meaningful result, found a way to make the most of the latest ML techniques, and brought us one step closer to a human-like machine listening system. Required domain knowledge Simply pushing audio clips into off-the-shelf ML models might work pretty well for a simple sound recognition demo. But these kinds of simple recognition tasks work quite well even with traditional approaches, and modern machine listening should be distinguished from them, because the only thing such a demo can do is trigger a simple action under highly limited conditions. Advanced ML technology has opened up millions of opportunities to positively impact the quality of our daily lives. Next-generation machine listening should aim for general auditory intelligence that can be used in real-world situations and improved continuously, rather than re-inventing the wheel every time. To do so, it requires domain knowledge in various fields, such as signal processing, cognitive science, music, psychoacoustics, acoustics, and machine learning, because the real-world environment and human auditory perception are highly complicated. Conclusion In this article, I’ve briefly introduced the concept of machine listening. Next time, I will write in more detail about music information retrieval (MIR), which can be considered a part of machine listening, but is still an extremely large research topic with tons of things to investigate.
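The scene/event distinction described above can be sketched in a few lines of code. This is a toy illustration only, using a made-up 1-D "loudness" feature instead of real audio; the frame rate, thresholds, and scene labels are all assumptions for the sake of the example, not part of any real machine listening system.

```python
FRAME_RATE = 10  # frames per second (assumed for this toy example)

def detect_events(frames, threshold=0.8):
    """Frame-level decisions: individual loud frames are flagged as
    candidate acoustic events (e.g. a knock or glass breaking)."""
    return [i for i, energy in enumerate(frames) if energy > threshold]

def classify_scene(frames):
    """Clip-level decision over the whole clip: the average energy of a
    long (>= 10 s) window stands in for a learned scene model."""
    if len(frames) < 10 * FRAME_RATE:
        raise ValueError("scene classification needs at least 10 s of audio")
    mean = sum(frames) / len(frames)
    return "busy street" if mean > 0.5 else "quiet park"

clip = [0.2] * 100  # 10 s of quiet background...
clip[40] = 0.9      # ...with one short, loud event in the middle

print(classify_scene(clip))  # -> quiet park
print(detect_events(clip))   # -> [40]
```

Note how the event detector answers a per-frame question while the scene classifier refuses to answer at all without a long enough window, mirroring the 10-second assumption researchers make for scene estimation.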
https://medium.com/cochl/what-is-machine-listening-part-1-6fbdf2a3d892
['Yoonchang Han']
2020-08-04 08:47:25.005000+00:00
['Machine Learning', 'Artificial Intelligence', 'Machine Listening', 'Music', 'Audio']
Applications of Zero-Shot Learning
As a member of a research group involved in computer vision, I wanted to write this short article to briefly present what we call “zero-shot learning” (ZSL), an interesting variant of transfer learning, and the current research related to it. Today, most machine learning methods focus on classifying instances whose classes have already been seen in training. In practice, however, many applications require classifying instances whose classes have never been seen before. Zero-shot learning is a promising learning method in which the classes covered by training instances and the classes we aim to classify are disjoint. In other words, zero-shot learning is about leveraging supervised learning with no additional training data. Zero-shot learning refers to a specific use case of machine learning (and therefore deep learning) where you want the model to classify data based on very few or even no labeled examples, which means classifying on the fly. Let’s think of how Convolutional Neural Networks (CNNs) work — they break down the general task of, e.g., image recognition into a sequence of smaller tasks carried out by successive layers, where each layer works on increasingly complex features. When we train a network to recognize a given subject, for instance a human, we have also already trained it to recognize arms, legs, faces, etc. Thanks to this, we can re-use those feature detectors and rearrange them to perform some other task without additional training. In other words, zero-shot learning is about leveraging deep learning networks already trained by supervised learning in other ways, without additional supervised learning. Zero-shot learning could yield extremely interesting applications, especially where we lack proper datasets. As you may know, the lack of data is a huge issue in almost all computer vision projects. 
If I had to sum up ZSL in a few words, I’d say that it is:
- Pattern recognition without training examples
- Based on semantic transfer
Natural Scarcity of Data Zero-shot learning is an ability that humans already have. Indeed, we can learn a lot of things from just a “minimal dataset”. For instance, you can tell different varieties of the same fruit apart (fine-grained classification), or distinguish a fruit from similar-looking ones (regular classification), from just a few pictures of each type of fruit. The situation is different for machines: they need a lot of images to adapt to the variance that occurs naturally. This natural human ability comes from our existing language knowledge base, which provides a high-level description of a new or unseen class and connects it to seen classes and visual concepts. Why do we need Zero-Shot Learning? As you may know, there is a large and growing number of categories in many domains. As a consequence, it is difficult to collect a lot of annotated data per category. In some projects, the number of classes can run into the thousands, and obtaining sufficient training data for each class is complex. Zero-shot learning aims at predicting a large number of unseen classes using only labeled data from a small set of classes, plus external knowledge about class relations. Moreover, the number of categories keeps increasing, as does the difficulty of collecting new data for each new category. This is especially true in deep learning, where you need a lot of data… Different varieties of the same object can quickly become a nightmare, and unsupervised learning can’t be applied to help in this situation. Furthermore, in a normal object recognition process, we have to determine a certain number of object classes to enhance our accuracy, as well as collect as many sample images as possible for the selected object classes. 
Moreover, these sample images should contain elements taken from different angles in various environments in order to enrich the dataset. In some cases, labeling can only be done by an expert. Fine-grained object recognition tasks, like recognizing specific species, are examples of labeling under the supervision of an expert. Hence the increasing interest in ZSL for scaling up visual recognition. How does it work? Without getting too deep into the details, zero-shot learning relies on a labeled training set of seen classes and a set of unseen classes. Both seen and unseen classes are related in a high-dimensional vector space, called the semantic space, where knowledge from seen classes can be transferred to unseen classes. Zero-shot learning approaches are designed to learn an intermediate semantic layer of attributes and apply it at inference time to predict new classes of data. Usually, zero-shot learning algorithms first map instances to intermediate attributes, which can be seen classes (those with labeled data), human-specified attributes, or data-dependent attributes. The predicted attributes are then mapped to a large number of unseen classes through knowledge bases. In this way, the prediction of unseen classes becomes possible, and no training data is required for those classes. Zero-shot learning is a two-stage process: training and inference. In the training stage, knowledge about the attributes is captured; in the inference stage, this knowledge is used to categorise instances among a new set of classes. Many efforts have been made to improve the training stage, whereas the inference stage has received little attention. For example, many approaches are incapable of fully exploiting the discriminative capacity of attributes, and cannot harness the uncertainty of the attribute prediction obtained in the first stage. 
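The two-stage attribute process described above can be sketched with a minimal example. Everything here is hypothetical: the attribute names, the class signatures, and the use of squared Euclidean distance for the inference stage are illustrative assumptions, standing in for a trained attribute predictor and a real knowledge base.

```python
# Attribute signatures for *unseen* classes, as a knowledge base might
# provide them (hypothetical data, for illustration only).
# Attribute order: [has_stripes, has_hooves, lives_in_water]
UNSEEN_CLASSES = {
    "zebra":   [1.0, 1.0, 0.0],
    "dolphin": [0.0, 0.0, 1.0],
}

def classify(predicted_attributes):
    """Inference stage: return the unseen class whose attribute
    signature is closest (squared Euclidean distance) to the attribute
    vector predicted by a model trained only on seen classes."""
    def dist(signature):
        return sum((p - s) ** 2 for p, s in zip(predicted_attributes, signature))
    return min(UNSEEN_CLASSES, key=lambda c: dist(UNSEEN_CLASSES[c]))

# Suppose the (assumed) attribute predictor outputs these scores for a
# striped, hoofed animal it has never seen as a class label:
print(classify([0.9, 0.8, 0.1]))   # -> zebra
print(classify([0.1, 0.0, 0.95]))  # -> dolphin
```

The key point is that "zebra" and "dolphin" never appear in the training data; only the attribute predictor is trained, and the class decision comes entirely from the external attribute signatures.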
Research From a research perspective, I have seen teams working on more accurate ZSL models that use neural net architectures called generative adversarial networks (GANs) to read and analyze text from the web, and then visually identify the objects it describes. This approach enables systems to classify objects based on category, and then use that information to identify other similar objects. Another important element in the research is bias. Indeed, the collection and labeling of training data can be very time-consuming, and because it remains difficult to gather statistically diverse training images, unlabeled target classes (i.e. images or objects that have not been seen before) are often categorized as labeled source classes, which results in poor accuracy in generalized settings. When few training images are available, existing object recognition models struggle to make correct predictions, and ZSL was developed principally as a means to fight this problem. Through our research, we managed to build a prototype that can recognize species by analyzing related web articles. Looking only at those text descriptions (without seeing an image of the species), the system extracts key features, such as the shape of the animal’s head. The system can then, in a sense, imagine what the species looks like, generating a synthetic visual model. It is important to say that this kind of image and text understanding doesn’t eliminate the need for training, but it’s an example of how ZSL can reduce training and help systems stay accurate when confronted with unexpected data. As ZSL continues to develop, I expect to see more applications, such as better recommendations and more advanced solutions that automatically flag bad content within categories on social media. I also envision a strong development of ZSL in the robotics field. Zero-shot learning is similar to human vision in many ways, and can therefore be used in robot vision. 
Instead of performing recognition on a limited set of objects, zero-shot learning makes it possible to recognize objects that were never part of the training set. I have no doubts that ZSL could help transition AI away from today's limited applications and toward the kind of versatility that's so natural for humans. For more information, I recommend this video: - https://www.youtube.com/watch?v=jBnCcr-3bXc&t=626s
https://towardsdatascience.com/applications-of-zero-shot-learning-f65bb232963f
['Alexandre Gonfalonieri']
2019-09-03 14:38:37.252000+00:00
['Machine Learning', 'Artificial Intelligence', 'Business', 'Technology', 'AI']
Data Entry Made Easy with Blazor Multicolumn AutoComplete
The Syncfusion Blazor AutoComplete is a text box component that provides a list of suggestions to select from as the user types. It has several out-of-the-box features such as data binding, filtering, grouping, UI customization, and accessibility. In this blog post, we will learn how to create a multicolumn AutoComplete component by customizing its template. You will also learn how to configure and bind the data in the Syncfusion Blazor AutoComplete component to make it display search results in multiple columns. Purpose of the multicolumn data list A standard AutoComplete search result displays a list of items in a single column, so only a single piece of information about each item can be shown at any one time. Example: displaying a list of product names. A multicolumn AutoComplete, on the other hand, displays search result items in multiple columns, like a data grid view, so we can use it to show more information about each item in a single list. Example: displaying a list of product names with their unit prices and the units in stock. We may also display more information, like units on order and discounts, with multicolumn data grid view support. Let's create a multicolumn AutoComplete component in the Blazor platform! Prerequisites Create a Blazor WebAssembly application In this example, we are going to display the following information regarding products: Product ID Name Unit price Units in stock Units on order Step 1: Let's create a Blazor WebAssembly app and create a new model class file inside the DataModel folder with the name Product.

public class Product
{
    public int? ProductID { get; set; }
    public string ProductName { get; set; }
    public int? SupplierID { get; set; }
    public double? UnitPrice { get; set; }
    public int UnitsInStock { get; set; }
    public int UnitsOnOrder { get; set; }
}

Step 2: After creating the model class, add the method GetProducts() to get the product data.
Add AutoComplete component and configure the multicolumn data grid view Next, we add the Syncfusion Blazor AutoComplete component to the created Blazor WebAssembly app. Step 1: Add the following code example in the index.razor file to create a simple AutoComplete component and bind its source to the product list.

<SfAutoComplete TValue="string" TItem="Product" DataSource="@_productsList" PopupHeight="400px" Placeholder="Select a Product">
    <AutoCompleteFieldSettings Value="ProductName"></AutoCompleteFieldSettings>
</SfAutoComplete>

Step 2: Next, get the data from the product service and assign it to the AutoComplete DataSource API with the OnInitialized method.

@code {
    private IEnumerable<Product> _productsList;

    protected override void OnInitialized()
    {
        _productsList = new Product().GetProducts();
    }
}

Step 3: Now, we need to display the pop-up list items in a multicolumn view to show more information when the user opens AutoComplete's pop-up. To do so, use: The item template property to display the column data in table view. The header template property to display the column names. Refer to the following code example.
<SfAutoComplete TValue="string" TItem="Product" PopupWidth="700px" DataSource="@_productsList" PopupHeight="400px" Placeholder="Select a Product">
    <AutoCompleteTemplates TItem="Product">
        <HeaderTemplate>
            <table><tr><th class="e-text-center">Product ID</th><th width="240px">Product Name</th><th>Unit Price</th><th>Units In Stock</th><th>Units On Order</th></tr></table>
        </HeaderTemplate>
        <ItemTemplate>
            <table><tbody><tr><td class="e-text-center">@((context as Product).ProductID)</td><td width="240px">@((context as Product).ProductName)</td><td>@((context as Product).UnitPrice)</td><td>@((context as Product).UnitsInStock)</td><td>@((context as Product).UnitsOnOrder)</td></tr></tbody></table>
        </ItemTemplate>
    </AutoCompleteTemplates>
    <AutoCompleteFieldSettings Value="ProductName"></AutoCompleteFieldSettings>
</SfAutoComplete>

Step 4: The multicolumn style class is built into the Syncfusion Blazor theme files. In the CssClass API, set the multicolumn root class e-multi-column.

<SfAutoComplete TValue="string" TItem="Product" PopupWidth="700px" DataSource="@_productsList" PopupHeight="400px" CssClass="e-multi-column" Placeholder="Select a Product">
    <AutoCompleteFieldSettings Value="ProductName"></AutoCompleteFieldSettings>
</SfAutoComplete>

When executing the previous code example, we'll get output like in the following screenshot. See how it displays the AutoComplete list item information in the multicolumn data grid view. Display customized information in data grid We can customize the alignment of the text displayed in each column using a built-in class. The following text alignment options are available: e-text-center: displays the text in the center of the column. e-text-right: displays the text on the right side of the column. e-text-left: displays the text on the left side of the column.
<AutoCompleteTemplates TItem="Product">
    <HeaderTemplate>
        <table><tr><th class="e-text-center">Product ID</th><th width="240px">Product Name</th><th>Unit Price</th><th>Units In Stock</th><th>Units On Order</th></tr></table>
    </HeaderTemplate>
    <ItemTemplate>
        <table><tbody><tr><td class="e-text-center">@((context as Product).ProductID)</td><td width="240px">@((context as Product).ProductName)</td><td>@((context as Product).UnitPrice)</td><td>@((context as Product).UnitsInStock)</td><td>@((context as Product).UnitsOnOrder)</td></tr></tbody></table>
    </ItemTemplate>
</AutoCompleteTemplates>

GitHub reference You can download the complete source code of this example from the GitHub repository. Conclusion Thanks for reading! I hope you can now add and display AutoComplete search result items in a multicolumn data grid view. You can easily customize the pop-up and list items using the template options. See more samples in this demo. Feel free to have a look at our online examples and documentation to explore other available features. Try our Blazor AutoComplete component by downloading a free 30-day trial or from our NuGet package. If you have any questions, please let us know in the comments section below. You can also contact us through our support forum, Direct-Trac, or feedback portal. We are always happy to assist you!
https://medium.com/syncfusion/data-entry-made-easy-with-blazor-multicolumn-autocomplete-e8b6b5b2a32f
['Rajeshwari Pandinagarajan']
2020-12-02 16:43:11.239000+00:00
['Csharp', 'Productivity', 'Web Development', 'Blazor', 'Data']
5 The Most Exciting AI & IoT Trends of 2021
Every industrial revolution is a change and an upgrade of traditional/industrial practices. The 1st industrial revolution brought us mechanization, steam, and water power. The 2nd brought mass production and electricity. The 3rd brought electronics and IT systems and was the starting point of automation. The current industrial revolution is in the process of bringing cyber-physical systems. Each of these revolutions carried a societal transformation at large. Ever since the 1st industrial revolution, new technologies have reformed society. They changed not only how we work and how we live; they also reshaped the way we think, rest, communicate, and perceive ourselves as human beings. In the next year, we will see more and more Tech For Good applications that will bring more purpose and value to technology. Here I would like to cover 5 of the most exciting AI and IoT trends and innovations that will drive 2021 and the fourth industrial revolution.
https://medium.com/technology-hits/5-the-most-exciting-ai-iot-trends-of-2021-425a6a028efc
['Elena Beliaeva-Baran']
2020-12-15 08:51:38.291000+00:00
['AI', 'Artificial Intelligence', 'IoT', 'Internet of Things', 'Technology Trends']
The Story of Data — Privacy By Design
Discussing the need for adopting frameworks like Privacy By Design very early in your data management life cycle Every byte of data has a story to tell. The question is whether the story is being narrated accurately and securely. Usually, we focus sharply on the trends around data with a goal of revenue acceleration, but commonly forget about the vulnerabilities caused by bad data management. Data possesses immense power, but immense power comes with increased responsibility. In today's world, collecting data, analyzing it, and building prediction models is simply not enough. I keep reminding my students that we are in a generation where the requirements for data security have perhaps surpassed the need for data correctness. Hence the need for Privacy By Design is greater than ever. Before we discuss Privacy By Design, let's understand some issues surrounding data security and privacy. Security — Role of Data Processors & Data Controllers A few years ago, just building a Data Lake was a quantum leap for many organizations. For security-conscious organizations, security of the Data Lake was simply limited to enabling Kerberos for identity management. We rarely talked about the security of data-at-rest or data-in-motion. Not anymore. In today's world, the roles of data processors and data controllers have been redefined. Handling data securely, whether it is at rest or in transit, is no longer optional. On top of that, organizations are legally bound to notify the involved parties as soon as any data breach has been detected. Speed of Data Until recently, businesses focused on looking at data over long stretches of time, made possible by Big Data. With the advent of the Internet of Things (IoT), analyzing real-time data has gained immense importance. It is very common these days to have devices in our homes that collect personal data and transmit it to external locations for either monitoring or analytical purposes.
In many cases, the consumer finds it difficult to balance the benefits they get from surrendering their personal data against the risks involved in providing it. A right balance needs to be struck. Privacy By Design We are in times where history is being made on a daily basis. While giants like Facebook, Google, and others are looking at innovative ways to collect, handle, use, and share data, legislators are drafting and enforcing new regulations around data privacy and ownership. Regulations like the European GDPR attempt to define the legal accountability of data controllers and data processors. The regulation puts greater focus on data governance policies around personal data portability, retention, and destruction. What is Privacy by Design? Integrate privacy from the beginning and throughout your development cycle. The same approach should apply to the services offered as well as to internal processes. The theory behind this approach is that privacy cannot be enforced by legislation alone. Privacy by Design is based on 7 foundational principles, developed by Ann Cavoukian and formalized in a joint report with the Information and Privacy Commissioner of Ontario. More than ever, IT companies are trying to help customers come up with the correct blend of data innovation and data governance in a security-conscious world. As the use of machine learning and artificial intelligence gains strength, we need to help our customers adopt frameworks like Privacy by Design to make their future data roadmaps simpler, more secure, and cost-effective. I hope this article was helpful in spreading the news regarding the adoption of Privacy By Design. Topics like these are covered as part of the Big Data Hadoop, Spark & Kafka course offered by Datafence Cloud Academy. The course is taught online by myself on weekends.
https://towardsdatascience.com/the-story-of-data-privacy-by-design-530f4bfdfd8f
['Manoj Kukreja']
2020-09-08 19:21:10.320000+00:00
['Privacy', 'Data', 'Data Science', 'Artificial Intelligence', 'AWS']
How to Be An Artist In A Relationship
Accepting the term ‘artist’ feels like a huge step in the life of anyone who aims to be creative. Whatever labels we attach to ourselves, this one feels particularly loaded. Who gets to call themselves an ‘artist’? What does this really mean? These are big questions, and ones it takes time to answer. But when we do accept the term for what we are doing — be it going professional as a photographer, or dabbling in watercolours, or trying our hand at a novel for the first time ever — being an ‘artistic’ sort has its share of problems. Even if we’re comfortable with the term. These problems really emerge when we think about the relationship between that part of ourselves we want to nourish (our ‘inner artist’, let’s say) and the rest of the world — the people we hold nearest. Our family, friends and lovers, in particular. Cohabitation and romance with an artist is famously not a walk in the park. In the incomparable and timeless classic, The Artist’s Way, Julia Cameron says: An artist requires the upkeep of creative solitude. An artist requires the healing of time alone. Without this period of recharging, our artist becomes depleted. I think she’s spot on. It’s a stereotype that certain artistic types throw away all semblance of human relationships in order to pursue their craft. But the reality is that most of us probably still do want love and closeness — after all, why else make art? Having a happy emotional life supports an artistic practice. Relationships are key to this, and yet, they can sometimes come into conflict with our pursuits. So, what does this demand for solitude, for time to recharge and the need to feed ourselves, mean for our relationships? What should we do, as artists, to keep our relationships strong? Make sure you share values If you don’t agree that making art of any kind is a valuable pursuit, you’re very likely to run into disappointments. The fact is that some people just don’t get it! That’s okay.
But you may not want to cohabitate with this person, who fundamentally disagrees with your passion. So, knowing that they don’t dismiss your pursuits entirely is a fairly important starting point. Once you know this about each other, you also have to agree on the values you uphold together. Again, the basics: if they are pursuing very different paths, know that that is going to be okay for both of you. Personally, I love that my partner is in a very different field to me — I am always happy to hear about his day, and how different it was to mine. And he is the same. What’s shared here is respect. From respect for each other’s values, we are able to agree on other things. More pragmatic things. Which do inevitably come up when one partner takes a less-than-traditional route… Talk openly about the practical I’m a writer. That means I have a certain discipline required as part of my process — I sit at my desk, undisturbed, at roughly the same time every day to do my work. This isn’t an accident! As I don’t have the luxury of a private space to which only I have access, I had to arrange this. It was essential, and it was important I had the discussion. This relates to a lot of practical elements, but in particular: Space As above, you do have to agree if there’s a special place you do your work in the home. We can get caught up in our craft, to the point where we might forget that the space is meant to be shared (in theory!). Of course, you might have the ability to really define spaces for your work as your own — if so, great! Just make sure you both know what this is, and what it looks like, to avoid arguments, bitterness and disappointment down the line — from either them or you. Time I work best in the mornings. This means I expect to get an hour or two without interruptions. This is complicated if you’re a carer, or a parent. You must somehow define what this time can reasonably be, in which you are able to do that necessary solitary work.
Discussing this openly allows the practical to simply be checked off, on terms that everyone acknowledges. Of course this can be tricky, particularly if your partner doesn’t agree (and here’s where my first point gets important!). Finding a compromise can be difficult, and you ultimately have to weigh the pros and cons. But know what you need as a minimum, and ensure you are clear about this — if you’re keen on remaining a team, you will find a way to set that boundary in a way that works for both of you. Money So here’s something we famously hate to discuss in most cultures. But in a partnership, it’s unavoidable. As a writer, I accept that my income is going to be limited by various factors. Your art will likely do the same. Is your partner going to be okay with this? Will you both be okay with this? How much do you each need to do to make sure it works? Do you need to work at other things to keep up your half of the bill payments? All these details must be worked through, because the money conversation leaks into the lifestyle conversation — where you live, how you live, etc. You need to know what the reasonable limits are on your spending habits and desires. You shouldn’t be falling into debt you can’t repay, or compromising your partner’s credit rating, as a result of decisions on this front. And ask yourself: will this income/lifestyle work for us in a year? In 5 years? In 10? Changes may have to be made. In my case, I work freelance. It would be impossible to simply sit at my desk all day, every day, and write into the sunset. My partner and I acknowledge that we could be wealthier. In our case, we’re okay with what we have right now — we agree about our lifestyle. In the future, we might move somewhere cheaper, or make different decisions. But taking the awkward out of this practical component goes a long way to helping make the relationship last.
Set your boundaries In extension to the above, you will have to come to some decisions about your boundaries more broadly. This includes a few things, like — When do you say yes or no to a social commitment? Who does what household chores? What about other household management tasks? How often do you invite people over? Who else needs to be in the home? etc These are tricky boundaries to define, particularly, I think, for women in heterosexual relationships (on the point about housework and management!). There has to be a balance and agreement regarding these sorts of questions. It’s all about setting yourself a boundary that you can feel comfortable with, and that still respects your partner and their desires/lifestyle/etc. After all, chances are you share a social life and other aspects of your lives that require both of you to participate. If you do stay home to work on your art, it can be easy to become the main chore-doer as well, for instance. But this may not be strictly practical, or fair — after all, if you’re doing everything about the house all day, and get no time for your art, you will be frustrated. Define what these limits are for yourself — how much time can you give to other things? And when it comes to deciding what other things to participate in, it’s worth asking yourself the question: what do I say yes to? What do I say no to? Don’t make it a competition If we are generally absorbed in our passion when we create, it can be easy to forget that a relationship is also a passion. It also requires time, and attention. Don’t pit your lover against your art. It can be easy for a person to feel jealous, ignored, or somehow forgotten, if you are sitting in your room all day, every day, having ‘alone time’ for your work. Don’t create the situation where your partner must compete. Make sure you do set aside the time for them — quality time, to be together, to talk, to exchange, to rekindle your passion for each other. What we nurture grows. 
Your partner (if they are a good one!) wants to support you and love you, and be there for you during your process. Don’t repay them by being obtuse, or treating them as an extra or aside to your creative pursuits. That sounds extreme, but it can happen — it’s a balance between defining your boundaries, and making sure you still give your time and energy to the relationship in order for it to flourish. Don’t wait for them to have to prompt you. In my case, my partner and I always have our morning coffee together, first thing after we wake up. Benefits or drawbacks of this kind of caffeine intake aside, it works for us! He loves coffee, I love coffee, we love to talk. So that’s what we do for the first half hour of our day. After that, he goes to the office and I do too (so to speak). We each leave for our respective ‘jobs’. I write my novel for an hour or two, then get stuck into client work. That half an hour every single morning is golden quality time for us (and quality time is definitely our shared love language!). We fit in other opportunities — occasional lunch times, Friday nights, time on the weekend. I don’t make myself write on weekends. Those are simply decisions I’ve made (and we’ve made) to keep our relationship strong. Find the moments of connection that work for you. And on the other hand… Don’t forget that you must participate! While we certainly do require solitude, recharging, and quiet thinking time, artists do actually have to participate in the world too, to make art about it! It can be easy to forget how vital this is if you do manage to create a space and time to make the work happen. In Anne Lamott’s wonderful guide for writing (and a creative life more generally), Bird by Bird, she says: To be a good writer, you not only have to write a great deal, but you have to care… A writer always tries, I think, to be a part of the solution, to understand a little about life and to pass it on.
This applies to all kinds of artists. Lamott then cites “grim and unsentimental” Beckett, who still manages to offer up observations that are wry, witty, funny, and very human. The fact remains that all artists must observe and participate in the world to some extent, at least for a little while, in order to explore it in their work. She has another quote I’m particularly fond of: There is ecstasy in paying attention. Beauty, wonder, awe… all of these things are what we often look for in art itself. But they are, of course, in life. Paying attention to the world is itself the ecstasy we often need to be inspired. In conclusion, life as an artist is a tough balance. When we’re inspired, excited, or especially motivated, we can feel like we need to leave the world behind. And when we’re tired, depleted, or struggling, we can feel we need solitude to recover, replenish and restart. But our relationships give us the support, strength and love we often need to realise our creative projects. Nurturing both of these things is not inconceivable.
https://medium.com/swlh/how-to-be-an-artist-in-a-relationship-f9546794e902
['Christina Hope']
2020-03-11 17:54:25.638000+00:00
['Communication', 'Relationships', 'Writing', 'Art', 'Creativity']
Type-Level GraphQL Parsing and Desirable Features for TypeScript
Introduction The story around TypeScript’s interoperability with GraphQL is far from picture-perfect. Between libraries, language services and bundler plugins, we have options––options such as the Guild’s tools, Nexus, TypeGraphQL, QuickType, and more. However, while they let us avoid defining types twice for both type systems, they all fail to let us do so without codegen. I don’t enjoy configuring and regularly executing codegen scripts. But I love TypeScript. I want the language––by itself––to hold the solution to type-level interoperability with others. And the pieces seemingly began to come together: with TypeScript 4.1, we became capable of slicing and recombining strings within the type system. Also part of the release: we became capable of embedding conditional logic within recursive types. The door to new type-safe experiences swung off its hinges. A few tiny type-level parsers came out of the woodwork––most notably, ts-sql. I believed it was only a matter of time before someone bridged the GraphQL and TypeScript type systems using template string types, and I was impatient for that person to come around, so I figured I’d take a crack at giving myself and other developers a unified experience: 1. From a single source of truth––the GraphQL schema––defined as a TypeScript string literal… 2. … we could define type-checked resolvers… 3. … and GraphQL documents as string literals, which could be type-checked against the schema… 4. … which would allow overlain libraries to obtain the correct signature of requests. It seemed that the first step to building such an experience would be to create a reliable type-level parser. This turned into a grueling battle with the TypeScript compiler’s recursion limiter and with shortcomings of the language, which I’ll detail. Let’s dive in. Structure How do we describe the building blocks of our type-level AST? Let’s say we’re defining a GraphQL input object.
Here is the GraphQL source: Its AST node counterpart would look as follows: The outer-most node––an input object type definition––contains a name and a list of fields––input value definitions––which each contain a name and a named type. In representing these nodes within the type system, we may want to begin with an atom, such as the leaf node Name . In other nodes, we’ll want to pass in Name type instances via type params. For example, the NamedType node: What do we write in place of the constraint (currently place-held by an ellipsis)? To constrain the Name param, we’ll need a “widest” instance of Name , so that any subtype of Name is valid. This kind of namespacing allows us to group our nodes and their widest-instance counterparts more legibly. For good hygiene, it often makes sense to have even more deeply-nested namespacing––Value.Object.Field , for example: We can re-use this structure to organize our node-specific parsing logic. For instance, we can place the argument parsing logic relative to the Argument AST node definition. Meanwhile, Argument.Parse can make use of other Parse types––in this case Value.Parse . Many nodes will look similar to the following, with a description, an identifier, a kind, and some directives. Aside from kind ––which is a hard-coded literal type––every field is generic. We can think of these generic types as factories. The runtime equivalent might look as follows. Techniques A quick look at string parsing. Let’s say we want to split a string at its first space; we can do it like so. We use this template literal syntax to match and (essentially) “destructure” by the first occurrence of a character (in this case a space). If we want to do this for a longer sequence of words, we can create a recursive Split type. The parsing continues until there are no more spaces to match, at which point we return the final iteration’s type param ( "today?" ), wrapped with brackets, so as to be spread into the return type of its parent call.
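The recursive Split type described above can be written as follows (TypeScript 4.1+). The original article's code figures are not reproduced here, so this is a reconstruction from the prose, not the author's exact code:

```typescript
// Recursively split a string on spaces. The template literal pattern
// peels off everything before the first space into Head; the remainder
// is fed back into Split until no space remains, at which point the
// final word is returned wrapped in a tuple, ready to be spread.
type Split<S extends string> =
  S extends `${infer Head} ${infer Tail}` ? [Head, ...Split<Tail>] : [S];

// The type-level result, surfaced as a checked value:
const words: Split<"how are you today?"> = ["how", "are", "you", "today?"];
```

Assigning `["how", "are", "you", "today?"]` to `words` type-checks only because the compiler has computed the tuple type `["how", "are", "you", "today?"]` entirely within the type system.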
This unraveling of a source string is the foundation of our parser. However, there is a distinct difference between runtime string parsing and type-level string parsing, and it’s a difference of which we’ll need to be wary: we want to parse one AST. When it comes to TypeScript 4.1’s pattern matching, this can be difficult. Let’s say we’re parsing the following Src string and we want to gather whatever characters precede the first instance of whitespace. Whitespace can be a line break, a space or a tab. How would we go about gathering the text following the leading whitespace? Your first instinct might be to match on Src like so: If we take a look at the tuple type contained within Parsed , we’ll see it’s not quite what we expected. We ended up with a tuple containing two string literal unions. These unions contain the possibilities as if we’d passed in each whitespace kind independently. With respect to type systems, this is desirable behavior. Yet it does throw a wrench in our approach to certain kinds of parsing: if we don’t account for branching, we can end up with an AST type that represents multiple ASTs (or even thousands!). How do we safeguard against unintentional variance? From here on, we’ll want to use a dedicated Match utility. Your first instinct may be to create Match as a recursive type that iterates over a string and checks whether one of the delimiters matches the left-shrinking slice (n²). Already, recursion limiting is of concern. If we use such a Match utility within other recursive types and their lineage (for instance Schema.Parse → Object.Parse → Field.Parse → Input.Parse → Directive.Parse → Value.Parse ), our CPU could very well turn into a modern-day Chernobyl plant. I’ll share the solution which worked best for me. While it is just as complex, it incurs less recursion limiting, ostensibly because of the checker’s mapped-type optimizations. 1.
Create a Match type for convenience of later access ( M extends Match<infer X, infer Y, infer Z> is simple to reason about within conditional branches). 2. Create a type which accepts a source string and a union of delimiter strings, and gives us the union of delimiter-leading strings. For instance, Leading<"a b c d", "a" | "b" | "c" | "d"> would give us "" | "a " | "a b " | "a b c " . 3. Create our Match type as follows. And voilà! We’ll be using the Match.First utility and Match types in most of our node-specific parsers. Here’s a quick look at how we can use this type: So, we’ve outlined our approach to node factories and to safely matching the first occurrence of a member of a union of delimiters. Now, let’s parse some GraphQL source! Specifically, let’s parse the fields contained within an enum. Here’s a complex example of what we may encounter: The CAT option has the directive mean , which accepts no input. The HAMSTER option has a description. And the DOG option has a description and two directives, one of which ( breed ) accepts an input with a single field name . Building off of the aforementioned Argument.Parse , we’ll first need to define a Directive.SequenceAndTail.Parse . This utility type will accept whatever slice follows the enum value, and give us a list of directives (which could be empty) followed by the tail (the next enum option and beyond). Our Directive and its widened type will look like this: Next, we’ll create a type that encapsulates the return of our Directive.SequenceAndTail.Parse . A tuple should be just fine. And finally, we’ll define Parse . Let’s break this down a bit.
The type params are (A) Src , the source string in need of parsing, (B) Directives , the previously accumulated list of directives to be returned upon encountering a non-directive, (C) Trimmed , Src , stripped of its leading whitespace, and (D) FirstMatch , which defaults to the result of Match.First , checking for the @ symbol (signaling a directive) or whitespace on Trimmed . If the first match is in fact on whitespace, then there’s no leading directive, and we can return whatever’s been accumulated in a Directive.SequenceAndTail . Otherwise, we’ll want to gather the name and (optional) arguments from the currently-visited leading directive before passing the formed Directive node into the next SequenceAndTail.Parse call. Now that we defined the directive and tail parsing, we’ll want to do the same for descriptions. And finally, we can dive into Enum.Value.Sequence.Parse : First, we define Enum.Value.NameAndTail.Parse to gather the leading value name and its tail (which may contain a directive or sequence of directives). And then we tie the sub-parsers together. Notice how their tail-calls coalesce quite legibly. But wait! Type instantiation is excessively deep and possibly infinite. Although we know that Src is eventually whittled down to an empty string, and that the Tail2 extends “” check protects us from infinity, the checker is cramping our style. This simple trick gives us the upper hand: 1. Define a SafeTail utility type. 2. Use the SafeTail utility to (effectively) re-alias Tail2 . Our parse type will now look like this: And with that, we say goodbye to pre-emptive recursion limiting. This is the essence of any type-level parsers in TS 4.1: we can define recursive tail call types, which produce wrappers of specific node types, which can be neatly unwrapped inside the conditional branches of other types. When we encounter recursion limiting, we use SafeTail , and we slowly build up to our elder nodes. 
The Requiem You can get all the way to the tippy top of your tree––all the way to the elusive Schema.Parse and Document.Parse utility types––only to have your dreams crushed upon instantiation (as mine were). Type instantiation is excessively deep and possibly infinite. If matching first-characters wasn't so complex, perhaps we could get further. Perhaps we could parse 100-LOC schemas. Maybe even 1000-LOC schemas. But what happens if we're trying to work with a schema such as GitHub's (30K+ LOC)? Even if we could overcome the recursion limiting, this would be very taxing on semantic analysis. The checker is NOT incremental. According to Ryan Cavanaugh (one of the TypeScript program managers), although the GraphQL source may remain unchanged between edits, its computed structure is not preserved. Cavanaugh continued… I asked for more detail: His response seemed to address mechanics, not ideology. Moreover: And while that subtle jab at Flow made me smile, the pursuit of my DX vision came to a grinding halt. It was officially time to stop development and write a blog post or something. Aside from performance, there doesn't seem to be a "strict" reason not to map DSL types into the TypeScript environment. However, it does seem aggravating that we'd need to trade battle-tested open-source parsers for those written in the extremely-limited type-level language. It also seems that––putting performance hurdles aside––this mapping "should" be possible. What would need to come into place to ease the creation of type-level parsers? Let's now speculate about the future of TypeScript. Potential Future TypeScript Features Matching To be conservative in the face of recursion limits, we created a parser without a lexing step. This isn't necessarily preferable. Normally, we might prefer to define and match rules against a stream of tokens. If we were to define such rules in the type system (imagining type-level parser generators), we would likely represent these rules as tuples.
In a single GraphQL object field––for instance––we have a group of characters, followed by a colon, followed by camelcase characters, followed by an optional sequence of directives, which may or may not contain arguments. How would we try to represent this rule? Spoiler: we cannot, and the following is invalid. Although the above does not work, the hope would be that we could iterate over the tokens and check––in order––which of our rules (such as FieldRule ) match. From there, we could decide whether to process the next tokens as belonging to a sibling node, or decide to descend or ascend. Beyond the illegal treatment of arrays, the code above assumes that AlphaCharGroup and CamelCaseCharGroup can be matched like subtypes, i.e. that "HelloWorld" extends CamelCaseCharGroup . This is not the case. Let's look at another example: differentiating between an integer string and others. How would we model the type-level check to determine whether a string represents an integer? First we would create an AreDigits utility type, which returns true if all characters in Src are digits. Next we would use AreDigits within an IsInt type (which allows for a single leading sign). How would we then use this utility type to match an integer string? We could of course check to see if IsInt<"1618"> extends true . But we cannot use IsInt as part of a pattern (for instance, checking whether X extends `hello ${IsInt}` ). This is very limiting in the world of parsers. Fortunately, this could soon change for TypeScript, should "Conditional Assignability"––a proposed language feature––make its way into future versions of TypeScript. Here is a conditional assignability snippet written by Anders Hejlsberg (TypeScript lead architect / Obi-Wan Anders): The look of this is strikingly similar to that of parser combinators. Conditional assignability would give us a greater simplicity leap than that of promise callbacks to async-await. Moreover, we could begin to model type-level parsers as rule-based.
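One plausible implementation of the AreDigits and IsInt utilities described above (the article's exact code is not reproduced in this text, so treat this as a sketch that follows the stated spec: all-digit check, plus a single optional leading sign):

```typescript
// The ten digit characters as a union.
type Digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9";

// True when Src is non-empty and every character is a digit.
type AreDigits<Src extends string> = Src extends ""
  ? false
  : Src extends `${infer Head}${infer Rest}`
    ? Head extends Digit
      ? Rest extends "" ? true : AreDigits<Rest>
      : false
    : false;

// Allow a single leading "+" or "-" before the digits.
type IsInt<Src extends string> = Src extends `${"+" | "-"}${infer Rest}`
  ? AreDigits<Rest>
  : AreDigits<Src>;
```

So IsInt<"1618"> and IsInt<"-42"> resolve to true, while IsInt<"4a2"> and IsInt<""> resolve to false; as the article notes, the limitation is that IsInt can only be queried like this, never embedded inside a template-literal pattern.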
For now––however––we're stuck. And even if we could effectively match arrangements by constraints other than subtype, look-ahead is limited. This makes it difficult to mark regions as off-limits for certain kinds of processing (such as in comments and descriptions). When it comes to comments, the temporary solution is to do a first-pass strip of all commented lines (splitting the GraphQL source by # , recursively stripping trailing text until a new line, and then recombining the remaining text). This is costly. Errors Who doesn't love a good " never -hunt"––that is, debugging unintentional bottom-up never propagation. Much of the time, this has to do with conditional branches for which we did not want to account in the first place. Why is it that we end up writing never ? Moreover, why is it that––when we have multiple conditions that evaluate to the same branch––we need to repeat them? For instance, let's say we have three booleans, and we want a type that evaluates to the number of truthy values. If writing runtime code, we'd do something along the following lines. This makes sense. This is easy to read. Meanwhile, the type-level equivalent is harder to grok. When we have deep nesting of conditions, this becomes unwieldy, as the size of our code can become exponentially greater with each level of nesting. While the technique I showed (result-and-tail-type unwrapping) somewhat helps with this, the presence of excessive statements is stark. If you want to push for cleaner expression of conditional types, please upvote this issue. Aside from the ergonomics of never "domino-ing" through conditionals, we lack type-level error throwing. While there is a draft PR out to address this shortcoming, it is unclear whether this is currently a priority for the TypeScript team. Meanwhile, I put out my own proposal on type-level erroring, although I'm not sure I fully stand by it in retrospect.
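The runtime-versus-type-level contrast described above can be sketched like this (the names countTruthy and CountTruthy are illustrative, not taken from the article):

```typescript
// The runtime version reads naturally: filter and count.
function countTruthy(a: boolean, b: boolean, c: boolean): number {
  return [a, b, c].filter(Boolean).length;
}

// The type-level equivalent needs one nested conditional per branch,
// and branches that yield the same result still must be spelled out
// separately -- this is the repetition the article complains about.
type CountTruthy<A extends boolean, B extends boolean, C extends boolean> =
  A extends true
    ? B extends true
      ? C extends true ? 3 : 2
      : C extends true ? 2 : 1
    : B extends true
      ? C extends true ? 2 : 1
      : C extends true ? 1 : 0;
```

For example, CountTruthy<true, false, true> resolves to the literal type 2, while the same question at runtime is a one-liner; with each extra boolean, the type-level version doubles in size.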
Type-level erroring would allow developers to communicate unmet constraints within conditional types, and would likely take over for ESLint in the long term. Recently, I've stopped using ESLint. I still use Prettier (I soon hope to switch to DPrint), but the TypeScript compiler covers most of my static analysis needs. ESLint is starting to seem trivial. Documentation Type-level documentation is one of the most underrated features of TypeScript: the documentation flows through re-aliasing and mapped types as if part of the language itself (and if you're judging based on the compiler internals, it is!). While it abides by a particular control flow, documentation is not accessible within the type system, and it's difficult to be deliberate about where and how to assign and validate the documentation. For the GraphQL use case, this is a major problem, as we want the GraphQL descriptions to flow into the TypeScript environment. Let's say we have an object type Person . In the resulting TypeScript type Person , we would want the name field to contain type-level documentation drawn from the GraphQL field description. We currently have no way of specifying this behavior. Please upvote this proposal to help secure type-level documentation's place in the future of TypeScript. Conclusion There's a lot of fun to be had with type-level programming. There's also a lot of excess. While I love the idea that––through the type system––one can make an API impossible to misuse, there is a point at which type-level considerations hinder productivity. I'm trying hard to steer clear of the "ideal" in favor of the "actually completed and released." I'm coming to terms with the fact that this is my major shortcoming as a programmer: I don't always code with an explicit purpose or achievable goal. I code to explore uncharted territory to trigger the release of dopamine in my brain.
Through this process and through interacting in TypeScript issues, I met someone with whom I’ve had some of the very best of conversations: an incredibly creative and thoughtful individual by the alias of tjjfvi. Definitely worth following him/her on GitHub. They have chosen to remain anonymous. Could be Elvis on a private island for all we know. Death faked. Mojito in hand. Palm fronds rustlang… I mean rustling in the wind. I want to give a huge thank you to my mentor and close friend Sam Goodwin. Your wife is too good for you. Thank you all for reading. Here is my twitter. My mom is my only follower. Please help a guy out.
https://harrysolovay.medium.com/type-level-graphql-parsing-and-the-future-of-typescript-46277d9e0667
['Harry Solovay']
2020-12-03 18:35:28.383000+00:00
['Future', 'Typescript', 'GraphQL', 'JavaScript', 'Parser']
Histograms with the Matplotlib Library
https://medium.com/datarunner/histogram-d69955f22379
['Mert Alabaş']
2019-08-28 09:24:49.915000+00:00
['Python', 'Histograms', 'Matplotlib', 'Veri Görselleştirme', 'Data Visualization']
Designing the UI of Google Translate
The design story behind making Google Translate's camera feature and real-time conversation mode more discoverable Humans have been fascinated with magical creatures or devices that break language barriers for as long as languages have existed. At Google Translate, we have the opportunity to work on a product that helps us get one step closer to realizing the magic. Discovering the magic Google Translate's instant camera translation and real-time conversation modes never cease to amaze our users. But beyond their wow-inducing nature, they are extremely useful as well, helping language learners, students, tourists, and all sorts of people who need to break through language barriers. The problem Unfortunately, many of our users did not know these features existed. We first got a hunch that this was true when we noticed people asking for features we already had during user research interviews. We confirmed it by running a survey, which showed that 38% of our users did not know we had these tools.
https://medium.com/google-design/a-fish-in-your-ear-134deed70268
['Pendar Yousefi']
2019-06-05 18:58:00.850000+00:00
['Case Study', 'Google', 'Translation', 'Design', 'UX']
US vs China in AI — A Realistic, No B.S. Assessment
With headlines like the ones below appearing in respectable publications, one could not be faulted for being misled into believing that China might already be ahead of the US in artificial intelligence (AI). But is it really? Let's take a closer look beneath the headlines and the hype… Quantity does not equal quality Yes. China has applied for a lot of AI patents, and the rate is growing the fastest in the world. But if you read this article closely, it also goes on to point out, "Though China has made more applications, it still has fewer high quality, high-value patents than world leader US and Japan…" Companies looking to secure high-value intellectual property typically apply for an international patent via the Patent Cooperation Treaty, where the US makes up 41 percent of all applications — more than any other country — indicating its national strength in the sector. — Chinese Firms Apply for More AI Patents Than Any Other Country, Yicai Global And in terms of turning those patent applications into actual businesses generating significant revenue and brand power, China is still very far behind. According to research done by Tsinghua University, one of China's 'Ivy League' schools, only one Chinese entity ranks among the top 10 global AI patent owners: the state-owned power grid enterprise. Source: China Institute for Science and Technology at Tsinghua University Talent is still hugely lacking and lagging China's AI workforce is also still far behind the US in both quality and quantity. China has 18,232 AI talents, compared to 28,536 in the US. More importantly, only 5.4% of the Chinese talents are considered 'outstanding', whereas the proportion is 18.1% in the US. In case you think the study might be biased, note again that the numbers were counted by Tsinghua University.
Source: China Institute for Science and Technology at Tsinghua University To dig further, in a survey done by China Money Network, the founders of China's top 50 AI companies were mostly educated abroad, including at Harvard, Brown, and several other US schools. Source: China Money Network Funding vs Value created Here's where the picture starts to look a bit bleaker for the US. Research firm CB Insights found in their 2018 AI Trend Report that China's AI startups overtook the US in funding amount in 2017 for the first time ever, spiking up to 48% from just 11.3% in 2016. Source: CB Insights To be fair, the US still has a lot more deals than China — 50% of the global total compared to China's 9%. So that means the US still has a lot more AI startups being funded, though clearly China saw some very large deals in 2017. Otherwise, it wouldn't have been able to account for nearly half the funding amount with just 9% of the global deal volume. But that dominance in deal numbers is also trending down for the US, as shown by this chart below: Source: CB Insights And here's a less talked about fact that should make the Americans worry a bit. Almost half the money invested into AI by the three Chinese tech giants — Baidu, Alibaba, and Tencent — over the last four years went to US startups. Source: CB Insights Catching up? Nonetheless, even though China is clearly still some distance behind the US in AI, it doesn't mean it can't catch up, or even overtake. Indeed, if anything, money might be a key factor in China catching up. "We often believe that the US is so far ahead, that the US invented everything…And it has dominated the tech scene, but a miracle happened 10 years ago. That miracle is money, of course.
The Chinese government has vowed to become a world leader in AI by 2030 with its strategic roadmap…Entrepreneurs and VCs started pouring cash into new AI startups." — Kai-Fu Lee, speaking at the O'Reilly Artificial Intelligence Conference in September 2018 Lee was formerly the president of Google China. He is now one of the most famous venture capitalists in China. Wired magazine calls him "something of a rock star in the Chinese tech scene, with more than 50 million followers on social networks within the country". Lee has helped launch five AI companies now worth a combined US$25 billion. He's written about it in a new book called 'My Journey into AI'. In his book, besides money, he said that China also has the advantages of very hungry entrepreneurs and abundant data on its side in the AI race. His old boss at Google seems to agree with him. Eric Schmidt, CEO of Alphabet (Google's parent company), said in November 2017 that the Chinese central government's AI development strategy should "set alarm bells ringing in America." "… the U.S. needs to get our act together if it doesn't want to fall behind on the technology that could determine the future of both the defense and commercial sectors… It's pretty simple. By 2020, they will have caught up. By 2025, they will be better than us. By 2030, they will dominate the industries of AI." — Eric Schmidt Maybe it comes as no surprise that the folks from Google are the ones ringing the alarm bells. It could be their own regrets knocking on their conscience. Their China equivalent, Baidu, started working on AI long before they did… Source: CB Insights So the bottom line is this: China is still lagging in innovation and talent. But with money, highly-driven young entrepreneurs, and government support, China has a good chance of dominating the world in AI. Most of the top AI startups are focused on software aspects like natural language processing, computer vision, and voice recognition.
Given my own limited experience with some of their products, I would say their technology is as good as, if not better than, that of western competitors already. Here are two more useful infographics on the valuations and spread of the top Chinese AI companies.
https://medium.com/behind-the-great-wall/us-vs-china-in-ai-a-realistic-no-b-s-assessment-a9cef7909eb6
['Lance Ng']
2019-01-26 01:55:14.016000+00:00
['China', 'Technology', 'USA', 'Artificial Intelligence', 'AI']
Clean Code — A must-read Coding Book for Programmers
Clean Code — A must-read Coding Book for Programmers Want to learn the art of converting bad code into good code? This book can help image_credit — Clean Code book Even though the Clean Code book was released many years ago and there are lots of good reviews already available, I couldn't resist writing about my own experience of this great book. I came across this book many years ago; since then, I have read it multiple times, and I have recommended it to my readers, students, and fellow developers. It is one of those books that make you wonder why you didn't come across it earlier; I felt exactly that when I first learned about this book. The Clean Code book is all about writing good-quality code, and how do you judge the quality of code? Well, you won't appreciate good code until you have seen bad code, and that's what this book shows you. It first presents code which is ugly, hard to read, hard to understand, and hard to maintain, and then Uncle Bob goes step by step through refactoring that code, converting it into a masterpiece you would be proud of writing. You will get a taste of how to convert bad code into good code when you first read his example of an algorithm to generate the first 100 prime numbers; he explains how to write clean code using the Sieve of Eratosthenes in a very nice way. Here is another example of good code vs bad code which I found on LinkedIn, and it nicely captures the idea of clean code. image_credit — Pushkar Some of you might ask: what is all the fuss about clean code? If code functions, isn't that enough? Well, it's not. We all thought that way when we wrote programs in a computer science lab, in our educational projects, and on our semester practical exams, but the real world is totally different; here you learn the art of writing clean code. One reason for that is that it's not throwaway code; it will remain around longer than you expect.
For example, some of the pricing systems in investment banks are still running on mainframes and are more than 40–50 years old. If code isn't clean, it can bring a company to its knees and reduce its ability to remain competitive by providing cutting-edge solutions. Since code needs to be maintained for most of its lifespan, it must allow you to maintain and extend it, and that's what clean code does. Review of Clean Code Book The Clean Code book is well structured and divided into three main parts. The first part talks about the principles, patterns, and practices of writing clean code. This is where I first learned about the SOLID design principles, and it changed the way I write code. For example, if I hadn't known about the "Open Closed design principle," I wouldn't have understood the full power of Polymorphism and Abstraction. These little principles not only help you to understand fundamentals better but also help you to write better code, which is easier to understand and maintain. The second part is full of real-world case studies of increasing complexity. Each case study is an exercise in turning bad code into good code, something which is easier to read, understand, and maintain. By the way, clean code is not just about architecture but also about debugging and performance; code that is easier to read is also easier to debug and optimize. The title "Clean Code: A Handbook of Agile Software Craftsmanship" fully justifies the content inside the book, because coding is no less than craftsmanship, and his argument that "Even bad code can function. But if the code isn't clean, it can bring a development organization to its knees" is perfectly valid. The third part is the most important, a payoff: a single chapter containing a list of heuristics and code smells gathered while creating the case studies. You can use this chapter as a knowledge base that describes the way we think when we write, read, and clean code.
The book is full of programming best practices, by which I mean correctly naming variables, classes, and methods, some of which you can find here as well. It also puts a lot of emphasis on unit testing and test-driven development, which is one of the attributes of a professional programmer and something which distinguishes them from amateur programmers. In short, it is a must-read book for programmers, and after reading this book, you should be able to: tell the difference between good and harmful code; write good code and transform bad code into good code; create good names, good functions, good objects, and good classes; format code for maximum readability; implement complete error handling without obscuring code logic; and unit test and practice test-driven development. Btw, if you are interested in not just a book but also some online training courses to improve code quality, I suggest you check out the free course, Clean Code: Writing Code for Humans by Cory House from Pluralsight. You can get it absolutely free by signing up for the 10-day free trial, which allows you to watch 200 minutes of any course. Anyway, Pluralsight is full of such gems, and its monthly membership is something every programmer should consider having. That's all about "Clean Code: A Handbook of Agile Software Craftsmanship," one of the must-read books for any developer, software engineer, project manager, team lead, or systems analyst who wants to write better code. So, if you just want to do one thing at this moment, just go and read Clean Code. It's worth every penny and every second you spend. Other Programming resources and Books you may like Thanks for reading this article so far. If you like this article then please share it with your friends and colleagues. If you have any questions or feedback then please drop a note. P. S.
— If you are looking for online courses to learn Design patterns for writing clean code then I also suggest you take a look at the Design Pattern in Java course by Dmitri Nestruk on Udemy. It will greatly improve your understanding of writing robust and easy to maintain object-oriented code in Java.
https://medium.com/javarevisited/clean-code-a-must-read-coding-book-for-programmers-9dc80494d27c
[]
2020-06-14 10:00:33.687000+00:00
['Programming', 'Coding', 'Java', 'Software Development', 'Books']
When routines fail, opportunities to innovate flourish
Businesses everywhere are experimenting with ways to tackle the obstacles imposed by the pandemic. For most, this is a process of trial and error. They either apply existing solutions that they are personally familiar with or copy as best they can what others around them are doing. But in some instances, like ETEN's repurposing of micro-greenhouses, experimentation leads to genuinely novel solutions that reimagine products and services in new and exciting ways. So how do we go from deploying existing solutions to generating inspiring new approaches? It starts with how we think about resolving challenges: do we try to reconstruct the preexisting experience (even if the results are at best Frankenstein-esque recreations), or pioneer new paths to discover new possibilities? Reconstruct: restoring the status quo, or some version of it Many of our day-to-day actions unfold without much thought. We repeat routines and behaviors, allowing our experiences to guide us. The more ingrained and institutionalized they are, the less we think about them. For instance, before the pandemic, eating close to others was the norm, never crossing anyone's mind as a serious threat to public health. As creatures of habit, we go through life like this until one of our habits or routines stops working as anticipated. When this happens, the particular stream of action we are in hits a roadblock, bringing the conflicted situation to our conscious awareness. With our automatic responses impaired by the roadblock, we consciously evaluate what other options in our repertoire of personal or socially observed experiences might help us restore the situation so that we can carry on. In most instances, quickly deploying a readily available alternative is sufficient to restore the situation. We saw this happening all across the country after states and municipalities imposed social distancing measures.
The overnight introductions of takeout and make-at-home kits by elite restaurants in New York City, like Masa and Rezdôra, are prime examples. Neither strategy was part of their standard playbook; however, without the ability to host guests and with exorbitant overhead costs piling up, they had to explore new ways to generate revenue. While takeout is novel to elite three-star New York Times reviewed restaurants, it is standard practice for most. This provided an immediate and easily deployable solution that, apart from the price point, was familiar to their customers. Rezdôra's make-at-home meal kit instructions Ultimately, Rezdôra, Masa, and, most recently, Eleven Madison Park are trying, as best they can, to continue serving their customers within the parameters of government restrictions. All with the hope of weathering the storm and returning to their pre-pandemic operations and interactions with their guests. Omakase, after all, is an intimate and evolving interaction between you and the sushi master. If they cannot react to your every expression, playing to your likes and curiosities, the experience falls flat. Sushi masters at Sushi Gen in Los Angeles's Little Tokyo Reimagine: pioneering a new path to new possibilities Ruptures in established routines and behaviors offer unique opportunities to reimagine how we think about and do things. During stable times, the social structures we are enmeshed in sustain and reinforce our routines and behaviors. We have all experienced the power of peer pressure. When we try to deviate from the norm, the group's force pulls us back in line. (These same dynamics are at play at the organizational level, constraining how businesses think, act, and look.) When ruptures are systemic, throwing everyone into disarray, the power of conformity briefly loosens its grip. During these moments, we have the unique opportunity to construct new routines and behaviors, with the potential that they will catch on and become the new norm.
So, how do we start pioneering new paths and possibilities? Especially given that how we see the world around us is hemmed in by our past experiences, social contexts, and technical constraints — all exerting forces that narrow our field of vision? Expanding our field of vision begins by replacing our current lenses with ones that provide new ways of seeing what we are tackling. Stripping down products and services to their core fundamentals is an effective way to gain new perspectives. Focusing on the fundamental characteristics of what we are exploring exposes the taken-for-granted assumptions and expectations that we have come to accept unconsciously. Through this awareness, we can ask how we can approach these core characteristics in new ways that better meet consumers' evolving needs, wants, and desires. Apple did precisely this in 2007 when they introduced an entirely new take on what a mobile phone could be — leaving BlackBerry and the rest of the industry barreling down a path that no longer reflected consumers' evolving desires. Yet no one apart from Apple seemed to see the emerging rift. Research In Motion, the creators of the industry-dominating CrackBerry (as many came to refer to the addictive device), paid dearly for not returning to the fundamentals and exploring how else they could address the shifting preferences and desires of their consumers. Ferran Adrià, regarded as one of the most innovative chefs in history, attributes his team's ability to imagine new possibilities to their rigorous approach to challenging the status quo. To create the awareness required to make apparent what we have taken for granted, his team starts by deconstructing what they are investigating to its first principles.
Their approach includes breaking down definitions, classifications, the cultural contexts within which the thing lives and is expressed, the end-to-end production, marketing, and sales processes, how different groups experience it, and the consequences and implications of these experiences. To add further depth to this complex web of knowledge, they also track each element's historical trajectory to understand its evolution from conception to current iteration. With this foundational understanding, they can move from asking "what is" to "what could be" questions, guiding exploration and experimentation into the new and unknown. While it is not always necessary to go to the lengths of Ferran Adrià, we must approach moments of rupture in our routines by exposing the taken-for-granted assumptions that they are built upon if we hope to create new possibilities. Sakichi Toyoda's Five Whys interrogation methodology is an excellent start to provoking the awareness required to see things differently and open ourselves to new and unanticipated possibilities. Approaches on a continuum: or, how much new perspective must be gained? Moments of rupture in our day-to-day routines and behaviors offer us tremendous opportunities to bring about change. Rather than trying to reconstruct our pre-pandemic lives, we can create new possibilities. Or, at the very least, meaningfully transform existing ways of thinking about and doing things. Understanding which of the two approaches to use, however, isn't so clear. It requires us to take a step back and ask if an area of life is worth preserving, and if so, why. Are we holding on to something just because we are used to it, or does it offer us meaningful value that is worth preserving? At the extremes, it is easy to decide which approach to take. Preserving the intimate interaction between a sushi master and guest offers us rich experiences with the possibility of exposing us to new things.
Let’s bring those back as soon as safely possible. Going to the office five days a week, just because we have done so since the invention of the clerical class during the industrial revolution, should be reimagined. Let’s not fall back on old paradigms of work. Challenges that fall between the two poles are the most perplexing to resolve. Areas of life that require a mix of reconstruction and reimagining. Here we need first to identify what is valuable to hold on to and where opportunity spaces exist to reimagine the fundamental characteristics of the thing we are tackling.
https://uxdesign.cc/when-routines-fail-opportunities-to-innovate-flourish-d58a9aea1cc8
['Andreas Hoffbauer']
2020-11-11 14:07:52.615000+00:00
['Innovation', 'Business', 'Startup', 'Creativity', 'Experimentation']
Everybody Is Welcome(d) To Thinking UP!
Everybody Is Welcome(d) To Thinking UP! A new Medium publication that promotes intellectual creativity Today we live in a world where drastic changes are the norm. Every day, new technological, economic, social, and political innovations impact the way we interact with our environment, our relationships, and our subjectivity. Caught up in such a movement, we find ourselves reduced to two choices: to follow the movement, or to make a stop and risk being overwhelmed. There is, however, a third path: that of artistic, intellectual, emotional, and social creativity, enabling us to empower ourselves and to turn the evolution of society in our favour. What this new publication aims to achieve is to make intellectual and creative resources accessible to an audience that has a sincere desire to make its mark on this world. Perhaps a number of you, like me, have been fortunate enough to be nourished by a deep intellectual culture (scientific, philosophical, literary, artistic) found mainly in books. It has certainly enabled you to nourish yourselves day after day with new models of thinking and to adopt a new vision of everyday problems. Unfortunately, these cultural resources are too often guarded by an intellectual and academic elite more concerned with dwelling on the sum of its knowledge than with making it applicable to our ways of acting, creating, and deciding. And God knows that our actions, decisions, and emotions make more sense with thoughts that are worthy of them. My publication is open to all types of writers who wish to share their own creative and intellectual discoveries to change the world around them. Simply submit your draft article at [email protected], and we will take care of bringing your insights to an audience. Take yourself to another level by subscribing and joining us!
https://medium.com/thinking-up/everybody-is-welcome-d-to-thinking-up-899229b495ea
['Jean-Marc Buchert']
2020-01-11 17:02:48.512000+00:00
['Intelligence', 'Thinking', 'Creativity', 'Writing', 'Self Improvement']
Jay-Z Would Say You Can Work For Yourself — Just Not Yet
It’s great to be your own boss, but you don’t need to rush the process “If you can’t buy the building, at least stock the shelf (word) Then keep on stacking ’til you stocking for yourself, uh” — Jay-Z, “Entrepreneur” Talk to me, maaaan! We back at it with another installment of your soon-to-be-favorite Medium column, “What Would Hov Do?,” where I guide you through the pitfalls of life with the help of Jay-Z’s discography. The feedback has been love. I’m throwing up the Roc sign as we speak. Another week, another question. Our anonymous reader asks: “I resigned from my job in the middle of the Covid-19 pandemic because I wasn’t feeling it. Now what do I do — look for another job or start my own business?” This is a really good and fair question. The pandemic has forced many of us to re-evaluate the systems that we’ve adopted as normative, right? Before the whole globe shut down, getting your boss to approve remote work (if you’re even in a privileged enough space and industry to do so) was like battling them dragon-vampire-dogs on Lovecraft Country. We’re all looking at how we can be less reliant on the corporate machine to provide for ourselves and our families, including our rescue dogs. But with that, some of us have opted not to wait for that deluxe apartment in the sky from a brand, a company, or a pyramid scheme. We are venturing out on our own. Hov knows all about this. But what’s interesting is that on the Pharrell-produced “Entrepreneur,” the man who once told us boldly, “I am not a businessman, I’m a business, man” tells us that it’s okay to work for the man while working to become the man (or woman, or just becoming). I bring up the example of The Infatuation a lot, the restaurant recommendation and messaging site started by two former music execs. They worked on the platform while holding down their jobs.
They didn’t leave their 9 to 5 until they felt comfortable enough financially to leap into being full-time entrepreneurs leading the company, which recently bought out Zagat and has now raised an estimated $33.3M in funding. All that to say — you don’t need to make your major decision now. You can actually do both, homie. Seek another job, and maybe find one that allows you to somehow leverage whatever work you will eventually put into your own company. It’s a lot more work, but I will always advocate for getting grounded enough so that your company has a solid foundation to build on. Who’s to say your next gig won’t connect you with the folks who will be able to help your company in some way? You can still work for someone else while working on and for yourself. And then, when it’s time to dip, you chuck the deuces in a way that is more viable and healthy for you. Bet on yourself, but don’t gamble it all to reap the fruits of your labor — Hov’s orders. For the newbies out there, feel free to get familiar with some of my previous Medium work here. And here. Oh, and here too.
https://iamjoelleon.medium.com/jay-z-would-say-you-can-work-for-yourself-just-not-yet-9f3295a8a7e
['Joel Leon.']
2020-10-16 14:52:11.476000+00:00
['Life Lessons', 'Business', 'Music', 'Entrepreneurship', 'Life']
How to survive and thrive during the 2020s
Time for playing or exploring is not wasted time. We need to provide ourselves more opportunities for pure play, exploration, and adventure. This means sometimes we will be wasting time, and that is OK, as long as we keep learning and exploring. Even if we have no idea how to solve a problem, we can still figure out what to do and improvise. We may not know how to go forward, but we can always learn, figure out a strategy, iterate, develop, practice, and improve. We need to take more risks — it is OK to look stupid and try out new things. Change is always frustrating and uncomfortable, but it is the only way forward. We need to treat every experience as a learning experience. Go out there and start again. Embrace failure Perhaps we will suck, and that is fine — we need to stop taking ourselves so seriously. If we really want to get lucky in the long term, we need to provide ourselves more opportunities for failure. Failures are merely stopping points along the bigger journey. We need to celebrate our failures and use them as learning opportunities. We need to develop more grit and resilience to get back up after a failure. Take a long-term view Imagine your best future self after 10 years. Start acting like that person today. Act like the CEO of your own life. Where do you want to be in 10 years? Do not leave it to others to choose your destiny. Make a plan, and work toward it. Take a small action every day, consistently. You need to think and act long term and not give up on your dreams. Even if the whole world is turning against your ideas, you need to hang in there and be patient. J. K. Rowling was rejected by at least a dozen publishers before she was able to publish Harry Potter. Star Wars was rejected by many major studios until it became a household brand and a franchise worth 70 billion dollars. We need to remember that this is a marathon and we are playing the long game. We should not be discouraged by failures and rejections.
Keep imagining Imagination has become one of the most critical factors for success and happiness in this age. We can use imagination to tap into rich worlds of possibility, to dream about our lives, to invent new things, to create new theories, and to share our stories with the world. Imagination allows us to act like children, be foolish and curious, let go, have fun, mess things up, get out of the rut, and invent new ways of thinking. These actions will increase our neuroplasticity and enable us to come up with a constant stream of fresh ideas. The fascinating thing about imagination is that it is unlimited. The more you use it, the more of it you will have. Imagination is the key to success, since each plan starts with an image. This might be a vision, a dream, a possibility, or a mental picture of what you want to turn into reality. Imagination is your blueprint, your compass, and your map for navigating the future. Imagination allows us to communicate with our subconscious mind and create positive mental imagery. If we believe that our dreams can be realized, we will work harder towards making them happen. Imagination is like a muscle: it can be strengthened through practical exercises and experiments we can easily apply in our daily lives. How do you exercise your imagination regularly? You can give your brain puzzles, adventures, problems, questions, experiments, visualization exercises, and challenges every day. You can hunt for the most interesting stuff. You can spend time every day learning new things that excite and surprise you. You can be a hunter and learner of the most interesting things. You can provide yourself more opportunities for pure play, exploration, and adventure. Similar to Bill Gates’ think weeks, schedule time in your calendar for learning and exploring new things. Time for playing or exploring is not wasted time. You can be obsessive in following your passions and curiosities.
Keep learning Invest in yourself and your learning every day. This is the biggest investment you can ever make. Be an autodidact: a self-taught individual who initiates and manages his or her own learning and reads voraciously. Assume responsibility for your own learning. Go out of your comfort zone. Learn outside your discipline. Think and act wider. Education as usual is dead. Long live lifetime learning! Be a polymath: an individual whose knowledge spans a significant number of subjects, known to draw on complex bodies of knowledge to solve specific problems. To be a polymath, expand your horizons and read books from diverse disciplines. Be greedy about your learning. Read widely and diversely beyond disciplines. Read at least 100 books every year (roughly two books per week). To solve the wicked problems of the 21st century, you need to think beyond borders and disciplines. Cross boundaries — there are no borders. Play the long-term game. Imagine that you are running a marathon. Image created by Author Below is the new success equation: Ask Right Questions + Follow Your Interests + Intense Curiosity + Continuous Learning + Imagination + Passion You need to design multiple lives and multiple careers for your future. You are not cut out for just one job. Your imagination is without boundaries — why limit yourself to one job or one career? Think about where you will be in 10 years. Act according to that vision. You can be as big as your dreams and ambitions. So dream big dreams, be specific in your vision, and capture your wishes and desires in your diary. Try to master the mental models of different fields and use them to develop a holistic perspective and make better decisions in your life. Live exponentially and compound yourself for the long term. When you are setting your goals, always leave room for X objectives (unknown goals). X objectives are goals that you cannot see right now. They will emerge as you navigate uncharted territories.
How can you compound your skills and assets for the long term? Image created by Author Design your life by surprise. Leave more room for flexibility, serendipity, exploration, and adventure. It is not easy to scare yourself and set yourself new challenges and adventures. In order to do this, you must get out of your comfort zone. You must set sail for new horizons and explore your own blue oceans. If you do this, you will be rewarded. You will be happier. Forget the career ladder and start creating your own assets. Remember: imagination and asset creation are very closely linked. Imagine and create your own game. It is never too late to follow your interests, curiosities, and passions. Show up for creative work and use random prompts or anchors to get going. Start small — small is beautiful. You need to establish a system of creativity to create your own assets: Consistent Small Actions + Smart Moves + Hard Work + Play Your Game Image created by Author You need to astonish and amaze yourself every day. You can use improvisation to increase adventure and quality in your life. Try to apply automatic writing, drawing, doodling, story-telling, ideating, designing, creating, dancing, and singing in your life. Image created by Author Create your personal poster and manifesto. The world needs interesting, unique, weird people — like you! What kind of super-hero are you? How will you help other people? Brainstorm ways to amplify your strengths. Image created by Author This is a marathon and you are playing the long game. If you really want to get lucky in the long term, you need to provide yourself more opportunities for failure. These strategies will enable you to develop creative solutions, experiments, and innovations for the challenges of the new decade. This essay has attempted to outline a vision and strategies for creating a truly creative life, ready to embrace this crazy world. The new decade is the right time for you to boost your creativity and imagination.
You can design and create a happy life for yourself by designing a decade full of creative challenges and entrepreneurial adventures. You can create your own renaissance during the 2020s. Fahri Karakas is the author of Self-making Studio. You can explore more here.
https://medium.com/journal-of-curiosity-imagination-and-inspiration/how-to-survive-and-thrive-during-the-2020s-38050a04487f
['Fahri Karakas']
2020-08-09 21:11:56.451000+00:00
['Work', 'Self', 'Technology', 'Productivity', 'Creativity']
What Machines Don’t “See” … Yet.
As of 2020, we have come a long way with respect to applying novel deep learning algorithms to vision tasks — otherwise a characteristic feature of most living beings that move. In “narrow” tasks like object detection and classification, different labs have reported “better than human” performance from machines. However, multiple bottlenecks have arisen in applying these algorithms to the “real world”, where humans still have a massive generalizing upper hand. A rather simple research work on corrupting MNIST data, dubbed MNIST-C, highlights this difference quite well. The gist is that distorting/corrupting images of MNIST (handwritten numbers) while keeping the semantic structure of the numbers intact confuses some of the best models that report high accuracy on the clean test data, but not humans. One way forward, then, would be to create massive amounts of training data for machines to generalize to the real world like us. These datasets would account for as many distortions and out-of-distribution events as possible. By today's standards this would be an enormous task with a seemingly infinite “long tail problem”, where we keep missing out on possible deviations from the training data. Another alternative, not talked about as often, is to look at the biological underpinnings of vision and their evolution in living beings. One major drawback of today's computer vision datasets is that the vast majority are made up of labelled 2D images. These images do not represent the real world as it has been appearing to living/moving/seeing beings. Even if we do collect massive image datasets that seem to capture an extremely broad range of objects and their categories, learning models trained on them would still lack two key concepts that could be needed for effective generalization. First, before we compare human vision to computer vision, I believe we need to represent motion to machines.
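The MNIST-C idea can be illustrated with a minimal sketch (my own toy example, not the paper's code): apply a semantics-preserving corruption, here polarity inversion plus Gaussian pixel noise, to a stand-in 28x28 "digit". The stroke's position (the semantic content) survives, while the raw pixel statistics a model may have latched onto change completely. The `corrupt` helper and the synthetic digit are illustrative assumptions.

```python
import numpy as np

def corrupt(image, noise_std=0.1, invert=True, seed=0):
    """Apply a semantics-preserving corruption: optional polarity
    inversion plus Gaussian pixel noise, clipped back to [0, 1]."""
    rng = np.random.default_rng(seed)
    out = 1.0 - image if invert else image.copy()
    out = out + rng.normal(0.0, noise_std, size=out.shape)
    return np.clip(out, 0.0, 1.0)

# A stand-in 28x28 "digit": a bright vertical stroke on black.
digit = np.zeros((28, 28))
digit[4:24, 13:15] = 1.0

corrupted = corrupt(digit)
# The stroke's location (the semantics) is unchanged, but the
# pixel statistics are now very different from the clean image.
print(corrupted.shape)  # (28, 28)
```

A human still reads the bright stroke either way; a classifier trained only on the clean polarity typically does not, which is the gap MNIST-C measures.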
Most things we identify as single whole objects, when moved, move together as a single entity. For things that are fluid, like water and sand, we club together the individual particles/molecules and generalize the collection to, well, water and sand. When we look at a dog, we may have inbuilt mechanisms that identify separate aspects like the eyes, ears, nose, fur etc., but the human mind has a bias to detect the dog as a whole and would “know” to expect a tail even when it is hidden from the eye. This is categorically different from the current computer vision abilities of deep neural networks, where the convolution layers try to identify each minute feature in isolation before collating them together as a whole, all while being prone to occlusion. There is also the aspect of real-world cause and effect being fluid, and biological neurons, by design, have “spiking” signals to model that fluidity. With computational limits, though, we find ourselves discretising signals and operating on instances called frames/images. Recurring through time, or paying attention to events as they unfold in time, to even detect objects would perhaps yield a better machine representation of objects. Mechanisms for time series have already been modeled in machines for non-vision tasks through Recurrent Neural Networks, LSTMs, and Transformers. At the expense of immense compute, vision algorithms could benefit from attention-like learning from videos. Short videos of dogs would consistently expose the beast in its totality to the machine, and from “You Only Look Once” we shift to a “keep looking till you see the whole damn thing” paradigm. If, instead of MNIST, we had a dataset of videos that capture the written digits while they are being written (like children see in kindergarten), learning models could learn not only what numbers look like, but also how numbers come to be.
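The "keep looking till you see the whole damn thing" idea can be sketched with a toy recurrent accumulator (a hypothetical example, not any published model): each frame of a short clip reveals only some parts of a dog, and a leaky recurrent state builds up evidence for the whole animal across time, so no single frame has to show everything. The part encoding and the decay constant are illustrative assumptions.

```python
import numpy as np

# Toy "video": each frame flags which parts of a dog are visible.
# Part order: [head, body, legs, tail]
frames = np.array([
    [1, 1, 0, 0],   # frame 1: head and body visible
    [0, 1, 1, 0],   # frame 2: body and legs visible
    [0, 0, 1, 1],   # frame 3: legs and tail visible
], dtype=float)

def accumulate(frames, decay=0.5):
    """Leaky recurrent accumulator: state = decay*state + frame.
    Evidence for each part builds up across time instead of being
    judged from any single frame in isolation."""
    state = np.zeros(frames.shape[1])
    for frame in frames:
        state = decay * state + frame
    return state

evidence = accumulate(frames)
# Every part ends with non-zero evidence even though no single
# frame showed the whole dog.
print((evidence > 0).all())  # True
```

A per-frame detector would have to infer the occluded tail from frames 1 and 2 alone; the temporal state carries it for free.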
If we know what sequence of “something” makes up a number, then what that number is made of could be deemed irrelevant — be it white pixels on a black background, black pixels on white, or any other distortion that maintains semantics. Only training on such datasets and then testing can reveal how effective these methods turn out to be. Granted, by current standards even such video datasets would be massive, but efforts in that direction may just provide a better “understanding” to machines about the world we live in. Another aspect is that of depth in a 3-dimensional spatial world. How far a prey or predator is has always been vital information for the survival of living beings. By augmenting images to zoom in and out on an object for deep learning models, what we effectively tell them is that “the object can come in many sizes” when we actually want to say “the object is close or far away”. This misconstrues the concept of objects and their sizes (both absolute and relative) for machines. Robustness to occlusion could also improve if, instead of telling machines that dogs “may not always have a tail”, we imbue the bias that “a tail is probably hidden behind the dog”. Again, modeling 3-dimensional images is already underway, but implementing it “just” for detecting hundreds of thousands of objects will test the best of hardware. Now imagine the technical challenges of representing a moving 3-dimensional world to deep learning systems with 3D convolutions and transformers (or something). Thanks to billions of years of evolution, we humans are extremely good at generalizing concepts from a 3D (space) moving (time) world — so good, in fact, that for us it is enough to look at 2D images and infer not just the third missing spatial dimension but also a bit of the history, possible future, and abstract emotions!
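The size-versus-distance confusion can be made concrete with the standard pinhole-camera projection (the focal length here is an arbitrary illustrative value): one physical object at two distances yields two very different on-image sizes, which a plain 2D zoom augmentation would mislabel as "objects of many sizes".

```python
def apparent_size(real_size_m, distance_m, focal_px=500.0):
    """Pinhole-camera projection: on-image size in pixels of an
    object of a given physical size at a given distance."""
    return focal_px * real_size_m / distance_m

# The same 0.5 m dog, near vs. far:
near = apparent_size(0.5, 2.0)    # 125.0 px
far = apparent_size(0.5, 10.0)    # 25.0 px

# A 2D zoom augmentation treats these as "dogs of many sizes";
# with depth information, they are one size at two distances.
print(near, far)
```

The ambiguity is inherent to a single 2D image: apparent size alone cannot separate a small nearby object from a large distant one without a depth cue.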
It would thus be unfair to show machines 2D, static representations and then expect them to generalize as well as we do to real-world scenarios. I understand this proposal seems far-fetched, yet I think learning is not about the limits of hardware, compute, or data. Effective modeling of the real world is surely going to be challenging and resource-hungry, but for now it seems to be the better alternative from a purely hypothetical computer vision and machine learning standpoint.
https://medium.com/swlh/what-machines-dont-see-82598299bcea
['Mrinal Sourav']
2020-12-19 09:55:31.258000+00:00
['AI', 'Computer Vision', 'Deep Learning', 'Machine Learning', 'Machine Intelligence']
Humanity-Centered Design
Humanity-Centered Design How ethics will change the conversation about AI “Technological change is not additive; it is ecological, which means it changes everything and is, therefore, too important to be left entirely in the hands of [technologists].” — Five Things We Need to Know About Technological Change by Neil Postman An AI may describe an image in various ways. Nothing is neutral: an AI that interprets the world is an AI that creates a worldview. Photo by Meg Via Flickr AI is coming to get us, and everyone knows that. The singularity — the moment when computer intelligence far surpasses human intelligence — is right around the corner, and when it arrives we’ll be jobless and living in an apocalyptic era whose nature we can only guess. Welcome to the future. Today, we are using a type of AI called artificial narrow intelligence (ANI). ANI can generally do one thing very well, sometimes better than humans can — like the AI that defeated the world’s best Go player — but if asked to perform a task outside of its specified function, it wouldn’t be able to. We are surrounded by instances of ANI: autonomous cars, Cortana or Alexa, even some of the news we read is generated by AI. ANI may have limits, but it’s powerful, useful, and everywhere. As a result, a growing number of designers will be working on AI-driven technology. Will it be technology that improves life and makes our world better? Or will it reinforce negative behaviors, strengthen biases, and increase inequality? Today, we generally talk about AI only from a technological perspective: What’s powering its functionality? Is it machine learning, computer vision, or a bot? Instead, let’s shift the conversation from technology and features to how AI changes lives. Viewing AI through a lens that focuses on its impact on humans can highlight what using that AI really means to us. And changing the language we use to talk about AI can expose important ethical issues. 
New vocabulary that classifies AI by what it does, rather than how it works, may have terms like “AI that interprets reality,” “AI that augments senses,” or “AI that remembers for humans.” Nothing Is Neutral Microsoft’s “Seeing AI” is an app and glasses-like device that helps blind people “see” the world around them by hearing descriptions of it. It’s an amazing and helpful invention, and an example of inclusive technology. If we view it through a technological lens, we see a product that uses computer vision and facial recognition as well as natural language generation. Now, let’s focus less on how it works and more on what it does. If we evaluate how the AI impacts humans, we see it as an “AI that interprets reality.” Suddenly, it’s clear that we need to pay attention. Look at the image above. What do you see? This is a hypothetical example of how an AI could be programmed to interpret an image. All the descriptions are correct, but the AI’s choice of words determines how the blind person experiences the world. What if Coca-Cola were to offer to sponsor the device so that it’s free but ask for one thing: whenever the AI sees a Coca-Cola product, it mentions it explicitly, while it uses generic terms for any of their competitors? There’s a dramatic difference between “a woman drinking a soft drink” and “a smiling, pretty woman drinking a Coke,” though both phrases correctly describe the same situation. We need to think through these kinds of scenarios when we design products. The Rise of the Product Ethicist With this shifted focus on how we view AI, a new profession should also emerge: the “product ethicist.” Someone in that role would aim to keep the product design honest. While not only one person should think or care about ethics, a “product ethicist” could provide more nuanced thinking to guide the design process and hold everyone accountable. 
Further, communicating an AI’s potential impact to its users is as important as designing an ethical product — it’s actually part of being ethical. Food products have labels with ingredients and nutritional facts to help consumers choose what to buy. What labeling system could help them decide on the right AI product? Designers put people first. They empathize, observe, and listen. They find problems to solve not because they are technically difficult, but because they are hard human issues. How to use AI is one of these challenges — and humanity-centered design could be the solution. Originally published in Issue 35.2 of ARCADE magazine. Tuesday 19th Sep 2017, Fall 2017 To stay in-the-know with Microsoft Design, follow us on Dribbble, Twitter and Facebook, or join our Windows Insider program. And if you are interested in joining our team, head over to aka.ms/DesignCareers.
https://medium.com/microsoft-design/humanity-centered-design-how-ethics-will-change-the-conversation-about-ai-65035b82b046
['Ruth Kikin-Gil']
2019-08-27 16:05:51.651000+00:00
['AI', 'Artificial Intelligence', 'Innovation', 'Design Thinking', 'Ethics']
Will face masks be the next big thing?
Well, according to my calculations… It depends on the number of people. Information cascades occur when people make decisions sequentially, based on the observed decisions of other people, even if those decisions go against their own private information. Let’s assume we have a group of people (N) who can decide between adopting or rejecting the fashion face mask trend. Everyone can observe the decisions of people in front of them, but not their private information. We can build a mathematical model for what decisions they’ll make, based on the following components: States of the world: The world randomly exists in either of two states, one where wearing face masks for fashion is a good idea (G) and one where it’s a bad idea (B) — individuals will try to figure out which one is true. Signals: Everyone has private information, or a private signal, that gives them a clue about whether accepting is a good idea or not — maybe in this case, it’s your existing knowledge of fashion. Signals can be high (H), meaning accepting is a good idea, or low (L), meaning it’s a bad idea. If it is, in fact, a good idea, then high signals will be more frequent than low ones. Mathematically speaking, the probability of getting a high signal given it’s a good idea (denoted by Pr[H | G], or q) will be greater than 1/2. Pr[L | G] will therefore be 1 − q. The same applies for the opposite state: Created by Murto Hilali As a rule, the first person will follow their signal, as will the second. (Proof can be found here.) If Person 1 and Person 2 make different decisions, Person 3 is indifferent and will follow their own signal. If they make the same decision (accept or reject the trend), Person 3 will follow suit regardless of their private information. This speaks to a rule that governs information cascades: If the number of acceptances exceeds rejections by two, it starts a cascade of acceptance. If the number of rejections exceeds acceptances by two, it starts a cascade of rejection.
Screenshot by Author | Cornell This means three matching signals in a row will always cause a cascade. Let’s split N up into chunks of 3 consecutive people: the probability that all three people in a chunk get the same signal is q^3 + (1 − q)^3. By the same reasoning, the probability that none of the N/3 chunks gets three identical signals (i.e. no cascade is guaranteed) is (1 − q^3 − (1 − q)^3)^(N/3). As N approaches infinity, this value approaches 0. That means the more people there are in the sequence, the likelier a cascade is to form. This could be a cascade of acceptance or rejection — if you want the former, encourage the first few people to all accept, and the rest will follow suit.
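The model above is easy to simulate (a sketch under the article's assumptions; the decision rule and parameter values here are mine): each person follows the majority of earlier decisions once it leads by two, and otherwise follows their own private signal.

```python
import random

def simulate_cascade(n_people, q=0.7, seed=42):
    """Sequential decisions in the 'good' state: each person draws a
    private signal (H with probability q), joins a cascade if earlier
    accepts or rejects lead by two, and otherwise follows the signal.
    Returns the list of accept (True) / reject (False) decisions."""
    rng = random.Random(seed)
    decisions = []
    for _ in range(n_people):
        accepts = decisions.count(True)
        rejects = len(decisions) - accepts
        if accepts - rejects >= 2:
            decisions.append(True)               # cascade of acceptance
        elif rejects - accepts >= 2:
            decisions.append(False)              # cascade of rejection
        else:
            decisions.append(rng.random() < q)   # follow own signal
    return decisions

decisions = simulate_cascade(50)
print(len(decisions))  # 50
```

Note that once either count leads by two, every later person copies it, so the lead only grows: a cascade, once started, never breaks in this model, which is exactly why long sequences almost surely end up in one.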
https://medium.com/datadriveninvestor/face-masks-56ec2879564a
['Murto Hilali']
2020-05-20 01:48:21.106000+00:00
['Neuroscience', 'Business', 'Mathematics', 'Psychology', 'Economics']
#EndGasFlaringNG: The Unspoken Dangers of Gas Flaring In Nigeria — by @mista_blak
It’s estimated that about 140 billion cubic meters of gas are flared annually across the oil-producing countries of the world (McGreevey & Whitaker, 2020). Gas flaring has been illegal in Nigeria since 1984, yet the country still ranks among the top 10 gas-flaring countries, with about 7.4 billion cubic meters of gas flared in 2018 and about 425.9 billion standard cubic feet of gas flared in 2019 (Eboh, 2019). Gas flaring is associated with many dangers. This essay aims to explain these dangers in detail, provide an insight into the efforts made by stakeholders to end gas flaring, identify the missing links in current strategies, and recommend viable solutions to gas flaring in Nigeria. WHAT IS GAS FLARING? Gas flaring is the controlled combustion of associated gas generated during various processes including oil and gas recovery, petrochemical processing, and landfill gas extraction (Generon.com, 2019). GAS FLARING IN NIGERIA: A TALE OF NIGHTMARES The health risks associated with gas flaring are glaring. In the oil-rich Niger Delta, 2 million people live within 4 kilometres (2.5 miles) of a gas flare (Schick et al., 2018), which makes them more vulnerable to several health issues including cancer and lung damage, as well as deformities in children, asthma, bronchitis, pneumonia, and neurological and reproductive problems (CSL Stockbrokers, 2020). Relatedly, agricultural productivity in the oil-producing areas has been severely hampered by gas flaring. The combustion process raises the soil temperature, with a decline in crop yield and acid rain as its two major ripple effects. The smoke which emanates from the flares also blackens rainfall and water bodies, affecting aquatic life and wildlife. The economic costs of gas flaring are mind-boggling. Data obtained from the Nigerian Gas Flare Tracker showed that 425.9 billion standard cubic feet of gas, valued at N460.5 billion, were flared between January and November 2019.
As shown in the figure below, this amount would comfortably finance the capital expenditure of the Ministries of Education, Power, Defence and Transport, which stood at a combined total of N450 billion. Also, the volume of gas flared is capable of generating 42,600 megawatts of electricity, which would have helped solve the electricity problem of the country. Climate change is another danger of gas flaring. In 2019, as much as 22.6 million tonnes of carbon dioxide were emitted into the environment as a result of gas flaring in Nigeria. In fact, the environmental costs of gas flaring in Nigeria amount to N28.8 billion annually (PWC, 2019). There is also the issue of air and noise pollution and a rise in temperature in the oil-producing areas. Issues related to gas flaring also lead to protests and attacks which worsen the fragile security situation of the oil-producing areas. Meanwhile, several efforts have been aimed at checking gas flaring in Nigeria. Some of these include the Associated Gas Re-injection Act (1979), the Nigeria Gas Master Plan (2008), and the Nigeria Gas Flare Commercialization Programme (2017). Nigeria is also part of the Global Gas Flaring Reduction Partnership of the World Bank. However, all these efforts could not help the Federal Government of Nigeria achieve its dream of zero gas flaring by 2020. This is the result of several combined factors: the “tax-deductible” charges imposed on gas flaring are too cheap for oil companies; infrastructure to process and transport gas to end-users is insufficient; institutional challenges in the gas market make the cost of processing and transporting gas higher than the income from its sale; the significant amount of gas in oil fields makes greater re-injection impossible; and the country depends heavily on crude oil for revenue.
RECOMMENDATIONS For the goal of zero routine gas flaring to be truly achieved, the following recommendations should be considered: The 2018 regulations, which increased the charges on gas flaring, should be adequately enforced. The Federal Government should pay attention to the development of critical infrastructure such as gas processing technologies and transportation pipelines to enhance the movement of gas from oil fields to end-users. The Federal Government should address the institutional challenges affecting the gas market in Nigeria. These challenges include underdeveloped local markets and low market prices for gas. Addressing them will incentivize greater investment in gas processing and utilization. Investment in petrochemical industries should be encouraged. These industries utilize gas to produce polymers, ammonia, hydrogen fuel for cars, etc. We can take a leaf from Russia’s book, where the petrochemical industry is on the rise, with companies like Sibur leading the way. According to the company’s data, in 2018, by recycling hydrocarbon byproducts of oil and gas extraction (about 22.3 billion cubic meters of gas), it helped reduce greenhouse-gas emissions by 71 million metric tons (Frolovskiy, 2019). There should be increased investment in power generation using gas. This would help reduce the quantity of gas flared and also increase the nation’s electricity capacity. Oil companies should include gas processing technologies during the development of new oil fields, and also ensure transparency when reporting the quantity of gas flared to authorities. Concrete efforts should be made to diversify the Nigerian economy to reduce dependence on crude oil for national revenue. Reduced dependence on petroleum would strengthen the government’s resolve in enforcing measures against gas flaring in the country. Agriculture, tourism, and processing can provide alternative sources of revenue.
CONCLUSION Ending routine gas flaring in Nigeria will lead to an increase in revenue generation, infrastructural development, and power supply. It would also support thousands of jobs and businesses, improve health conditions in the oil-producing areas, and reduce the emission of CO2 into the atmosphere. With these benefits in mind, tackling gas flaring in Nigeria should be a priority. REFERENCES CSL Stockbrokers (2020, February 18). Gas flaring: A never-ending dark tunnel. Retrieved from https://nairametrics.com/2020/02/18/gas-flaring-a-never-ending-dark-tunnel/ Frolovskiy, D. (2019, December). Gas flaring remains issue for Russia. Retrieved from https://asiatimes.com/2019/12/russias-gas-flare-up-but-less-than-before/ Eboh, M. (2019, December 31). Nigeria: Despite Paucity of Funds, Nigeria Flares N461bn Gas in 2019. Retrieved from https://allafrica.com/stories/201912310199.html Generon.com (2019, September 24). What is Gas Flaring? — Why is It Done & Viable Alternatives. Retrieved from https://www.generon.com/what-is-gas-flaring-why-is-it-done-alternatives/ McGreevey, C.M. & Whitaker, G. (2020). Zero Routine Flaring by 2030. Retrieved from https://www.worldbank.org/en/programs/zero-routine-flaring-by-2030 PWC (2019). Assessing the impacts of gas flaring on the Nigerian economy. Retrieved from https://www.pwc.com/ng/en/assets/pdf/gas-flaring-impact1.pdf Schick, L., Myles, P., & Okelum, O.E. (2018, November 14). Gas flaring continues scorching Niger Delta. Retrieved from https://www.dw.com/en/gas-flaring-continues-scorching-niger-delta/a-46088235 This article was submitted by Sobechi Evans-Ibe for the Gas Flaring In Nigeria Essay Competition.
https://medium.com/climatewed/endgasflaringng-the-unspoken-dangers-of-gas-flaring-in-nigeria-by-mista-blak-1b0755452f10
['Iccdi Africa']
2020-09-20 10:47:34.812000+00:00
['Health', 'Gas', 'Climate Change', 'Gas Flare', 'Renewable Energy']
The Essence of True Marketing
Originally published at www.beaconsocialmedia.com on April 27, 2018. Anna McCormack — Founder — Beacon Social Media At the core of every society are people. At the core of every person is heart. We are flesh and bone, each of us sensitive beings who feel much more than we care to admit. Many of us spend a lot of time denying this fact or doing our utmost to numb ourselves to it, but alas, no amount of denial or numbing can take away the fact that we are, and always will be, incredibly sensitive. We feel it all! Why such an introduction in an article about marketing? It's simple. The marketing industry has lost sight of this all-important truth. Instead it is focused on increasing sales, meeting targets, and pushing this way and that to achieve so-called 'success' in business. The sad part is that we, the people, have allowed it. These days, when we attend events that are well intentioned to support businesses to 'grow and prosper', rarely is the subject of relationships within business discussed. Why is this? Should relationships not be one of the first topics of conversation when we discuss success in business? True success in business comes from the daily interactions we have with our colleagues, clients and customers. No monetary value can be placed on this. Lived in its truest sense, this approach to business is magnetic. The thing we all want the most in a business is real integrity: not the half-hearted kind, not empty words, but deeply felt, to-the-core integrity. And there is nothing more appealing in business than a person who places people before profit. The funny thing is that when we do this, everything tends to take care of itself. Things just start to flow and we are supported from every angle possible. And yet, it is so easy to fall back into the fear or wanting that can lead to a less than human approach to business.
So long as our skills are sound, and we are smart with our business planning and budget, all we need is to make it about people first. It's not rocket science at all; in fact, we don't even need a degree for this part, and yet it is indeed the most valuable part. When we approach business in this way, marketing becomes simple: a simple practice of sharing your business out into the community, be it online or off. All you need is the practical know-how to do this. True marketing is any form of marketing that genuinely holds people at its core. And at the risk of putting myself out of work, it is through this practice that you will discover you need very little marketing at all for a business that is true. For 'true' business has an undeniable magnetism that will see people coming from far and wide.
https://medium.com/multiplier-magazine/the-essence-of-true-marketing-b23fe6fdc772
['Beacon Social Media']
2018-04-28 19:22:37.574000+00:00
['Marketing', 'Business', 'Social Media Marketing', 'Entrepreneurship', 'Social Media']
The Rolling Suitcase Could Have Existed 100 Years Earlier. Here’s The Mindset That Delays Innovation
The Rolling Suitcase Could Have Existed 100 Years Earlier. Here's The Mindset That Delays Innovation The urge to quickly get to a solution often takes precedence over deeply understanding a problem. By Shireen Jaffer, Co-Founder and CEO of Edvo If you've ever been through a high school math class, you probably remember the secret about those textbooks: the answers were in the back. Sure, you may have had to flip the book upside down, or you may have only been given the answers to the odd-numbered problems. But, for the most part, everything you needed was carefully indexed in the back of the book. While many of us tried to figure out a problem two or three times before flipping to the answers, it was definitely easier to jump straight to them and save an hour on homework. Especially since students are led to believe that getting the right answer is more important than anything else, that deferral to the cheat sheet (the urge to quickly get to a solution) often takes precedence over deeply understanding a problem. To do the latter, students have to be able to ask "why", an interjection that isn't always welcomed. With timed tests and formulas, students are incentivized to develop the mindset of "find the answer quickly" instead of "dig deeper" when problem-solving. Consequently, many students carry this mindset into adulthood, depriving themselves of the ability to truly learn and innovate. The true definition of learning is being able to understand the "why". To understand the "why", it's crucial to deeply research a topic and then deconstruct it until you're only left with the most basic parts, or truths, that constitute it. This is referred to as first principles thinking. For instance, there's a famous example of first principles thinking that involves a bicycle, a tank, and a motorboat pulling a skier. If you ask someone what they could create from these three items, they might struggle to think of something, because it's difficult to combine the entirety of each item.
But if you understand why each item works (what parts it has and how they operate), it becomes easier to see how the different items could be combined to create something new. Take the treads from the tank, the handlebars from the bike, and the skis and motor from the boat. Suddenly, you have a snowmobile. This way of learning, forcing yourself to dig deeper and go back to first principles, requires time and patience. It may mean googling for hours, listening to podcasts, knowing what question to ask next, and being able to process different pieces of information. Unfortunately, school doesn't train these behaviors: students are told what to learn and given forms, like templates and formulas, as tools for problem-solving. As adults, it's important to recognize this so we can retrain ourselves to learn and problem-solve. To problem-solve, focus on the function, not the form. First principles thinking takes a different approach to problem-solving by concentrating on function rather than form. When people think in terms of form, they're using a "template" to make improvements on. This results in the creation of new iterations of that same form. For example, the satchel or traveling pack has been around in some form for thousands of years; Roman soldiers even carried a leather satchel known as a loculus. And over the centuries, new versions of the basic "bag" continued to come out as materials and craftsmanship improved. Sometimes a zipper was added, sometimes more compartments: nothing that made it significantly easier to transport heavy materials. The rolling suitcase wasn't invented until about 50 years ago, in 1970. When focusing on the function (moving things efficiently), the suitcase was combined with a heavy machine that operated on wheels. Take the wheels off the machine, combine them with the suitcase, and voila! My team at Edvo takes first principles thinking seriously.
We know the job search process is broken because it takes way too long to find a job, let alone the right role for each person. The form for getting a job has always been to submit a resume, go for an interview, and then get hired. So there are many companies focused on improving that form, which is why people can now show their work through portfolios and certifications instead of just through their resumes. There are even several technologies that claim to make interviews more efficient. But iterating on those forms is not significantly improving the function: routing people to the best-fit roles quickly. Despite these solutions, we find that the average job search still takes six months, and 33% of hires fail after six months. So rather than just focusing on the current form of getting a job, my team and I are obsessed with the function. We've done countless deep dives, ideation sessions, and experiments. Because to truly solve a problem, especially a global one like finding meaningful work, it's crucial to learn every aspect of why it exists. Retraining your brain to deeply learn takes time and effort. But the better you become at reshaping your thinking, the more curious, insightful, and thoughtful you'll be, and that's what innovation requires. Here are a few other related articles you might find helpful:
3 Ways To Stay Competitive In Today's Workforce
After Getting 1.4 Million Views On LinkedIn, This College Grad Is Still Looking For A Job. Here's Why The Job Hunt Is Such A Struggle
Why Job Seekers Get Ghosted, And What They Can Do To Stop It
https://medium.com/the-mission/the-rolling-suitcase-could-have-existed-100-years-earlier-68b119f0d178
[]
2019-10-11 16:26:36.872000+00:00
['Entrepreneurship', 'Business', 'Education', 'Problem Solving', 'Creativity']
3 different ways to Perform Gradient Descent in Tensorflow 2.0 and MS Excel
Gradient descent in Tensorflow 2.0 Before we start, make sure you have Tensorflow 2.0 installed. Please visit here to learn more: https://www.tensorflow.org/install
Set up
The first function, fu, takes inputs x1 and x2 and returns a value. Please note that x1 and x2 here refer to x and z in the previous part. The second function, fu_minimize, is identical but takes no input values; the reason will be covered later. The third function, reset, resets x1 and x2 to their initial value of (10, 10).
Way 1: minimize()
The doc for minimize() can be found here: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer#minimize Code: Sample output First I reset x1 and x2 to (10, 10). Then I choose the SGD (stochastic gradient descent) optimizer with rate = 0.1. Finally, I perform minimization using opt.minimize() with respect to the function fu_minimize, which takes no input values, since opt.minimize() refers to the provided var_list for the variables to be updated. opt.minimize() updates x1 and x2 at each step.
Way 2: tf.GradientTape() and apply_gradients()
Docs for GradientTape() and apply_gradients() can be found here: https://www.tensorflow.org/api_docs/python/tf/GradientTape https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer#apply_gradients Code: Sample output First I reset x1 and x2 to (10, 10). Then I choose the SGD (stochastic gradient descent) optimizer with rate = 0.1. Then I call the function fu with input (10, 10) under tf.GradientTape(), so that we can call tape.gradient(y, [x1, x2]) to get the gradients, which are (8.3, 8) in the first iteration. The reason we need to use the function fu and not fu_minimize here is that tf.GradientTape() needs to watch the operations inside the function in order to compute the gradients; that's why it's called a "tape". Then x1 and x2 are updated using apply_gradients().
Note that processed_grads performs no transformation on the gradients, so as to adhere to the gradient descent update rule; you can try out different transformations of the gradients in this line of code, for example dividing by two, taking the square root, adding two, etc.
Way 3: tf.GradientTape() and assign() (no optimizer)
The doc for assign() can be found here: https://www.tensorflow.org/api_docs/python/tf/Variable#assign Code: Sample output First I reset x1 and x2 to (10, 10). Then I call the function with input (10, 10) under tf.GradientTape(), so that we can call tape.gradient(y, [x1, x2]) to get the gradients, which are (8.3, 8) in the first iteration.
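Since the article's code appears only as screenshots, here is a minimal, self-contained sketch of all three ways. The exact function fu from the article isn't reproduced here, so a simple quadratic with its minimum at (0, 0) stands in for it (a hypothetical assumption; the gradients at (10, 10) will therefore differ from the article's (8.3, 8)). The minimize() call follows the TF 2.0-era Keras optimizer API, hence the guard around Way 1 for newer Keras versions.

```python
import tensorflow as tf

x1 = tf.Variable(10.0)
x2 = tf.Variable(10.0)

# Hypothetical stand-in for the article's `fu`: a bowl with minimum at (0, 0).
def fu(a, b):
    return a ** 2 + b ** 2

# Same function, but closed over the variables: minimize() wants a
# zero-argument loss callable plus a var_list of variables to update.
def fu_minimize():
    return fu(x1, x2)

def reset():
    x1.assign(10.0)
    x2.assign(10.0)

opt = tf.keras.optimizers.SGD(learning_rate=0.1)

# Way 1: opt.minimize() (TF 2.0-era API; guarded in case a newer Keras
# optimizer no longer exposes it).
reset()
try:
    for _ in range(50):
        opt.minimize(fu_minimize, var_list=[x1, x2])
    way1 = (float(x1.numpy()), float(x2.numpy()))
except (AttributeError, TypeError):
    way1 = None

# Way 2: tf.GradientTape() + apply_gradients()
reset()
for _ in range(50):
    with tf.GradientTape() as tape:
        y = fu(x1, x2)
    grads = tape.gradient(y, [x1, x2])
    processed_grads = [g for g in grads]  # identity transform, as in the post
    opt.apply_gradients(zip(processed_grads, [x1, x2]))
way2 = (float(x1.numpy()), float(x2.numpy()))

# Way 3: tf.GradientTape() + assign(), no optimizer: manual x <- x - lr * grad
reset()
lr = 0.1
for _ in range(50):
    with tf.GradientTape() as tape:
        y = fu(x1, x2)
    g1, g2 = tape.gradient(y, [x1, x2])
    x1.assign(x1 - lr * g1)
    x2.assign(x2 - lr * g2)
way3 = (float(x1.numpy()), float(x2.numpy()))
```

With this stand-in function, all three ways converge toward (0, 0) after 50 steps, since each update multiplies the variables by 0.8.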
https://medium.com/analytics-vidhya/3-different-ways-to-perform-gradient-descent-in-tensorflow-2-0-and-ms-excel-ffc3791a160a
['Pak Long']
2019-10-26 12:43:48.904000+00:00
['Machine Learning', 'Python', 'Artificial Intelligence', 'Gradient Descent', 'TensorFlow']
Building resilient services at Prime Video with chaos engineering
Large-scale distributed software systems are composed of several individual sub-systems (such as CDNs, load balancers, and databases) and their interactions. These interactions sometimes have unpredictable outcomes caused by unforeseen turbulent events (for example, a network failure). These events can lead to system-wide failures. Chaos engineering is the discipline of experimenting on a distributed system to build confidence in the system's capability to withstand turbulent events. Chaos engineering requires adopting practices to proactively identify interactions in distributed systems and related failures, and to implement and validate countermeasures. The key to chaos engineering is injecting failure in a controlled manner. In this post, we present a simple approach for fault injection in systems using Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Container Service (Amazon ECS), and its integration with a load-testing suite to validate the countermeasures put in place to prevent dependency and resource exhaustion failures. A typical chaos experiment could be generating baseline load (traffic) against the system, adding latency to all network calls to the underlying database, and then validating timeouts and retries. We will explain how to inject such a failure (the addition of latency to database calls), why validating countermeasures (timeouts and retries) under load is essential, and how to execute it in an Amazon EC2-based system. We will start with a brief introduction to chaos engineering, then dive deep into failure injection using AWS Systems Manager. We will then present our open source library, AWSSSMChaosRunner, which was inspired by Adrian Hornsby's "Injecting Chaos to Amazon EC2 using AWS System Manager" blog post. Finally, we will provide an example of integration and explain how Prime Video used this library to prevent potentially customer-impacting outages.
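To make the experiment shape above concrete, here is a minimal local Python sketch (not the AWSSSMChaosRunner API, which injects faults fleet-wide via AWS Systems Manager): wrap a dependency call with added latency, then validate that a timeout-and-retry countermeasure behaves as intended. All names here are hypothetical illustrations.

```python
import time

def query_database():
    """Stand-in for a call to the underlying database."""
    return "rows"

def with_injected_latency(fn, extra_seconds):
    """Fault injection: wrap a dependency call with added latency.
    (The post does this across a fleet via AWS Systems Manager; this
    local wrapper just illustrates the idea.)"""
    def wrapped(*args, **kwargs):
        time.sleep(extra_seconds)
        return fn(*args, **kwargs)
    return wrapped

def call_with_timeout_and_retries(fn, timeout_seconds, retries):
    """Countermeasure under test: treat slow calls as timed out and retry
    a bounded number of times before surfacing a failure."""
    for _ in range(retries):
        start = time.monotonic()
        result = fn()
        if time.monotonic() - start <= timeout_seconds:
            return result
        # Call exceeded the timeout budget; retry.
    raise TimeoutError("dependency still too slow after all retries")

# Baseline: without injected latency, the call succeeds within budget.
baseline = call_with_timeout_and_retries(query_database, 0.05, retries=3)

# Chaos experiment: with 100 ms of injected latency and a 50 ms budget,
# the countermeasure should exhaust its retries and fail fast rather
# than hang, which is exactly what the experiment validates.
slow_query = with_injected_latency(query_database, 0.1)
try:
    call_with_timeout_and_retries(slow_query, 0.05, retries=3)
    experiment_outcome = "unexpected success"
except TimeoutError:
    experiment_outcome = "timed out as expected"
```

In a real system the same validation is run while a load-testing suite generates baseline traffic, so the countermeasure is exercised under realistic conditions.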
https://medium.com/the-cloud-architect/building-resilient-services-at-prime-video-with-chaos-engineering-688f464dc357
['Adrian Hornsby']
2020-08-24 08:03:17.543000+00:00
['Software Development', 'Amazon', 'Chaos Engineering', 'Open Source', 'AWS']
The Best of Better Programming (August 24–28, 2020)
Hey everyone! I hope you had a good week. I’m Zack Shapiro, Editor of Better Programming, with the first installment of Better Programming’s Best of the Week Newsletter to help you get inspired and coding this weekend. Without further ado, here are the 11 best-performing articles that we published this week!
https://medium.com/better-programming/the-best-of-better-programming-august-24-28-2020-72aefcc63193
['Zack Shapiro']
2020-08-28 21:29:31.719000+00:00
['JavaScript', 'Software Development', 'Programming', 'Startup', 'Python']
How Obsessing Over Productivity Is Making You Unhappy
How Obsessing Over Productivity Is Making You Unhappy And four tips for breaking this toxic cycle. Photo by Tima Miroshnichenko from Pexels Everywhere we look, we see people spouting trendy productivity hacks, informing us how to optimize every single minute of our lives. We read articles about how to read a book in a day and hear people bragging about finishing their to-do lists in record time. We embrace this insane productivity culture because we assume the more things we do, the happier we become. If we're happy when we finish one book, then we rationalize that finishing thirty books will make us thirty times as happy. However, this way of thinking fails to recognize why finishing a book brings us joy in the first place. We don't get joy from simply checking it off our reading list. No, finishing a book makes us happy because of the journey it took us on, the lessons we learned, the investments we made, and the anticipation of wondering what the next chapter would bring us. Our obsession with productivity robs us of these experiences, as we no longer take the time to appreciate them. All we care about is efficiently finishing the one thing so we can move on to the next. If you're nodding along, I hope you find comfort in the fact that you're not the only one with this problem. The good news is that if we can recognize a problem, we can also address it. So, here are some suggestions to help you break free from the relentless productivity mindset that has come to dominate our personal lives.
https://medium.com/swlh/how-obsessing-over-productivity-is-making-you-unhappy-42a75b07c8a7
['Thom Gallet']
2020-12-19 16:49:32.738000+00:00
['Business', 'Mental Health', 'Self', 'Productivity', 'Life Lessons']
ELECTRA: Pre-Training Text Encoders as Discriminators rather than Generators
ELECTRA: Pre-Training Text Encoders as Discriminators rather than Generators What is the difference between ELECTRA and BERT? Photo by Edward Ma on Unsplash BERT (Devlin et al., 2018) has recently been the baseline for NLP tasks. Many new models have been released based on the BERT architecture, such as RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2019). Clark et al. released ELECTRA (Clark et al., 2020), which aims to reduce computation time and resources while maintaining high-quality performance. The trick is introducing a generator for Masked Language Model (MLM) prediction and forwarding the generator's result to the discriminator. MLM is one of the training objectives in BERT (Devlin et al., 2018). However, it has been criticized for the misalignment between the training phase and the fine-tuning phase. In short, MLM masks tokens with [MASK], and the model predicts the real word in order to learn word representations. ELECTRA (Clark et al., 2020), on the other hand, contains two models: a generator and a discriminator. The masked tokens are sent to the generator, which generates alternative inputs for the discriminator (i.e. the ELECTRA model). After the training phase, the generator is thrown away and only the discriminator is kept for fine-tuning and inference. Clark et al. named this method replaced token detection. In the following sections, we will cover how ELECTRA (Clark et al., 2020) works. Input Data Overview of ELECTRA training process (Clark et al., 2020) As mentioned before, there are two models in the training phase. Instead of feeding masked tokens (e.g. [MASK]) to the target model (i.e. the discriminator/ELECTRA), a small MLM is trained to predict the masked tokens. The output of the generator, which does not include any masked tokens, becomes the input of the discriminator. It is possible that the generator predicts the original token (i.e. "the" in the above figure); this is tracked when generating the true labels for the discriminator.
Taking the above figure as an example, only "ate" will be marked as "replaced" while the rest of the tokens (including "the") are labeled "original". You may imagine the generator as a small masked language model (e.g. BERT). The objective of the generator is to generate training data for the discriminator while learning word representations (aka token embeddings). Actually, the idea of the generator is similar to the data augmentation approach for NLP in nlpaug. Model Setup To improve the efficiency of pre-training, Clark et al. found that sharing all weights between the generator and discriminator may not be a good idea. Instead, they share only the token and positional embeddings across the two models. The following figure shows that the replaced token detection approach outperforms the masked language model. Performance comparison between replaced token detection and masked language model (Clark et al., 2020) Secondly, a smaller generator provides a better result. A smaller generator not only leads to a better result but also reduces overall training time. Performance of different generator and discriminator sizes (Clark et al., 2020) Tuning Hyperparameters Clark et al. did a lot of hyperparameter fine-tuning, including the model's hidden size, learning rate, and batch size. Here are the best hyperparameters for different sizes of ELECTRA models. Pre-training hyperparameters (Clark et al., 2020) Take Away Generative Adversarial Network (GAN): The approach is similar to a GAN, which generates fake data to fool or attack models (to understand more about adversarial attacks, you may check out here and here). However, the generator used to train ELECTRA is different. First of all, a correct token generated by the generator is considered "real" instead of "fake". Also, the generator is trained with maximum likelihood rather than to fool the discriminator. The major challenge of adopting BERT in production is resource allocation.
About 1 GB of memory is almost the minimum requirement for the BERT model in production. We can foresee more and more new NLP models focusing on reducing model size and inference time. About Me I am a Data Scientist in the Bay Area, focusing on state-of-the-art work in Data Science and Artificial Intelligence, especially NLP and platform-related topics. Feel free to connect with me on LinkedIn or follow me on Medium or Github. Extension Reading Introduction to BERT, RoBERTa and ALBERT Data Augmentation for NLP (nlpaug) Adversarial Attack in NLP (1, 2) Reference
https://medium.com/towards-artificial-intelligence/electra-pre-training-text-encoders-as-discriminators-rather-than-generators-c5661f7ea0d5
['Edward Ma']
2020-09-30 00:02:01.468000+00:00
['Artificial Intelligence', 'NLP', 'Naturallanguageprocessing', 'Bert', 'AI']
Stacked Bar Charts with Python’s Matplotlib
As expected, the chart is hard to read. Let's try the stacked bar chart and add a few adjustments. First, we can sort the values before plotting, giving us a better sense of order and making it easier to compare the bars. We'll do so with the 'Global Sales' column since it has the total.

## sort values
df_grouped = df_grouped.sort_values('Global_Sales')
df_grouped

Some of the records in the data frame — Image by Author
Earlier, to build a clustered bar chart, we used one plot per region, where the width parameter and adjustments to the x-axis helped us fit the four regions for each platform. Similarly, for plotting stacked bar charts, we'll use one plot per region. This time we'll use the bottom/left parameter to tell Matplotlib what comes before the bars we're drawing.

plt.bar([1,2,3,4], [10,30,20,5])
plt.bar([1,2,3,4], [3,4,5,6], bottom=[10,30,20,5])
plt.show()

plt.barh([1,2,3,4], [10,30,20,5])
plt.barh([1,2,3,4], [3,4,5,6], left=[10,30,20,5])
plt.show()

Stacked Bar Charts (Vertical/Horizontal) — Image by Author
Cool. We can use a loop to plot the bars, passing a list of zeros as the 'bottom' parameter for the first set and accumulating the values for the following regions.
fields = ['NA_Sales','EU_Sales','JP_Sales','Other_Sales']
colors = ['#1D2F6F', '#8390FA', '#6EAF46', '#FAC748']
labels = ['NA', 'EU', 'JP', 'Others']

# figure and axis
fig, ax = plt.subplots(1, figsize=(12, 10))

# plot bars
left = len(df_grouped) * [0]
for idx, name in enumerate(fields):
    plt.barh(df_grouped.index, df_grouped[name], left=left, color=colors[idx])
    left = left + df_grouped[name]

# title, legend, labels
plt.title('Video Game Sales By Platform and Region', loc='left')
plt.legend(labels, bbox_to_anchor=([0.55, 1, 0, 0]), ncol=4, frameon=False)
plt.xlabel('Millions of copies of all games')

# remove spines
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)

# adjust limits and draw grid lines
plt.ylim(-0.5, ax.get_yticks()[-1] + 0.5)
ax.set_axisbelow(True)
ax.xaxis.grid(color='gray', linestyle='dashed')

plt.show()

Stacked Bar Chart — Image by Author
Great, this is way more readable than the last one. It's important to remember the purpose of this chart before trying to extract any insights. The idea here is to compare the platforms' total sales and understand each platform's composition. Comparing totals across fields and comparing regions within one bar is fine. Comparing regions across different bars, on the other hand, can be very misleading. In this case, we can compare the NA region across the bars since it has the same starting point in every bar, but it isn't so easy to compare the others. Take the X360, for example: it has a lower value for JP than the PS2, but it's hard to tell whether its Others value is higher or lower than the Wii's.
Comparable value — Image by Author
Uncomparable value — Image by Author
Suppose we change the stack's order, with Other Sales as the first bar, and sort the records by Other Sales. It should then be easier to tell which is more significant.
## sort values
df_grouped = df_grouped.sort_values('Other_Sales')

fields = ['Other_Sales', 'NA_Sales','EU_Sales','JP_Sales']
colors = ['#1D2F6F', '#8390FA', '#6EAF46', '#FAC748']
labels = ['Others', 'NA', 'EU', 'JP']

Stacked Bar Chart, emphasizing the Others category — Image by Author
There are two essential elements in this visualization: the order of the categories in the stack of bars and the order of the rows. If we want to emphasize one region, we can sort the records by the chosen field and use it as the left-most bar. If we don't, we can sort the records by the total and order the stacks with the higher-value categories first.
https://towardsdatascience.com/stacked-bar-charts-with-pythons-matplotlib-f4020e4eb4a7
['Thiago Carvalho']
2020-11-23 17:02:36.498000+00:00
['Python', 'Matplotlib', 'Bar Chart', 'Data Science', 'Data Visualization']
Why Creativity is a Cure for Nihilistic Despair
Recently I read a New Yorker interview with Donald Glover. I noticed and appreciated Glover's existentialist sentiment regarding life itself. Unintentionally tapping into the philosophy of absurdism, Glover and his brother endured a tough childhood and, through it, came to share an understanding that "life was a bad dream and that laughter was a way to wake yourself up". Glover also talks about the world as a computer program. "When people become depressed and kill themselves, it's because all they see is the algorithm, the loop," he says. Glover sees himself as a hacker, using hard work and self-confidence as his superpower, much like Camus's Sisyphus, who embraces the process of existence with lucid awareness and autotelic ambition. Donald Glover is an authentic artist, bound only to his creations rather than his personal life or even his career. In this sense, he provides us with a model of existential living at its most profound, one compatible with a world growing increasingly bored, tired and hopeless. Ernest Becker and Camus came to similar conclusions as Glover, as do many others every day as they reflect on the absurdity of existence, from their dead-end jobs to the next round of world tragedies. It's like we're in a loop of meaningless trash that leads us nowhere. For Becker, the essence of humanity is paradoxical because of this. We have an instinct for survival and life and yet are, at any moment, cursed with an awareness of our eventual death: the Worm at the Core of our being, as William James called it. "The tragedy of evolution is that it created a limited animal with unlimited horizons…men have to artificially and arbitrarily restrict their intake of experience and focus their output on decisive action" For Becker, humans have to come up with a transcendent meaning system to symbolically deny death. This he calls heroism. The first defence, character armour, is the lifestyle we choose.
Whether we decide to act as a parent, a student, or a TikToker, we strive for and assert ourselves in dominant cultural roles that provide us a certain security. By participating in society, we can imagine our being attached to something that will survive us symbolically past our deaths. Another form is the causa sui project: how a person achieves their self-esteem and purpose in life. Usually we'll adopt some sort of custom or system that serves as a vehicle for earthly heroism. This is our immortality project, something that will last far past our deaths and assert our lives as meaningful. Photo by Devin Avery on Unsplash Becker extends this, arguing that society itself is a hero system, "a living myth of the significance of human life". However, secular culture isn't the best at providing us with vehicles for meaning. No God or supreme being is really taking into account what we're doing, so, technically, we can do anything. Which sounds awesome, but after a few hedonistic benders it can end up kinda depressing. We seem to crave meaning, causing a second paradox: the need for meaning in a universe that is silent. Sure, people might try to hack this with material gain, raising a family, or defending certain ideologies ever more extremely. However, Albert Camus rejects avoidance of the paradox, arguing that the absurd, the tension between objective meaninglessness and the human desire for meaning, is actually what makes us unique as a species. If we were animals, we would be one with the world, no division. And if the universe were obviously meaningful, we would also be one with the world and there would also be no division. To be human is to see the absurdity; to see life as a bad dream. Camus argues, then, that rebellion is the appropriate response to the absurd. This, personally, means rejecting anything certain and, in a difficult balancing act, also resisting nihilism. The rebel says no but must say no in the name of something.
This avoids needless, rage-filled destruction for destruction's sake. The artist is, evidently, the rebel. They live a compelling and meaningful life while remaining lucid, by transforming that lucidity into meaning. Becker and Camus each conclude that creativity is one of the best ways of remaining aware of the existential paradox without succumbing to nihilism. Becker writes: "the most anyone of us can seem to do is to fashion something — an object or ourselves — and drop it into the confusion, make an offering of it, so to speak, to the life force". For the artist there is a demand for meaning along with the constant recognition that such an order is impossible. Photo by Eddy Klaus on Unsplash In comparing the neurotic and the artist, Becker notes that "they both bite off more than they can chew", but the artist "spews it back out again and chews it over in an objectified way, as an external, active work project". Hence art is simultaneously the ultimate form of self-expression and an act that saves us from hopelessness, all without requiring any hope to begin with. For the artist, "existence becomes a problem that needs an ideal answer; but when you no longer accept the collective solution to the problem of existence, then you must fashion your own". This is reminiscent of the latter part of Donald Glover's interview, in which he states: "Authenticity is the journey of figuring out who you are through what you make." A product of the artist's struggle with the existential paradox, the authentic personality develops through the art that one makes. Donald Glover, an artist seemingly aware of such a paradox, has nonetheless developed such an authentic personality by not just dabbling but throwing himself into different genres and mediums, just as soon as his fans became comfortable calling him a stand-up comedian, actor or rapper.
In an age where Becker's observations on lifestyle signalling become ever more apparent, it's refreshing to see someone consider themselves an artist first and foremost, showing us that one can lead a life of self-exploration and that such an existence is perfectly fine, if not preferable. And now that we all have some time on our hands, it's probably not a bad idea to combat the creeping nihilistic despair and loneliness with some art, so we can strive for happiness without ignorance, just like Sisyphus.
https://medium.com/swlh/why-donald-glover-is-an-existentialist-and-you-should-be-too-1bf17ccd3a8a
['Ben Thomas']
2020-05-19 21:40:13.236000+00:00
['Creativity', 'Philosophy', 'Art', 'Existentialism', 'Psychology']
The best examples of bad code I’ve come across production mode.
P.S. I have written a lot of bad code too. I found this in my project, but I don't understand what the hell it is: a "set language" function )))))
https://medium.com/dev-genius/the-best-examples-of-bad-code-ive-come-across-production-mode-4f13e8d4de2
['Soso Gvritishvili']
2020-06-17 12:26:55.954000+00:00
['Engineering', 'Software Development', 'Development', 'Developer', 'Programming']
The Incredible Power of Being “Onto Something”
Some things in life come along and make you forget about everything else. Forget the lows, the stress, the why-didn’t-they-call-me-backs. And then there are other things. Things that come along and make everything worth remembering. Things that shoot electricity and meaning into the vein of everydayness. These things are trying to find us. Trying to get our attention. Waiting, desperately, to add color to our fields of black and white. It might not feel as dramatic as a nearsighted boy who is given sight, but these moments of inflection are available to us all. Many people stroll about, unaware of or indifferent to the fact that they aren’t seeing the whole picture. They haven’t found — or they’ve overlooked — the one outlet that feeds electricity into everything. They haven’t found miraculous little windows to see the world through. Have you found yours? Do you keep it close? Did you lose its signal amongst the static on your line? If you’re someone still waiting to hear The Call, first be aware of the possibility. Never forget or doubt that there is something great waiting for you. There is an outlet waiting to feed electricity into everything you do. Waiting to raise the water and part the sea. Waiting to stir the deepest depths of your creative well. And if you’ve found yours, hold it close. Never allow the static of everydayness to blind you to its importance. Limiting beliefs, external noise, fears of missing out — it will all attempt to derail you and shovel dirt over your path. We all must be vigilant in our curiosity and our hunger to be onto something. Life can be spent waiting for death or searching for truth. Find what is meant for you. What makes you feel alive. What makes you human. Continue the search until you feel the veil pulled from your eyes and can say with the enthusiasm of a young Theodore Roosevelt, “I never knew how beautiful the world was until…”
https://coreymccomb.medium.com/the-incredible-power-of-being-onto-something-6bb7290159e7
['Corey Mccomb']
2019-06-20 17:40:53.263000+00:00
['Self', 'Creativity', 'Life Lessons', 'Motivation', 'Personal Development']
Tutorial: Hyperparameter Optimization (HPO) with RAPIDS on AWS SageMaker
This video tutorial will show you how to combine RAPIDS and Amazon SageMaker to accelerate hyperparameter optimization (HPO) and find the best version of your model before serving it to the world. HPO can take an exceedingly long time on a non-accelerated platform. When we combine the power of GPU acceleration within a node using RAPIDS with parallel experiments running across multiple nodes using AWS SageMaker, we can get impressive results. For instance, we find a 12x speedup in wall-clock time and a 4.5x reduction in cost when comparing GPU and CPU EC2 Spot instances on 100 XGBoost HPO trials using 10 parallel workers on 10 years of the Airline Dataset hosted in an S3 bucket. Learn how to spin up a SageMaker instance and quickly launch a demo notebook from the official sagemaker-examples repository. We’ve built a lot of flexibility into the workflow. We invite you to explore the many configuration options as well as to plug in your own dataset. RAPIDS HPO is now integrated into the SageMaker Examples UI. Tutorial Chapters: 1 — Introduction and Concept Overview 2 — Spinning up a SageMaker Instance 3 — Notebook Walkthrough with Description of Key Choices For more details on getting started, check out the code repo as well as the RAPIDS cloud & RAPIDS HPO webpages. Find us on Slack or file a GitHub issue with suggestions.
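The HPO loop itself is conceptually simple. Setting aside SageMaker and GPUs, here is a minimal pure-Python sketch of random-search HPO over two XGBoost-style hyperparameters; the objective function is a stand-in for a real training run, and all names are illustrative rather than the notebook's actual API:

```python
import random

def run_trial(params):
    # Stand-in for training an XGBoost model and returning a validation
    # score; in the tutorial's setup this step runs on a GPU worker.
    return -((params["max_depth"] - 8) ** 2) - (params["learning_rate"] - 0.1) ** 2

def random_search(n_trials, seed=0):
    # Sample hyperparameters from simple ranges and keep the best trial.
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "max_depth": rng.randint(3, 12),
            "learning_rate": rng.uniform(0.01, 0.3),
        }
        score = run_trial(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(100)
```

In the tutorial, SageMaker plays the role of the outer loop, dispatching trials like these to parallel workers instead of running them sequentially.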
https://medium.com/rapids-ai/tutorial-hyperparameter-optimization-hpo-with-rapids-on-aws-sagemaker-45f552b20531
['Miro Enev']
2020-09-28 18:25:00.184000+00:00
['Hyperparameter Tuning', 'Python', 'Machine Learning', 'AWS', 'Sagemaker']
The Chemical Rush of Creative Work Leads to Soul-Soothing Satisfaction
“A non-writing writer is a monster courting madness.” -Franz Kafka It’s the 2013 Tribeca Film Festival and Darren Aronofsky is on stage interviewing fellow filmmaker Clint Eastwood. He tells him, “You make it seem so easy — and for me filmmaking is pain. I’m in pain the whole time. I’m in pain writing. I’m in pain shooting, and I’m in pain editing, and I just — how do you do it?” Eastwood lets Aronofsky’s question hang in the air before answering with his slow, signature drawl, “Well, if it was that painful, I would consider myself somewhat of a masochist.” Everyone has their own sliding scale of pain and pleasure when it comes to making things, but the act of turning inspiration into action is rarely luxurious. The on-demand dopamine machines in our pockets don’t make it any easier. We can get a little high off other people’s work anytime. Why spend hours or years wrangling creative ghosts that promise nothing and might take everything? The answer is simple: On the other side of creative agony is soul-soothing ecstasy. Each hour of effort is a click, click, click of a roller-coaster cart climbing. Climbing higher and higher until it reaches its apex and rolls into a freefall of fulfillment, satisfaction, and pride. Even creative “masochists” like Aronofsky know that once the rewards of creativity hit the vein, the pain disappears. And then it’s time to get back in line and ride again. Humans have had 200,000 years and a million reasons to evolve into industrious and creative beings. It should come as no surprise that when we put our ingenuity to use, our minds race with a primal reassurance that we’ve earned our place in the world. The chemical rush of doing creative work is what keeps people like Aronofsky on the hunt for more. Because once you get a taste of that evolutionary high, the withdrawals can be a real killer…
https://medium.com/publishous/on-the-other-side-of-creative-agony-c2e0ea6af07b
['Corey Mccomb']
2020-12-22 20:07:37.156000+00:00
['Personal Growth', 'Self', 'Inspiration', 'Creativity', 'Writing']
Containerizing Data Workflows
Containerizing Data Workflows (And How to Have the Best of Both Worlds) By Tian Xie As a data technology company, Enigma moves around a lot of data, and one of our main differentiators is linking nodes of seemingly unrelated public data together into a cohesive graph. For example, we might link a corporate registration to a government contract, an OSHA violation, a building violation, etc. This means we not only work with lots of data, but lots of different data, where each dataset is a unique snowflake slightly different from the next. Wrangling high quantities and varieties of data requires the right tools, and we’ve found the best results with Airflow and Docker. In this post, I’ll explain how we’re using these, a few of the problems we’ve run into, and how we came up with Yoshi, our handy workaround tool. If you work in data technologies, you’ve probably heard of Airflow and Docker, but for those of you who need a quick introduction… Introducing Airflow Airflow is designed to simplify running a graph of dependent tasks. Suppose we have a process where: There exist five tasks: A, B, C, D, and E, and all need to complete successfully. B, C and E depend on the successful completion of A. D depends on the successful completion of B and C. Considering each task as a node and each dependency as an edge forms a directed acyclic graph — or DAG for short. If you are familiar with DAGs (or looked them up just now in “Cracking the Coding Interview”), you might think that if a DAG can be reasoned about within the time of a job interview, then it can’t be that complex, right? In production, these systems are much more complex than a single topological sort. Questions such as “how are DAGs started?”, “how is the state of each DAG saved?” and “how is the next node started?” are answered by Airflow, which has led to its widespread adoption. Scaling Airflow In order to understand how Docker is used, it’s important to first understand how Airflow scales. 
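The five-task example can be written down directly as a mapping of tasks to their dependencies; ordering it is a stock topological sort, which is the kernel of what a scheduler does. A sketch using only the standard library (not Airflow's actual API):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Map each task to the set of tasks it depends on, matching the example:
# B, C and E depend on A; D depends on B and C.
dag = {
    "A": set(),
    "B": {"A"},
    "C": {"A"},
    "E": {"A"},
    "D": {"B", "C"},
}

# A valid execution order: every task appears after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
```

As the article notes, this sort is the easy part; Airflow's value is in everything around it (triggering DAGs, persisting state, and dispatching the next ready node).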
The simplest implementation of Airflow could live on a single machine where: DAGs are expressed as Python files stored on the file system. State is written to SQLite. A webserver process serves a web admin interface. A scheduler process forks tasks (the nodes in the DAG) as separate worker processes. Unfortunately, this system can only scale to the size of the machine. Eventually, as DAGs are added and more throughput is needed, the demands on the system will exceed the size of the machine. In this case, Airflow can expand into a distributed system. The Airflow webserver and scheduler continue running on the same master instance where DAG files are stored. The scheduler connects to a database running on another machine to save state. The scheduler connects to Redis and uses Celery to dispatch work to worker instances running on many worker machines. Each worker machine can also run multiple Airflow worker processes. Now this system can scale to as many machines as you can afford,* solving the scaling problem! Unfortunately, switching to a distributed system generally exchanges scalability for infrastructural complexity — and that’s certainly the case here. Whereas it is easy to deploy code to one machine, it becomes exponentially harder to deploy to many machines (exponentially, since that is the number of permutations of configuration that can go wrong). If a distributed system is necessary, then it’s very likely that not only is the number of workers very high, but also the number of DAGs. A large variety of DAGs means a large variety of different sets of dependencies. Over time, updating every DAG to the latest version will become unmanageable and dependencies will diverge. There are systems for managing dependencies in your language of choice (e.g. virtualenv, rubygems, etc.) and even systems for managing multiple versions of that language (e.g. pyenv, rbenv), but what if the dependency is at an even lower level? 
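Concretely, moving from the single-machine setup to the distributed one is mostly configuration. An illustrative airflow.cfg fragment might look like the following; the hostnames and credentials are placeholders, and key names can differ between Airflow versions:

```ini
[core]
# Dispatch tasks to remote workers via Celery instead of forking locally
executor = CeleryExecutor
# State moves from local SQLite to a shared database on another machine
sql_alchemy_conn = postgresql+psycopg2://airflow:PASSWORD@db-host:5432/airflow

[celery]
# Redis brokers the work queue between the scheduler and the workers
broker_url = redis://redis-host:6379/0
result_backend = db+postgresql://airflow:PASSWORD@db-host:5432/airflow
```

Each worker machine then runs `airflow worker` pointed at the same broker and database, which is exactly the topology described above.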
What if it depends on a different operating system? Containerizing Workflows Docker to the rescue! Unless you have been living in a container (ha-ha) for the last five years, you’ve probably heard of containers. Docker is a system for building lightweight virtual machines (“images”) and running processes inside those virtual machines (“containers”). It solves both of these problems by keeping dependencies in distinct containers and moving dependency installation from a deploy process into the build process for each image. When the code for a DAG (henceforth, this set of code will be referred to as a “workflow”) is pushed to our remote git host and CI/CD system, it triggers a process to build an image. An image is built with all of the dependencies for the workflow and pushed to a remote Docker repository, making it accessible via URL. At the same time, the Airflow Python DAG file is written. Rather than executing the workflow's code directly, it specifies a command to execute in the Docker image. At run-time, Airflow executes the DAG, thereby running a container for that image. This pulls the image from the Docker repository, thereby pulling its dependencies. Docker is not a perfect technology. It easily leads to docker-in-docker inception-holes and much has been written about its flaws, but nodes in a DAG are an ideal use-case. They are effectively enormous idempotent functions — code with input, output and no side-effects. They do not save state nor maintain long-lived connections to other services — two of the more frequently cited problems with Docker. A Double-Edged Sword? Docker exchanges loading dependencies at run-time for loading dependencies at build time. Once an image has been built, the dependencies are frozen. This is necessary to separate dependencies, but becomes an obstacle when multiple DAGs share the same dependency. When the same library upgrade needs to be delivered to multiple images, the only solution is to rebuild each image. 
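In other words, the DAG node's job shrinks to shelling out a `docker run` against the workflow's image URL. A sketch of how such a command might be assembled (the image URL and task entrypoint here are hypothetical, not Enigma's actual names):

```python
def docker_run_command(image_url, task_cmd):
    # The node doesn't run the task directly; it runs the task inside a
    # container of the workflow's image, which carries the dependencies.
    return ["docker", "run", "--rm", image_url] + list(task_cmd)

cmd = docker_run_command(
    "registry.example.com/workflows/ingest:abc123",  # hypothetical image URL
    ["python", "-m", "workflow.task_a"],             # hypothetical entrypoint
)
```

Because the image URL encodes a specific build, pulling and running it pulls exactly the dependencies that were frozen in at build time.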
Though it may sound far-fetched, this situation comes up all the time: A change to an external API requires an update in all client applications. A security flaw in a deeply nested dependency needs a patch. DRY (“Don’t Repeat Yourself”) is one of the central tenets of good software development, and it should lead to shared libraries. Code Injection The double-edged sword endemic to Docker containers should sound familiar to anyone working with static builds. One common approach to solving this problem is to use plug-ins loaded at run-time. At Enigma, we developed a similar approach for Docker containers that we named Yoshi (hence the Nintendo theme for this entire blog post). As previously noted, when a workflow is pushed to our remote git repository and CI/CD system, it triggers an automated process to build an image for that workflow, including installing all of its dependencies. Yoshi is a Python package that is included as one of these dependencies and gets frozen into the image. Since different workflows change at different rates, they go through the build process at different times and wind up with different versions. This is the nature of working with Docker images. Yoshi is also directly installed onto the machine where the Airflow worker runs. The latest version is always installed on these machines. At run-time, when the Airflow worker executes the docker command, it mounts its local install of Yoshi onto the Docker container. This injects the latest Yoshi code into that container, thereby updating Yoshi in the container to the latest version. By keeping code we suspected might need to be updated frequently in Yoshi, keeping the interface to Yoshi stable and injecting the latest code at run-time, we are able to update code instantly across all workflows. The Best of Both Worlds? Injecting code at run-time allowed us to use all of the benefits of Docker containers, but also create exceptions when we needed them. 
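The run-time injection described above boils down to adding a bind mount to that same `docker run`, so the host's always-latest copy of the library shadows the frozen copy baked into the image. All paths below are hypothetical, not Enigma's actual layout:

```python
def docker_run_with_injection(image_url, task_cmd, host_lib, container_lib):
    # Mount the host's (latest) install of the shared library over the
    # (frozen-at-build-time) copy inside the image, updating it at run-time.
    mount = f"{host_lib}:{container_lib}:ro"
    return ["docker", "run", "--rm", "-v", mount, image_url] + list(task_cmd)

cmd = docker_run_with_injection(
    "registry.example.com/workflows/ingest:abc123",   # hypothetical image
    ["python", "-m", "workflow.task_a"],
    "/opt/yoshi",                                     # host install (hypothetical)
    "/usr/local/lib/python3.6/site-packages/yoshi",   # image copy (hypothetical)
)
```

The mount is read-only, since the container only needs to import the library, not modify it.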
At first, this seemed like the best of both worlds, but over time we ran into flaws: A stable interface and backwards compatibility are absolutely essential for allowing newer versions of a library to overwrite an older version, but that’s easier said than done. Maintaining compatibility across hundreds of workflows with different edge cases is even more challenging. Coming from working with containerized processes, we also had to form some new habits. No code is one-hundred-percent bug-free, but this led to many more bugs than we anticipated. The most frequent use-case for Yoshi was for clients to access external resources. When external resources changed, Yoshi changed with them, which meant that older versions no longer worked. An image is expected to work forever, but the absence of the latest version of Yoshi broke that expectation. Did I say that the most frequent use-case for Yoshi was for clients to access external resources? It turns out that was the only use-case. Initially, we expected to use Yoshi in many different ways, but wound up using it in the same place every time. This meant Yoshi was much larger and more complex than necessary, and we only needed it in one node of the DAG. Yoshi caused more bugs and complexity than we wanted, but by revealing where our code changed most frequently, it also revealed a simpler way to deploy updates across many DAGs. Image Injection Heretofore, images were built one-to-one for each DAG, but it does not have to be that way. Each workflow has its own set of dependencies, so an image is built for those dependencies, but each node in the DAG could use a different image. Additionally, Docker images are referenced by URL. The image stored for that URL can change without changing the URL. This means that a DAG node executing the same image URL could execute different images. Eventually, this led us to inject code by inserting updated images in the middle of a DAG. 
The Yoshi library remained the same, with all of the same functionality, except now it was also packaged and executable from its own Docker image. Workflows were changed so that individual DAG nodes could use different image URLs. Nodes where our code interacted with external resources now used the Yoshi image instead of the workflow image. The URL for the Yoshi image was resolved at run-time with environment variables from the machine so that different environments could use different URLs — e.g. staging could use an image tagged as staging, and the same for production. When changes to the Yoshi library were pushed to our remote git repository, our CI/CD system built a new image and pushed it to the Docker repository at those URLs. At run-time, the workflow pulls the latest Yoshi image. Image injection not only allowed us to build workarounds to the double-edged sword of static Docker images — without the compatibility challenges of code injection — but building a Yoshi image also opened new doors to run Yoshi utilities from a command-line anywhere and run a Yoshi service. It took us a long time to get there, but our final solution allowed us to have the best of both worlds, and then some. Game Over. *There is a limit to the number of machines that can connect to the same Redis host, but that is unlikely to be the limiting factor — especially for a start-up.
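Resolving the Yoshi image per environment then reduces to reading an environment variable on the worker at run-time; the variable name and URLs below are illustrative, not the actual deployment's:

```python
import os

def resolve_yoshi_image(default="registry.example.com/yoshi:production"):
    # Staging and production workers set different values, so the same DAG
    # node pulls a different image depending on where it runs.
    return os.environ.get("YOSHI_IMAGE_URL", default)

# On a staging worker, the environment points the node at the staging tag.
os.environ["YOSHI_IMAGE_URL"] = "registry.example.com/yoshi:staging"
image = resolve_yoshi_image()
```

Because the tag (`staging`, `production`) stays fixed while the image behind it is replaced on each CI/CD build, the DAG always pulls the latest Yoshi without any change to the workflow code.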
https://medium.com/enigma-engineering/containerizing-data-workflows-95df1d338048
[]
2019-04-12 17:32:29.857000+00:00
['Airflow', 'Docker', 'Data Engineering', 'Gitlab', 'Engineering']
What do we mean by intelligence, artificial or otherwise?
I’m writing about artificial intelligence, but that is too broad and needs to be further defined. We should start with what I mean by intelligence. Intelligence: The ability to accomplish complex goals I’ve come to use this definition put forth by Max Tegmark in his book Life 3.0: Being Human in the Age of Artificial Intelligence. It is both broad enough to include those activities shared by human and non-human animals as well as by computers, and not so broad as to get into questions of consciousness or sentience. Intelligence cannot be measured using a single criterion, or scored using a single metric like I.Q., because intelligence is not a single dimension. In the Wired article The Myth of a Superhuman AI, Kevin Kelly states that “‘smarter than humans’ is a meaningless concept. Humans do not have general purpose minds, and neither will AIs.” There are cognitive activities at which certain animals perform better than us and some that machines do better than us. This is a good reminder that evolution isn’t a ladder and we’re not at the top of it. Evolution creates adaptations that are best suited for particular circumstances. Artificial Narrow Intelligence (ANI): Specialized in accomplishing a single or narrow goal This is the type of AI we are used to today. Google Search, Amazon Alexa, Apple Siri, nearly all airline ticketing systems, and countless additional products and services are using AI to identify people and objects in photos, translate language (written and spoken), master games (chess, Jeopardy, Go, and videogames), drive cars, and more. We come into contact with this type of AI every day and, for the most part, it doesn’t seem very magical. For the most part these sorts of AI can only do one thing well, and they have emerged slowly during the past few decades. Many of these now seem mundane. MarI/O is made of neural networks and genetic algorithms that have learned to kick butt at Super Mario World. 
Artificial General Intelligence (AGI): As capable as a human in accomplishing goals across a wide spectrum of circumstances You and I excel at various things: we can recognize patterns, combine competencies, solve problems, create art, and learn skills we don’t yet have. We use logic, creativity, empathy and recall — each requiring a different sort of intelligence. We take this diversity and breadth of intelligence for granted, but creating this type of intelligence artificially is the great challenge of our time and is what most people in the field of AI research are trying to accomplish — creating a computer that is as smart as us across many different dimensions — and, crucially, one that can teach itself new things. “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’” — Donald Knuth Many tests have been devised to ascertain when machine intelligences “become as smart as humans.” These range from the well-known Turing Test to the absurd Coffee Test. They all purport to assess a general capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, and learn quickly from experience. I’m not sure, however, that machines won’t be supremely impressive well short of passing these tests. I imagine stringing together hundreds or thousands of disparate ANIs into something far more capable than any human in many ways. At that point, it may very well appear to us to be super-human… Artificial Super Intelligence (ASI): Better than any human at accomplishing any goal across virtually any situation This is most often how people describe computers that are much smarter than humans across a wide range of intelligences. Some say all intelligences, but I’m not sure that is the threshold that need be passed. Of course computers are already super-human in some arenas. 
I’ve mentioned some specific games in which humans can no longer compete, but computers have been better than humans at other things for decades — things like mathematical computation, financial market strategy, memorization, and memory recall come to mind. But soon more and more of those competences at which humans excel — things involving language translation, visual acuity, perception, and creativity — will be conquered by computers as the following illustration shows.
https://medium.com/alttext/what-do-we-mean-by-intelligence-artificial-or-otherwise-e5f72fbe8698
['Ben Edwards']
2017-11-16 05:46:58.616000+00:00
['Intelligence', 'Humanity', 'Artificial Intelligence', 'Goals', 'AI']
The Non-Treachery of Dataset
https://medium.com/merzazine/the-non-treachery-of-dataset-4d1fa28d758c
['Vlad Alex', 'Merzmensch']
2020-06-09 14:30:16.136000+00:00
['Published Tds', 'Ai In Discourse', 'Artificial Intelligence', 'AI', 'Art']
Write for Nightingale. Everything you need to get started
Write for Nightingale Everything you need to get started Nightingale is the journal of the Data Visualization Society (DVS). We bring you high-quality articles covering many of the applications of data visualization — including education, entertainment, history, sports, best practices, new techniques, and any other aspect or implementation of visual information design we find compelling. We’ll host interviews with thought leaders and interesting people from the Data Visualization Society community and beyond. Like adventurers, we want to explore the unexplored, to look to the past and dream of the future. We always intended for the DVS Medium publication to share the incredible wealth of knowledge from our community. By partnering with Medium, our articles get premium promotion through Medium’s content-sharing algorithms and channels. This allows us to support the Data Visualization Society’s mission to “support the growth, refinement, and expansion of data visualization knowledge regardless of the expertise level of our members” by getting our members more exposure. We highly encourage any of our members to use Nightingale to further their ideas and spark discussion both within the Data Visualization Society and outside it among the general community. We are here to help you! What you should write about Honestly, that’s up to you! We are interested in stories about all aspects of data visualization and information design for any industry, discipline or mindset! We have an expansive view of what data visualization encompasses and how we use it in our world. We currently have featured sections for Topics in Data Visualization, Data Humanism, Historic Dataviz, Sports Viz, Career Advice, and How To articles. We also intend to expand more into industry news, culture, reported features, and data-viz journalism. But please consider our publication a platform for any subject that you have in mind. 
We pay all our writers We are pleased to announce that, starting July 15, 2019, our partnership with Medium allows Nightingale to pay all our writers, editors, and illustrators. Please note that in order to be compensated for your work, you need to be part of the Medium Partner Program. To enroll, sign up for the Medium Partner Program; there is more information available about the process of getting registered, which outlines the exact steps and forms you will need to fill out. After you are registered, Medium will deposit your earnings directly into your bank account on the last Wednesday of every calendar month. If you want to donate your author’s fee back to the Data Visualization Society, let us know. How to get started If you are interested in writing for Nightingale, please send your Medium Profile name and your pitch to any of the editors on the DVS Slack or email us at [email protected]. For other inquiries and general questions about the DVS, you can email us at [email protected]. Once your pitch has been accepted and you’ve been made a writer, you can submit your unpublished draft to us when you’re ready. Once we receive it, we’ll connect you with an editor to prepare it for publication — it’s pretty easy! Please note that we are happy to publish articles in multiple languages, but may not be able to provide editorial support at this time for all languages.
https://medium.com/nightingale/how-to-write-for-nightingale-d5b152545353
['Jason Forrest']
2020-07-10 20:29:34.066000+00:00
['Design', 'Data Visualization', 'Dvsintro', 'Data', 'Writing']
‘Power to Save the World: The Truth About Nuclear Energy’ by Gwyneth Cravens
‘Power to Save the World: The Truth About Nuclear Energy’ by Gwyneth Cravens Why Environmentalists Need to Rethink Nuclear Energy As you are probably well aware, climate change is an existential threat to the future of human civilization. Yet, we shoot ourselves in the foot every day that we try to ignore the importance of nuclear energy in curbing this disaster. The science and technology of nuclear fission has developed significantly since the days of the Cold War, but misinformation and bad PR continue to hold back the full utilization of nuclear power in our clean energy portfolios. Maybe this is all news to you, or maybe you watched the Bill Gates documentary and have a positive but surface-level understanding of the technology. Either way, Gwyneth Cravens’s book Power to Save the World: The Truth About Nuclear Energy should launch right to the top of your environmentally conscious reading list. The book was written in 2007. So, there are times when characters talk about technologies that have seen noteworthy development since the original publication (particularly renewables, hydrogen fuel cells, and electric batteries, to an extent). BUT, Power to Save the World is definitely still 99% relevant and 100% worth your time. A Breakdown of the Book Power to Save the World is split into 6 parts with 21 chapters spread among them. I’ll spend the rest of this blog post briefly exploring the individual focus of each part, but it’s just as important to understand how they work together — and nothing sums up the overall approach of this book like the very, very small Marcus Aurelius quote included underneath the title of Part 1. “Look always at the whole.” - Marcus Aurelius Cravens set out to write a comprehensive book on nuclear energy, and she did not disappoint. 
Living up to the aforementioned quote, Cravens approaches “the whole” of nuclear energy in 2 important ways: The practical context in which we are talking about nuclear energy (think: climate change, technological advancement, politics, the economy, etc.). The ENTIRE life cycle of nuclear energy and the uranium we use to power it. Every single thing that remotely has to do with nuclear power gets dedicated focus in this book. The book starts with uranium mines, and it ends with nuclear waste recycling and disposal. Introduction: “Gwyneth’s Pilgrimage” by Richard Rhodes It can be easy to ignore introductions like this and jump straight into “the actual book,” but Rhodes makes an interesting point that should frame your expectations if you’re considering buying this book. “Gwyneth Cravens evokes an old tradition in this very modern book: seeking understanding by going on a journey… She accumulates knowledge as she goes on, guided by her own Virgil, a steadfast scientist named Dr. Rip Anderson. She achieves greater understanding of the deep things of the world, as her predecessors did, and as they also did, she shares it generously.” - Richard Rhodes Before you get scared off by the idea of a book on nuclear energy, Power to Save the World is not some ridiculously dense collection of scientific essays for 1% of readers to enjoy. Instead, think of this as a modern environmentalist’s version of Pilgrim’s Progress or Dante’s Inferno (only this one is based on the real, factual world of American energy). Every time the book runs into dense scientific information, it gets presented in an easy-to-read dialogue mixed in between sections of Cravens’s own journalistic research and commentary. Part 1: Origins Survival/ Always Look at the Whole/ Ambrosia Lake Like any good “Part 1,” this is where Cravens sets the stage for the rest of the book. 
Here, we are introduced to Rip (Cravens’s nuclear “Virgil”), and the two begin their journey touring around the country in a quest to understand nuclear energy (starting with step 1: mining uranium). However, Cravens makes sure to remind us of the big “Why?” behind nuclear energy. In the case of this book, it almost always comes back to climate change. My Favorite Quote From This Section: (Citing a quote from the American physicist Alvin Weinberg) ‘Carbon dioxide poses a dilemma for the radical environmentalists. Since nuclear reactors emit almost no carbon dioxide, how can one be against nuclear energy if one is concerned about carbon dioxide? To my utter dismay, indeed disgust, this is exactly the position of some of the environmentalists. Their argument is that extreme conservation, and a shift to renewables — that is, solar energy — is the only environmentally correct approach to reducing carbon dioxide. When I point out to them that conservation might be feasible in industrialized countries, but that it is hardly a choice for India and China, they seem to ignore the point. Or when I argue that solar energy is hardly a choice at this time (2007), or even for the next century, my environmental critics simply disagree: spend on solar energy what has been spent on nuclear energy, and solar energy will be cheap. But we have yet to discover a technical breakthrough — the solar equivalent of fission — and unless we do, rejection of fission energy condemns the world to a future of very expensive energy. … And when I point out that France has reduced its carbon dioxide emission by a good 20% in the past decade by aggressive deployment of fission reactors, I am greeted by silence.’ - Chapter Title: Survival
https://medium.com/climate-conscious/power-to-save-the-world-the-truth-about-nuclear-energy-by-gwyneth-cravens-792d59183ed6
['Cameron Catanzano']
2020-12-03 13:02:48.867000+00:00
['Book Review', 'Climate Change', 'Sustainability', 'Science', 'Nuclear']
A Tale of Two Extremities
Photo by Ivan Bandura on Unsplash It was the best of times. It was the worst of times. Some called it a modern Renaissance. Others called it a new Gilded Age. There had been no single war with 100,000 or more casualties in decades. Yet, there were more wars being fought than ever before—proxy wars, wars of ideologies, wars on terror, wars of oppression, and wars of misinformation. How did the world get here? Was it doomed to continue its path toward violence, disease, and death? Or was there still hope for change? A Paradoxical Loneliness Created by Social Media Social media has been one of the most significant trends created by the open Internet. Though I was an early participant and it influenced my entry into the technology field, after coming to Silicon Valley and getting an inside look, I realized the psychological toll it takes on your mind, uninstalled every social media app except Reddit, and never looked back. That was in 2015. Five years later, conversations around social media and how it has influenced culture are just now gaining steam because of its potential influence on US politics. By giving a platform to every person, it has diluted the average opinion and amplified opinions on the fringes. Before, some crazy white supremacist or an authoritarian government official might read a history book for information about the state of politics. Now, they read social media posts. Furthermore, the algorithms that have been developed for engaging users will reinforce these users’ fringe beliefs. Our innate narcissism is being used by the robots to keep us hooked onto our devices longer. This is the cycle as it happens on any social media platform. 1) You selectively “like” posts that you agree with. 2) The algorithm, a machine, in an attempt to keep you coming back to the platform, keeps showing you more posts that agree with your viewpoint.
3) This leads to greater engagement on the platform at the cost of ignorance, because you are only shown posts that you have been inclined to agree with. You do not know what you do not know. Think about that. To social media companies, this is no problem at all. Why would they want you to think when you can just mindlessly buy products through their advertising? Free comes at the cost of your attention. Free comes at the cost of your sanity. Free comes at the cost of your creativity and willingness to change. The Rise of the Uncompromising Idiot After enough days being caught in the loop above: check media, like posts, find more posts that agree with your viewpoint, write some comments on them, etc. Occasionally, you come across a post that doesn’t agree. You engage. Deep down, you know you’ll regret it, but you engage anyway to get a different point of view. After a few messages you realize the discussion isn’t going anywhere because the other person is already set in their point of view and nothing will change their mind. In response, you become a staunch defender of your own viewpoint. Neither side budges. Out of frustration, you go to a different social media platform and participate in boycotting the first famous person whose message you disagree with. Welcome to cancel culture. In your mind, you are a perfectly reasonable person because you found many other people on Facebook groups and Twitter channels that agree with you. If so many other people agree with your opinion, you can’t possibly be wrong. Why are there so many uncompromising idiots who don’t see your point of view?! Nowhere has this behavior been more amplified than during turbulent times. 2020 has delivered the first global recession since 2008, this time a recession in human health and well-being as well. Yet the world today seems to be a twisted dystopia compared to before. In 2008, there was a clear person to blame. It was Wall Street. It was bankers and their unbounded greed.
Since the problem was easily identifiable, the people were also more willing to act. The USA elected its first black president by a landslide on the promise of hope and change. Not only was Obama a decent human being and a charismatic leader, the timing of his election was a unified American response against the greed of Wall Street and the careless bankers in the financial industry that had destroyed the livelihoods of billions of people around the world. No other President in modern history embodied so many qualities of the American Dream: the premise that not just all men but all people are created equal; that by creating a culture of meritocracy, we can lift the brightest and best among us — regardless of race or economic background — to lead the free world and create a more perfect union. Yet, it was too good to last. Over the next decade, the American people realized that the institutional, systemic problems within our society cannot be solved by one person. In fact, the real puppeteers are behind the shadows and they come from all directions. With the rise of social media, a new puppeteer entered the picture: the mob. Keep these uncompromising idiots who make up the mob in mind, because this is the first extremity. They make up a small but vocal minority of Internet users. Social Proof is not Proof — it’s evidence Social media and its machinery are also designed to show evidence of what is accepted by its users through likes, favorites, shares, and retweets. This evidence leads us to believe that a particular viewpoint is broadly accepted by thousands (or tens of thousands) of people. So it must be true? Not really. Compared to the population of the world as a whole, a few thousand people is a drop in the bucket. Even 1 million people is less than 0.1% of Internet users. So how do you distinguish between truth and lie? Do you believe your gut and fly by the seat of your pants? Or do you dig deeper? Find a hidden meaning?
Rarely does anyone on the Internet (especially on social media) have the time or the inclination to show that the truth is nuanced. There are gray areas, things that are not true 100% of the time but operate according to a rule of thumb — a guesstimate. This is where the modern education system fails us, simply because it was not designed for the Information Age. Separating truth from fiction requires a skill that is not taught to students until college: critical thinking. When writing research papers, the first step a student takes is to research the existing material from credible sources. This first and most important step also happens to be the most difficult, because it requires something most of us aren’t used to doing when being constantly bombarded with information — thinking deliberately and objectively. Measuring Credibility Aristotle is considered by many to be the father of Western philosophy. Highly respected and revered across eras and religions, he is now known as “The First Teacher” and one of the greatest philosophers because of his influence on Alexander the Great and on various branches of ethics, military strategy, public speaking, metaphysics, and religion. Aristotle laid out three ways that a speaker can persuade his audience and, by the same measure, three ways that you can evaluate someone’s message. These three methods are credibility, logic, and emotion: ethos, logos, and pathos. Ethos is the credibility of the speaker. Unfortunately, in the Information Age, people are known to fake their credibility, listing degrees that do not exist and exaggerating their own expertise. While it’s comforting to know that you are listening to an expert, ethos is also about doing what is ethical. Listen to someone who appeals to higher morals, not just someone with an impressive degree. Evaluate the speaker or writer not only by their previous accomplishments but by the goodness within their argument, because goodness comes before greatness.
Logos is the logical argument made by a speaker or writer. The speaker or writer presents evidence that is connected to the thesis. This evidence can be used to further the argument through inference and conclusions. A reader or listener will evaluate this evidence for consistency and whether it fits the whole of the message that is being conveyed. Highly educated people will often prefer to evaluate a message based on logic and reason. Yet, with the time constraints presented today, most people do not have the time to stop to think and digest the information that they come across. Pathos is the emotional appeal. Whether positive or negative, a viewer will evaluate a piece of media based on the strength of emotion he or she feels after consuming it, whether it’s a sad movie, a funny sketch, or a political speech about pride and nationalism. The stronger the emotion, the more persuasive the message. In fact, when wielded skillfully, even with ill will, pathos will override ethos and logos when delivering a message, because it provides the viewer with a new lens to look at the message — a lens of optimism or a lens of rage. This is the most difficult part of crafting a persuasive message, but one that polished public speakers are good at. Which of the above three techniques do you use for evaluating the information you consume? Are you a logical person? Do you trust the opinion of experts? Or do you follow your own emotions and “gut instinct”? Seekers of Information Most people form their beliefs by absorbing information through just one of the above methods — ethos, logos, or pathos. A few people use two. Rarely, the occasional super-consumer of information will use all three. Just as the Information Age and social media have led to the rise of the Uncompromising Idiot, they’ve also led to the rise of the Seeker. This Seeker of Information recognizes that the value of the smartphone lies in the hand of its user.
When the entirety of the human condition can be tapped into with just a few fingers, a select few people are bound to be fascinated by it. Not just for the sake of reinforcing their own beliefs, but for the sake of truth, because this truth is absolute. It is woven into the fabric of life itself. Sometimes it feels harsh because it has no patience for your opinion. But it is there. Inescapable. It is the Hand of God. The Seekers make up the other side of the spectrum, balancing out the Uncompromising Idiots. These are the knowledge workers that make up the last standing moat of the shrinking middle class. In a world where automation and robots will slowly whittle away at the lower end of blue-collar jobs, the Seekers-to-be use the tools available to them — namely online education, YouTube, skills training, mentors and peer groups — to join the ranks of the knowledge workers. Some choose to go even further. To become a Seeker is to recognize that the Information Age presents knowledge as an endless quest — that ignorance is not just a trait of the Uncompromising Idiot, but of all of us. The Uncompromising Idiot is not someone outside of us on the Internet, but also within us. So is the Seeker. The greatest adventure of the Information Age, then, is to experience life in its newest and richest forms. To allow for the feeling of emotion (pathos), to allow for being corrected about what is right and wrong (ethos), and to look hard within oneself for the truth of all life and your life (logos). This human condition is inescapable and has been here since the dawn of ancient civilizations. That is the most pressing quest of the Information Age — the discovery of humanity.
https://ythakker.medium.com/a-tale-of-two-extremities-e9403a883964
['Yatit Thakker']
2020-11-27 23:50:38.930000+00:00
['Technology', '2020', 'Psychology', 'Society', 'Philosophy']
The firing of researcher Timnit Gebru from Google teaches an unexpected lesson about women and power
Photo by TechCrunch. Timnit Gebru was dismissed from Google for voicing criticism of its practices and policies. When I first heard that a Black woman researcher was fired by Google for criticizing the company, it sounded like the same old story. As Gebru herself suggested in an interview with NPR, male-dominated organizations are sometimes willing to appoint women to prominent roles — just as long as those women remain more-or-less silent, affirming the company line. But as the events unfolded over the following weeks, and as I learned more about Gebru herself, I reached another conclusion. Though the dismissal was an obvious attempt to silence a critical voice, and very likely an instance of racial and gender discrimination, this age-old oppressor’s strategy wasn’t successful. In fact, instead of silencing Gebru’s voice, Google’s move ended up amplifying it — attracting global media attention to the company’s discriminatory practices, protests from employees, and most recently a petition calling for the researcher’s reinstatement. This may sound like a David and Goliath story — an Ethiopian immigrant with academic credentials taking on a tech giant. But the story unfolded in this way precisely because Gebru, despite her humble beginnings, isn’t exactly a David. In fact, she is a highly accomplished Stanford-educated scholar, who has gained recognition for revealing the racism inherent in AI algorithms. The research essay that sparked her dispute with Google took the company to task over a number of environmental and cultural failings of its so-called “large language models” — apparently one of Google’s major undertakings. The email that Gebru subsequently sent to some of her Google colleagues on an internal listserv speaks to her social and cultural status.
Written in a confident, though exasperated, voice, it contrasts the steps Google has taken to dismiss her research with the peer-review process established in academe: Have you ever heard of someone getting “feedback” on a paper through a privileged and confidential document to HR? Does that sound like a standard procedure to you or does it just happen to people like me who are constantly dehumanized? Imagine this: You’ve sent a paper for feedback to 30+ researchers, you’re awaiting feedback from PR & Policy who you gave a heads up before you even wrote the work saying “we’re thinking of doing this”, working on a revision plan figuring out how to address different feedback from people, haven’t heard from PR & Policy besides them asking you for updates (in 2 months). She goes on to criticize Google for “micro” and “macro” discrimination against women and people of color before admonishing her colleagues that their internal criticism is essentially futile: What I want to say is stop writing your documents because it doesn’t make a difference. The DEI OKRs that we don’t know where they come from (and are never met anyways), the random discussions, the “we need more mentorship” rather than “we need to stop the toxic environments that hinder us from progressing” the constant fighting and education at your cost, they don’t matter. Because there is zero accountability. There is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people, you start making the other leaders upset when they don’t want to give you good ratings during calibration. Even though Gebru laments her powerlessness in the face of Google’s corporate machinery, she in fact speaks from a position of relative power. It is the power of a person who has a voice, cultural standing, and social and institutional status outside this particular organization.
According to Gebru’s biography, it isn’t a power that she was born with, but rather one she’s accrued through her education and successful career. Gebru’s status and relative power may be lost on some readers, but they are clearly visible to me because, somewhat like Gebru, I entered a PhD program as a lower-middle-class immigrant and worked hard to gain acceptance, and to accumulate some of the markings of status and prestige that allow me to publish articles and stand in front of a college classroom. However, as a mere adjunct professor with a handful of obscure publications, I would not have been able to call attention to Google’s inequitable practices in the manner she did. Unlike a rank-and-file employee — or even a rank-and-file researcher — Gebru was able to speak in an effective public voice, reference the practices of academe (on whose support she could rely), and risk losing her employment. More importantly, she was able to attract the media’s attention, and use that attention strategically to shine a spotlight on inequality. I’m pointing out these distinctions not to diminish the importance of Gebru’s critique or her courage in speaking out, but rather to emphasize how vital it is for women like Gebru to have a measure of power, and to have institutions that allow women access to such power. * Gebru’s success story, like most success stories of this sort, is an individual one. And as much as it tells us about what women in positions of power can accomplish, it also shows what they cannot. Gebru has successfully raised awareness of an injustice, but that injustice likely won’t be rectified anytime soon. For real change to happen — in AI, in Google, in other corporations, and in academia — a much larger shift has to take place, one that goes beyond identity politics. In the meantime, though, we are clearly much better off with women and minorities in positions of power than without them.
Without Gebru and others like her there may be no one to call attention to the racism and sexism that profit-driven AI technologies perpetuate. I hope that Gebru is able to continue working productively in her field and that there’s enough public pressure on Google and other tech giants to hire diverse professionals who will bring a critical perspective to this powerful field.
https://medium.com/digital-diplomacy/the-firing-of-research-timnit-gebru-from-google-has-an-unexpected-lesson-about-women-in-positions-8b0d85456c86
['Polina Kroik']
2020-12-23 16:51:17.330000+00:00
['Women In Tech', 'Google', 'Technology', 'Culture', 'Artificial Intelligence']
Is Your Quarantine Self-Care Oriented Outward or Inward?
My Facebook feed these days seems evenly split between two types of people. The people who are all like: I’m spending this quarantine leveling up my life. And the people who side-eye those people and have one word for them: Nope. I’ve never gone through anything like what we’re all going through now. I started to say ‘like pretty much everyone else alive,’ but of course that’s not true. I’m an extraordinarily privileged American who has lived with deep American-level poverty, but there are millions of human beings who suffer more than I can even fully imagine. But, as an American, I have gone through the same really scary, collectively traumatic times that all Americans have gone through in the twenty-first century. Namely: 9/11 and the 2009 recession. So, I know a few things about myself and how I deal with crisis and trauma. I’ve got this theory brewing that the way we all cope is similar to being introverted or extraverted. In other words, one way is not better than the other. It’s all about what we need the most in the moment. Let’s call the ‘leveling up’ folks Outward oriented. If you’re oriented Outward, you cope with trauma by looking around and seeing what you can do to get the hell out of the situation you’re in. How can you make this shit better? If you’re extremely Outward oriented, you go beyond yourself and are constantly on the lookout for how you can make this shit better for everyone around you. This is where you wind up with people who, in the midst of crisis, create programs to make sure that elderly people have what they need, or nurses who volunteer to go into the breach. And we’ll call the ‘nope’ folks Inward oriented. If you’re oriented Inward, you cope with trauma by curling into yourself in a protective way. It’s all you can do, maybe, to remember to follow basic convention rules: three hours of sleep, two meals, and one shower every day.
Or maybe the sleep part isn’t a problem, because your body is suddenly right on board with catching up on every hour of sleep you’ve ever lost — and you find yourself sleeping twelve hours a day. You’re making this shit better by intensely protecting yourself and your immediate loved ones. I’m Outward oriented. When shit gets hard — when the world (or just my world) starts falling apart — my brain goes into overdrive and I get all the ideas. All of them, all at once. I’m not joking. Every idea. They bombard me. Which is why in the last month I’ve: Pitched two non-fiction books, one to a publisher and one to my agent (who loved it, so now I’m writing a proposal.) Started a new, complicated, intense track for Ninja Writers that in calmer times might have taken me a year to ramp up. Hired a new member of the Ninja Writer team. Started planning a virtual writers conference (at least I managed to force myself to aim for summer for this and not RIGHT NOW.) Had coaching discovery calls with a dozen people. Completely decluttered my closet. Focusing inward when I’m in the midst of a crisis makes me feel like I’m losing my mind. I cannot curl up in my comfy clothes and binge-watch Outlander. I have to do something. Work is my self-care. Interestingly, I don’t think the Outward/Inward orientation idea is tied to Introverted/Extraverted. In other words, I don’t think that people who are Outward oriented are necessarily extraverted, or that those who are Inward oriented are always introverted. I know clearly extraverted people who are genuinely angry at Outward oriented people who are advocating for productivity during this time of quarantine. And I know introverted people who find themselves waking up in the morning bursting with ideas, ready to implement them right now. I’m personally an ambivert who tends toward introversion. I can be in groups and enjoy myself, but eventually I need some time alone to recharge. And I am strongly, strongly Outward oriented. So — bottom line.
If you’re Outward oriented, you’ll find yourself responding to social distancing with a nearly desperate need to do something. Go ahead and do it. Why not? Start something new. Learn something. Level up your life for the sake of your sanity. This is your self-care. And if you’re Inward oriented, you’ll find yourself responding to social distancing with a strong need to curl up into yourself. Your self-care will look like getting enough sleep, eating good food, reading books (or watching Netflix — whatever entertainment feeds your soul). And all of that is okay. Do it, if it’s keeping you sane right now. Both are valid self-care strategies.
https://medium.com/the-write-brain/is-your-quarantine-self-care-oriented-outward-or-inward-9f39f0505dc9
['Shaunta Grimes']
2020-04-04 16:07:30.679000+00:00
['Self', 'Mental Health', 'Health', 'Life', 'Covid 19']
Hacking Our Brain Through Data Visualization
Hundreds of millions of data transactions are processed every day, every hour and every minute within our Mercado Libre ecosystem, and they are consumed by thousands of analytical users in 18 Latin American countries. In turn, all this data generates a huge amount of information which needs to be translated into knowledge for efficient and quick decision-making. Try to imagine how the brain could process so much information and understand its meaning without proper data visualization. But… what does data visualization involve? Data visualization is the visual representation of (high-quality) information in elements such as a map or graph, with the purpose of making it easier for the human brain to understand and extract useful insights. Additionally, the aim is also to facilitate the identification of patterns, trends and outliers in large volumes of data. For example, when data is represented in a visual way, we can quickly detect behaviors that might go unnoticed if we viewed them as raw figures. It makes sense that data visualization gives us more meaning, since 90% of the information transmitted to the human brain is visual. We also process visual images 60,000 times faster than text, as the brain finds it easier to remember and analyze them. We have a premise at Mercado Libre: make data-driven decisions. We take it seriously when working on the design of our dashboards, applying best practices from the field of visual and cognitive neuroscience. And… how do we apply it? First of all, we need to know the difference between perception and cognition. There is a very big difference between these concepts, and it’s important to understand them in terms of how the human brain works. Then we can see how to apply that within a dashboard. Perception is the organization, identification, and interpretation of stimuli from your senses. A simple way of describing it is as the acquisition of information through the senses.
Cognition is the processing of information and the acquiring of knowledge through reason, intuition, and perception. A simpler way to think of it is as the processing of perception into knowledge. The main difference between these terms is that perception is fast while cognition is much slower. So if we can take advantage of perception, we can actually make our visuals work much more intuitively for our end-users. How is visual and cognitive neuroscience relevant? And… how is this relevant to us? We can make use of visual perception principles to design dashboards that result in an immediate understanding of the information arising from the data. The recommendations derived from visual and cognitive neuroscience are referred to as visual best practices, or VBP. Let’s take a look at the following example… one of the dashboards below applies VBP, while the other doesn’t. Let’s see if you can guess which is which. Both were made from the same data extract and both were built using the same application, in this case Tableau. The visuals above have been blurred to hide sensitive data. Hacking the Human Brain Now let’s get into the relevance of VBP when building a dashboard and, from there, to the main purpose of this article: how we can hack the human brain by taking advantage of perception and getting our users to understand the visuals much more quickly. Visual Perception and Visual Memory Seeing (perception) and comprehending (cognition) happen all the time, every day, from second to second. The shorter the time between seeing and comprehending, the more effective the visualization is, and thus the more efficient our end-users at Mercado Libre will be as well. The human brain can process a small amount of visual information very quickly, but only for a very short period of time. This is called Visual Short-Term Memory (VSTM).
Each element in the visual must be seen (perception) within roughly 40 milliseconds… any longer and it must be processed attentively (cognition). If we take advantage of VSTM, our end-users can interpret entire visuals within a few seconds. Making use of VSTM requires us to know what works and what doesn’t while visual processing is happening. Attributes which work well with VSTM are called pre-attentive visual features. If we don’t make use of these, the brain must send the visual information to another of its areas for more robust processing, but this additional processing is considerably slower than VSTM. Let’s try a simple example I’ll show some information, and we’ll try to understand what we are looking at. Then I am going to apply pre-attentive visual features to make the process much more understandable. The information below shows a sequence of numbers. We have to figure out, quickly, how many fives there are. Answering this question must be done consciously, or attentively. We can process it either using linguistic skills (find the number 5s) or using pattern-matching skills (find things that are shaped like 5), and both options require attentive (cognitive) processing. So, let’s make a little change to all those numbers and see if you can find out how many there are much faster. Answering this question can now be done subconsciously, or pre-attentively. We took advantage of a VSTM pre-attentive feature called color hue. This technique is known as perceptual pop-out and is often implemented as a highlight in visualization tools. Let’s move on to another simple example Has our GMV (Gross Merchandise Volume) increased or decreased over the last five years? And… which category has grown the fastest? This is even harder, because now we not only have to read numbers but do the math as well. It is slower for us; a trying process that clearly highlights the drawback of using cross tabs or text tables in your visuals.
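The “how many fives” pop-out demo above lends itself to a few lines of code. This is my own toy sketch, not anything from the Mercado Libre stack: a bracket marker stands in for the color-hue highlight a tool like Tableau would apply, and the digit grid is randomly generated.

```python
# A toy illustration of perceptual pop-out: render a digit grid as plain
# text, then again with the target digit wrapped in a visual marker.
# The marker stands in for the color-hue highlight a BI tool would apply;
# searching the marked version is pre-attentive, the plain one attentive.
import random

def render_grid(digits, target=None, marker="[{}]"):
    """Return the digit sequence as text; if `target` is given,
    wrap every occurrence in `marker` so it pops out."""
    cells = []
    for d in digits:
        if target is not None and d == target:
            cells.append(marker.format(d))
        else:
            cells.append(str(d))
    return " ".join(cells)

random.seed(7)  # reproducible demo data
digits = [random.randint(0, 9) for _ in range(60)]

plain = render_grid(digits)                  # attentive: scan every cell
highlighted = render_grid(digits, target=5)  # pre-attentive: the 5s pop out

print(plain)
print(highlighted)
print("fives:", digits.count(5))
```

The design point is the same as in the article: the data never changes, only one visual attribute (here, the marker; in a dashboard, color hue) is varied, and the search task collapses from conscious counting to instant recognition.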
A text table is the slowest possible way for a person to understand the meaning of the data that they are looking at. It cannot be processed pre-attentively, as text and math require attentive processing. So, we are going to take the exact same information, apply visual best practices to it, and ask the same questions… Now we should be able to process the information very quickly. In fact, we could put it up on the screen for about 40 milliseconds, take it away, and still be able to figure out that one of the lines was ahead of the others and that they all tend to go upwards. That is using pre-attentive features, in this case color and position. Our end-users don’t look at the dashboards, they scan them. As developers and designers, we have a lot of control over what our users at Mercado Libre look at when viewing a dashboard. To create the right path for the user’s eyes to follow, we first need to understand how our eyes process information. Visual Patterns How do we make sense of what we see? There are several categories of pre-attentive processing; let’s explore a few. Visual patterns (or visual hierarchies) let us know which data engage our users first and which visualizations they interact with (and in what order) while on the dashboard. By establishing a visual hierarchy, we ensure that communication between our users and dashboards is seamless, considering the following techniques: Scale: different-sized elements will guide the user’s attention — larger elements draw more attention compared to smaller elements. Color: people are drawn towards bold, contrasting colors. Contrast: color changes can be used to attract attention; contrasting the color of one element against another draws focus. Alignment: columns and grids can create alignment between elements and make the user take notice. Proximity: this helps separate and group certain data visualization elements together (or apart) to help distinguish between them. Scanning patterns: eye-tracking studies show where users focus their gaze once they are on a dashboard and where they proceed afterwards. And… talking about scanning patterns, it is important to mention that more than one kind exists, but I’m going to focus on the one we use most (or recommend) in our Data Visualization team when designing our dashboards… and that is the Z-Pattern layout. This pattern is a design concept that considers that users tend to view highly visual information in a Z-pattern, i.e. people tend to: first look at the top left and then move horizontally to the top right; then draw their eyes diagonally to the bottom left; and finally make one last horizontal movement to the bottom right. This is the viewing pattern that emerges from the eye movements in one of our dashboards: Visuals Matter… Every day we face data challenges (both in quantity and quality) and business questions that we need to answer on the spot. That’s why we empower our users by fostering their analytical capabilities through our data visualization best practices. At Mercado Libre, from our DataViz team we democratize knowledge by building dashboards that respond to strategic data-driven decisions, trying to understand how the brain works in terms of cognitive and perceptual processes. Thanks for reading!
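As a coda to the GMV cross-tab example: the attentive arithmetic a text table forces on the reader can be spelled out in code. The category names and figures below are invented for illustration (the article’s real dashboards are blurred); a line chart answers both questions pre-attentively, while this is the slow path the chart replaces.

```python
# The "attentive" math behind the GMV questions: has overall GMV increased,
# and which category grew fastest? A line chart answers both at a glance;
# a cross-tab makes the reader do this arithmetic in their head.
# All numbers and category names are hypothetical.

gmv = {  # category -> GMV by year, oldest to newest
    "Electronics": [120, 150, 190, 260, 340],
    "Home":        [200, 210, 225, 240, 250],
    "Fashion":     [90, 100, 115, 130, 150],
}

def total_trend(data):
    """True if total GMV in the last year exceeds the first year."""
    totals = [sum(year_values) for year_values in zip(*data.values())]
    return totals[-1] > totals[0]

def fastest_growing(data):
    """Category with the highest relative growth, first year to last."""
    return max(data, key=lambda c: data[c][-1] / data[c][0])

print(total_trend(gmv))      # True: the lines all tend upwards
print(fastest_growing(gmv))  # Electronics
```

Plotting `gmv` as one line per category (position and color, the two pre-attentive features the article names) would let a viewer reach both answers without any of this computation.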
https://medium.com/mercadolibre-datablog/hacking-our-brain-through-data-visualization-73556a82d8ac
['Pablo Garcia Casasola']
2020-10-27 20:14:28.640000+00:00
['Mercadolibre', 'Brain', 'Dashboard', 'Data Visualization', 'Hacking']
Why Your Genes Are Not Your Destiny
What is epigenetics? When most people think of evolution, the first name that comes to mind is Charles Darwin, but an earlier French naturalist named Jean-Baptiste Lamarck had proposed a theory of “acquired characteristics” by which individuals evolved certain traits within their lifetimes. The most oft-cited example discrediting this theory is that of giraffes elongating their necks by stretching to reach the treetops and then passing on this trait of long necks to their progeny. In contrast, Darwin proposed that those giraffes that had the longest necks went on to find food, survive, and reproduce. Eventually, Darwin’s theory of natural selection prevailed, but his French naturalist forerunner may have simply foreseen the field of epigenetics, the study of those drivers of gene expression that occur without a change in DNA sequence. The prefix “epi-” means above in Greek, and epigenetic changes determine whether genes are switched on or off and also influence the production of proteins. If you imagine your genetic code as the hardware of a computer, epigenetics is the software that runs on top and controls the operation of the hardware. Epigenetic changes control the expression of genes through various mechanisms. The epigenetic mechanism of DNA methylation involves tagging DNA bases with methyl groups, a process that tends to silence genes. DNA methylation is responsible for X-chromosome inactivation in females, a process necessary to ensure that females don’t produce twice the number of X-chromosome gene products as males. Methylation is also responsible for the normal suppression of many genes in somatic cells, allowing for cell differentiation. Every somatic cell in the human body contains nearly identical genetic material, but skin cells, muscle cells, bone cells, and nerve cells exhibit different properties due to different sets of genes being turned on or off. Methylation of cytosine bases leads to changes in gene expression.
Mariuswalter [CC BY-SA 4.0] Many epigenetic changes associated with the development of cancer have been found to be transgenerational; in other words, the altered epigenetic pattern can be passed down to subsequent generations without re-exposure to the agent that originally caused the epigenetic change. The effects of treatments, such as radiation and chemotherapy, and environmental toxins, such as endocrine disruptors, are often transmitted to offspring via reprogramming of DNA methylation patterns. Inappropriate DNA methylation has been referred to as a “hallmark of cancer,” along with uncontrolled cell growth/proliferation. Almost all types of human tumors are characterized by two distinct phenomena: global hypomethylation or loss of methylation, which may result in the expression of normally suppressed oncogenes, genes that promote tumor formation, as well as regional hypermethylation or increased methylation near tumor suppressor genes. In other words, genes that promote tumor formation are turned on while genes that suppress tumor formation are turned off. Cigarette smoke has been shown to promote both demethylation of metastatic genes in lung cancer cells as well as regional methylation of other specific genes via modulation of enzymatic activities. To succinctly summarize, genes themselves are not driving tumor formation; rather, inappropriate gene expression is increasing the risk of tumor development. National Human Genome Research Institute (NHGRI) from Bethesda, MD, USA [CC BY 2.0] Considering DNA methylation’s crucial role in regulating health and disease, insights gleaned from genetic models devoid of DNA methylation may not be relevant to the human condition. Epigenetic changes tend to be species-specific; the yeast Saccharomyces cerevisiae and the roundworm Caenorhabditis elegans don’t seem to exhibit DNA methylation at all, and the fruit fly Drosophila melanogaster displays only a minimal amount. A 2018 study conducted on the nematode C.
elegans concluded that the continual expression of genes responsible for developmental growth contributed to aging in later life; the authors argued the results could also be applied to humans. However, DNA methylation is widely prevalent in human genomes and responsible for gene suppression at different stages of development, which could prevent the unnecessary gene expression described in the study. Using models that are subject to similar epigenetic changes is critical when attempting to extrapolate findings to humans. As previously suggested, our attempts to unravel the interface between genes and disease without accounting for genetic expression may therefore lead to incomplete or inaccurate models of disease and aging.
https://medium.com/medical-myths-and-models/why-your-genes-are-not-your-destiny-952efa377d2e
['Nita Jain']
2020-02-03 20:53:57.796000+00:00
['Health', 'Education', 'Science', 'Ideas', 'Self Improvement']
Creating Visualization for Influence in Power BI
Photo by Samuel Clara on Unsplash Data can be complex, which often results in multiple data types: character, binary, numerical, and more, all describing a set of facts. Working with large repositories usually brings that complexity with it. However, keeping a small amount of data in a spreadsheet and checking semantics to gauge responses in slices is a viable, efficient way to analyze data and produce meaningful results. Once an analysis is completed on a small set, the same method can be applied at scale in a larger framework. Semantics: the meaning of words and language. This is a common term in Artificial Intelligence, used in Natural Language Processing to describe the inference relationships of communication, typically speech. Slice: a thin piece cut from a larger piece. Definitions often restrict this term to food items. For our purposes, and in analytics, a slice is not edible. It is data. Objective Show an example of data mining with Power BI and output a hierarchy of results based on inference from the data. Using an Artificial Intelligence (AI) technique in Power BI, we will understand where heart failure is more likely to occur and by what amount, showing the geographical influence of a diagnosis and projecting the probability of health outcomes. Description of tools in Power BI Analytics features for language processing (semantics), image detection, and integration with Azure Machine Learning are recent additions. Visuals now include slicing for AI features, representing results in a hierarchy within Power BI without requiring external integration to analyze locally stored, low-feature data. Power BI is an application from Microsoft that is a more powerful analysis tool than Excel, with a unique portfolio of capabilities. Example This example uses a dataset of Medicare claims. The claims are analyzed to find where an event is most likely to occur. Within Power BI, go through the data import for the claims file. Selecting the visualization tab on the left of the window, select “key influencers”.
This will slice the data to find the most likely places of heart failure, or the places where heart failures are more common as reasoned from Medicare claims. The conclusion is that the three states with the highest rate of heart failure are Colorado (CO), Mississippi (MS), and New Hampshire (NH). This shows where this event is more common and likely to occur. The Finish This is a sample, and the solution can be scaled from a spreadsheet to a linked database for big data analytics, mining semantics for reliable outcomes. Try it, and you may find slicing to be a great visual for telling a story with data. Data available at: https://catalog.data.gov/dataset/center-for-medicare-medicaid-services-cms-medicare-claims-data
https://medium.com/ai-in-plain-english/implementing-a-visualization-for-influence-in-power-bi-798642461572
['Sarah Mason']
2020-11-16 08:33:00.961000+00:00
['Probability', 'Artificial Intelligence', 'AI', 'Power Bi', 'Data Visualization']
React.js — Basic Hooks (useState, useEffect, & useContext)
Getting Started useState useState(), as defined above, hooks state into a component. It’s very easy to incorporate into functional components: In the screenshot above, we can take note of a few things. An invocation of the useState() hook involves two constants you should recognize: The state data structure/type itself (e.g. — state ), which will hold the initial value(s) for that instance of state. You can only pass one argument to useState(), but you can have multiple useState() invocations, as you’ll see outlined further in the reading. A setter function (e.g. — setState ) that is executed at a particular point in the component’s lifecycle to update the value(s) of the state’s data structure/type. Another thing to take note of in the React.js Docs regarding the array destructuring you see in the example above: When we declare a state variable with useState , it returns a pair — an array with two items. The first item is the current value, and the second is a function that lets us update it. Using [0] and [1] to access them is a bit confusing because they have a specific meaning. This is why we use array destructuring instead. (React.js Docs) One thing to take note of is that you can call these two variables, [state, setState] , whatever you want. There are two common naming conventions you’ll come across and see most engineers using: Encapsulate everything in one JavaScript object and assign all state properties to their respective default values. If you’ve worked with class-based React.js components before, this will be very familiar to you. It’s very common to see these two constants named state and setState , but again, feel free to name them whatever you want.
A nickname I’ve given to this approach in managing state, and one that I’ll refer to throughout the rest of this article, is “vertically-scaled component state”. Example: const [state, setState] = useState({ loading: true, error: '', signUpFormFields: { username: '', password: '' }, scoreCount: 0 }); Name the first constant in the destructured array a camel-cased version of whatever part of that component’s state you’ll be keeping track of and updating (e.g. — loading, error, isAuthenticated, signupFormFields, etc. ). Then, name the second constant the same as the first, but with set- prefixed to the naming convention along with proper adjustments in capitalization to follow the camel-casing practice. A nickname I’ve given to this approach in managing state, and one that I’ll refer to throughout the rest of this article, is “horizontally-scaled component state”. Example: const [loading, setLoading] = useState(true); // boolean const [error, setError] = useState(''); // string const [signUpFormFields, setSignUpFormFields] = useState({ username: '', password: '' }); // object const [value, setValue] = useState(0); // int Both do the same exact thing (for the most part), albeit they do have their own individual pros and cons: When it comes to horizontally-scaling your state, a negative is that you will find yourself having to run many more setter function invocations (e.g. — setState() ) to get your component’s state updated to the desired result. Solutions do exist to mitigate this from happening, such as using the useReducer() hook, but that is outside the scope of this article. If you’re looking for a tutorial to implement this solution, I would highly recommend Harry Wolff’s Why I Love useReducer video on YouTube where he does an incredible job going in-depth into the use of the useReducer() hook.
A positive in horizontally-scaling your component’s state is when you are using asynchronous calls, handling promises, or using libraries like fetch or axios to perform CRUD operations on APIs that may directly affect your component’s state. The reason for this has to do with how much data you’ll be retrieving from these services, and rewriting or copying over the entirety of a vertically-scaled state object will be far more costly than if you were to just make a single invocation of useState() mutually exclusive to that part of your application’s logic. Vertically-scaling your state has the upside that whenever you want to update your JavaScript object, you only have to call a single setter function. Some negatives, though, are that unless you are updating every single property in the JavaScript object, you’re going to have to leverage JavaScript’s spread syntax so as to not reset altered state values back to their original declarations as defined in the useState() hook.
This problem also exists in horizontally-scaled component state, but the spread operations are typically far less costly as you’re typically only handling very small changes. If this sounds a bit confusing, here’s what you would have to do if I took the vertically-scaled example above and wanted to update loading: false : const [state, setState] = useState({ loading: true, error: '', signUpFormFields: { username: '', password: '' }, scoreCount: 0 }); setState({ error: 'A new error entry!' }); /* If you wanted 'error' to persist its newly set state value ('A new error entry!'), you will have to use the spread operator the next time you updated your state (as I do below). Something to take note of: We don't have to use the spread operator in the first setState() invocation update above because there haven't been any previous changes to the initial state values. */ setState({ ...state, loading: false }); /* Your updated state object would now look like this: { loading: false, error: 'A new error entry!', signUpFormFields: { username: '', password: '' }, scoreCount: 0 } */ If you’re looking for a best practice on whether you should horizontally or vertically scale your state, the React.js docs recommend using multiple state invocations in place of a single state (i.e. — use horizontally-scaled component state): “However, we recommend to split state into multiple state variables based on which values tend to change together.” (React.js Docs). The reasoning behind this has to do primarily with the memoization of React.js components and the state(s) they handle. This is basically a fancy way of saying you should optimize the speed of costly function calls by caching the initial call once, then only doing a fetch on data that’s been altered from the cache. This approach means you will most likely have to incorporate more complex React.js hooks such as useMemo() and/or useCallback(), which falls outside of the scope of this article. 
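To make the spread-merge pattern above concrete outside of React, here is a small plain-JavaScript sketch. The state shape mirrors the article's hypothetical vertically-scaled example; no React APIs are involved, and the variable names (initialState, withoutSpread, withSpread) are invented for illustration:

```javascript
// Plain-JavaScript sketch of updating a vertically-scaled state object.
// No React here; this only demonstrates what object spread does.
const initialState = {
  loading: true,
  error: '',
  signUpFormFields: { username: '', password: '' },
  scoreCount: 0,
};

// Building a new object without spreading drops every property you omit:
const withoutSpread = { error: 'A new error entry!' };

// Spreading the previous state first preserves the untouched properties,
// and the properties listed afterwards override the spread-in values:
const withSpread = { ...initialState, error: 'A new error entry!', loading: false };
```

Here withoutSpread has lost loading, signUpFormFields, and scoreCount entirely, while withSpread keeps them and only overrides the two named properties, which is exactly why the setter calls in the vertically-scaled example spread the previous state.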
If you’re looking for a source to learn how to leverage these more complex React.js hooks, I would recommend Ben Awad’s useMemo and useCallback tutorials on YouTube. useEffect useEffect(), as defined earlier, hooks the primary concepts of componentDidMount() , componentDidUpdate() , and componentWillUnmount() into a component via one function invocation. It’s very easy to incorporate into functional components: useEffect() Hook In the screenshot above, we can take note of a few things. There are two arguments that are passed to useEffect(): An anonymous callback function that houses your useEffect logic. This logic is executed based upon how you set up useEffect() to run (we will discuss this further below). The second is an array that takes in comma-delimited variables, called the dependency list. This is how you change the way useEffect() operates. Important mention: If you don’t pass the second argument to the useEffect() hook (i.e. the dependency list), then the hook will run on every single render — this can be problematic if/when you’re using this hook in conjunction with something like useState(), because your component could spiral into a re-rendering loop where: first, the component runs the useEffect() hook on the initial render; then, some data in useState() is updated via the setter function as described above within the useEffect() hook; after the update, the component re-renders because of the state update and executes the same useEffect() hook again. I discuss how to prevent this with how one should use useEffect() below. Although useEffect() does leverage aspects of componentDidMount() , componentDidUpdate() , and componentWillUnmount() , it’s best not to think of useEffect() as a hook that carbon-copies over the functionality you get from each of these three lifecycle methods and compresses them into a single function that can be invoked across many instances within your component.
Rather, think of useEffect() as a function that executes specific tasks for your component before rendering (often referred to as “cleanup”), after rendering, and before the component’s unmount. Since useEffect() can be used in a plethora of different ways, most of which I will not cover in the scope of this article (no need to worry — I’ll provide resources below that will cover the more edge-case uses of the hook), I will only be covering the ways that you’ll see useEffect() implemented 80% of the time if I were to use Pareto’s principle for this specific React.js feature. Here are the more common ways useEffect() hooks are implemented: For a useEffect() invocation to only run on mount and unmount, use the useEffect() hook in the following manner: useEffect(() => { // some component logic to execute... }, []); /* Notice the empty array as the second argument above. We don't pass anything to the array as we don't want useEffect() to depend on anything - thus the purpose of having the dependency list in the first place. */ For a useEffect() invocation to run less, or more, often based upon what that useEffect() invocation is dependent on (i.e. — what is passed through to the dependency list), use the useEffect() hook in the following manner: const [value, setValue] = useState(0); useEffect(() => { // some component logic to execute... }, [value, setValue]); /* Notice the dependency array as the second argument above. We pass 'value' to the array as an example to showcase how this hook can work. This useEffect() invocation will execute every single time 'value' is updated. Another thing to mention is that arguments in the dependency list don't have to come from other hooks like they do in this example - they can be other forms of data that are assigned to a particular variable where the underlying assigned values can be/are mutated. */ Just like useState(), you can use as many instances of useEffect() as your component desires.
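Conceptually, the dependency list gates re-runs through a shallow comparison: each entry is compared against its value from the previous render with Object.is, and the effect is skipped only when every entry is unchanged. The sketch below models that decision in plain JavaScript; it is an illustration of the idea (shouldRunEffect is an invented name), not React's actual implementation:

```javascript
// Simplified model of how a dependency list gates an effect re-run.
// Each dependency is compared with its previous value via Object.is;
// if every pair is equal, the effect is skipped. (Illustrative only.)
function shouldRunEffect(prevDeps, nextDeps) {
  if (prevDeps === undefined) return true; // first render: always run
  if (nextDeps === undefined) return true; // no list given: run on every render
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}
```

Under this model, an empty list ([]) never changes, so the effect runs only on mount; a list like [value] re-runs the effect whenever value changes; and passing a freshly created object literal as a dependency re-runs the effect on every render, because each render produces a new object identity.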
Hooks let us split the code based on what it is doing rather than a lifecycle method name. (React.js Docs) If you need to make bespoke DOM alterations that wouldn’t be covered in the two examples provided above, I would recommend consulting the React.js Docs with a primary focus on the section, “Timing of Effects”. If you’re looking for an all-encapsulating resource regarding the useEffect() hook that could solve any and all of the problems you have with this particular hook, there is one article I would recommend reading as though it’s React.js gospel: A Complete Guide to useEffect. Written by Dan Abramov, co-author of Redux and Create React App and an all-around incredible engineer, the article dives into the nuances and the functionality this particular hook has to offer, all while providing easy-to-follow examples along the way. I couldn’t recommend a better resource and person to have written about this hook than this exact article. useContext Preface — At the time of publishing this article, there is another React.js solution to global state management which is currently in the experimental phase of development. This package, known as Recoil, seems to be a crossover between something like the React.js Context API and an agnostic global state management tool like Redux or MobX, all while bringing new design patterns to the table as to how to manage a front-end’s global state. I will not be discussing this library within the scope of this article, but I felt it was important to mention as the product may progress to a point in the future where it could become the best practice for implementing a global state management solution for React.js front-ends. Probably one of my favorite additions from the 16.8 update, the React Context API is a suite of API features that provides a mutable, global state data structure to hook into components at any point throughout the component tree, thus preventing a React.js anti-pattern known as prop drilling.
Take the following React.js component tree architecture into account as an example: React.js Component Tree Example Let’s say you have some piece of state logic in the index.jsx file located in the Authenticated folder (components/Authenticated) and you want to pass that data down to, for example, the Section3 component (components/Authenticated/Dashboard/Body/Section3). Before the React Context API, you would have to “drill” through each mediating component with props, even if you weren’t going to use that prop data in said component, before you got to the desired descendant component. There are a few reasons why this may not be the most suitable solution for passing data down the component tree, all of which are summed up perfectly in Kent C. Dodds’ blog post on this exact topic: It [prop drilling] inevitably leads to a very confusing data model for your application. It becomes difficult for anyone to find where data is initialized, where it’s updated, and where it’s used. Answering the question of “can I modify/delete this code without breaking anything?” is difficult to answer in that kind of a world. And that’s the question you should be optimizing for as you code. As an application grows, you may find yourself drilling through many layers of components. It’s not normally a big deal when you write it out initially, but after that code has been worked in for a few weeks, things start to get unwieldy for a few use cases: - Refactor the shape of some data. - Over-forwarding props (passing more props than is necessary) due to (re)moving a component that required some props but they’re no longer needed. - Under-forwarding props + abusing defaultProps so you're not made aware of missing props (also due to (re)moving a component). - Renaming props halfway through making keeping track of that in your brain difficult. (Kent C. Dodds’ Blog) This is where React’s Context API seeks to mitigate any issues that arise.
You can think of React’s Context API as a similar solution to that of well-known global state management tools like Redux or MobX, but with far less overhead and boilerplate, along with an approach that feels more native to React.js in place of a tool that operates agnostic to what front-end library/framework you’re using. Both solutions, of course, have their own pros and cons: For example, Redux or MobX makes more sense than the React Context API if the state in your application is constantly changing or if you can take on a larger bundle size because you know your product will benefit more from a far more comprehensive suite of global state management. Since a Context.Provider acts basically like one data structure (i.e. — vertically-scaled state) that can be accessed anywhere in the component tree, updates can also become very costly very quickly if you’re storing lots of properties with continuously updating values within a single invocation of Context.Provider. On the other hand, though, if your application is smaller and only requires interaction with the global state very infrequently, or if you’re using multiple invocations of Context.Provider (not the best solution — you should only use React’s Context API sparingly as stated below) to store different parts of the global state, then it may make more sense to use the Context API over Redux or MobX. Context is primarily used when some data needs to be accessible by many components at different nesting levels. Apply it sparingly because it makes component reuse more difficult. If you only want to avoid passing some props through many levels, component composition is often a simpler solution than context.
(React.js Docs) Now that we’ve covered the general overview of the React.js Context API, the problem(s) it solves, and its “competition”, here is an example of what a Provider/Consumer relationship looks like when using the Context API along with how to handle updates: src/providers/SomeProvider/index.jsx src/index.js useEffect() Hook from Above + SomeContext Update The photos above provide one example of how one may use a Context.Provider in parallel with the function createContext() and hook useContext(). If the photos were a bit hard to follow, here is how you can think of it in a potentially more mentally digestible fashion: A Context object is created via the createContext() function in React.js (line 3 in the first photo). This object always comes with a Provider component (as can be noted by the object destructuring in line 5 of the first photo) and a Consumer component. The Provider allows Consumers to subscribe to changes to the Context object. The Provider component has a prop called value (line 14 in the first photo) which takes in data that the Provider wishes to pass on to descendant Consumer components. In the example above, we are passing both the state and setState variables from the useState() hook in the Provider’s parent component down to any Consumer components so that two things can happen: first, so Consumer components can access all of the values that initially exist within the Provider’s parent component's state via state ; second, so they have the ability to mutate the state data at any point within the encapsulated component tree (I’m mutating the data in the third photo as can be seen in lines 14–18) via the setState setter function we pass down. The Provider can then be wrapped around any component, at any level, and multiple times within the component tree; in the second photo, you can see that I’m wrapping the Provider component around the entire application.
This puts the Provider and the data it can pass down to be consumed at the top of the component tree. If I wanted to nest the Provider component, I could do that as well. There are theoretically no limits as to where you can wrap a Provider component around a Consumer component — you can even wrap multiple Providers around each other as well as allow Consumer components to consume multiple nested contexts (see ‘Consuming Multiple Contexts’ in the React.js Docs for more on this). One thing you might notice is that I didn’t use the Context.Consumer property for handling consumers. You only have to use the Context.Consumer property of the Context object if you want to access the Context object itself, not the Context.Provider. In the example above, we’re accessing the Context.Provider which passes down state and setState via the value prop. This way, we can not only access the state but also update it in an elegant manner. If you merely wanted your Context object to be consumed and not mutated, you could put everything that’s inside the useState() hook directly into the createContext() function in place of null . If you need more of an understanding of the React.js Context API, I would highly recommend a few video resources:
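Since the screenshots referenced above aren't reproduced here, the Provider/Consumer relationship can also be illustrated with a toy plain-JavaScript model: a context pairs a default value with a stack of provided values, and a consumer reads the innermost provided value. This is only a sketch of the idea (createToyContext, provide, and consume are invented names), not how React implements createContext():

```javascript
// Toy model of a context: a default value plus a stack of provided values.
// A consumer reads the innermost (most recently provided) value,
// falling back to the default when no provider wraps it.
function createToyContext(defaultValue) {
  const providedValues = [];
  return {
    // Mimics <Provider value={...}> wrapping a subtree of components:
    provide(value, renderSubtree) {
      providedValues.push(value);
      try {
        return renderSubtree();
      } finally {
        providedValues.pop(); // leaving the subtree removes the provided value
      }
    },
    // Mimics calling useContext(SomeContext) inside a descendant component:
    consume() {
      return providedValues.length > 0
        ? providedValues[providedValues.length - 1]
        : defaultValue;
    },
  };
}
```

With this model, consume() returns the default value outside any provider and the nearest provided value inside one, which mirrors how useContext() resolves the closest Provider above the calling component in the tree.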
https://towardsdatascience.com/react-js-basic-hooks-usestate-useeffect-usecontext-f13bf3ad8206
['William Leiby']
2020-06-12 15:18:09.022000+00:00
['React', 'Programming', 'JavaScript', 'Software Engineering', 'Software Development']
The Top 6 Reasons Why I’m Grateful Not to be Vincent van Gogh
Most people are at least vaguely aware of Vincent van Gogh as the wacky painter guy who cut off his own ear. What they might not know is the full extent of the agony that was his waking life. So, let’s fix that! I now present to you six ways Vincent van Gogh’s life totally sucked that I’m glad I can’t relate to. He died when he was 37. And guess how old I am? I’m thirty freaking seven. I don’t know about you, but I’d rather not be dead so young. I’d especially rather not be dead from suicide, though I have to admit the thought does cross my mind occasionally. Do you even know how van Gogh killed himself? He shot himself in the stomach. In the stomach! Then he died from the infected wound two days later. Not only is that not an ideal way to die, it sounds absolutely miserable, making it a great way to sum up Vincent van Gogh’s life. He died before his paintings went viral. These days teen girls on Instagram have infinitely more fame than Mr. van Gogh ever had in his entire life, and most of them have done nothing! I, for one, am glad he didn’t live to see this day. Mostly because he’d be 166. Vincent did all of his painting in the last ten years of his life and most of his famous work in the last two years of his life. He was busy failing at everything else he tried before that. In fact, the only painting we’re positive he sold, The Red Vineyards near Arles, was sold a mere seven months before his death at an exhibition in Brussels. A fellow painter, who was his friend, bought it for 400 francs. It was clearly purchased out of pity! After van Gogh’s death, his brother Theo, who was his biggest fan, took possession of most of van Gogh’s paintings. Then he up and died six months later. Thanks bro. The only reason van Gogh has any fame at all is due in large part to the work of Theo’s wife, Johanna van Gogh-Bonger. His mental illness far exceeded my own. 
Look, I’ve had major depressive disorder and anxiety my whole life, and I’ve learned to not exacerbate it needlessly. Plus, I have way more options for treatment these days than van Gogh did in his time. But our boy Vincent? He just drank himself stupid and signed up at his local asylum. His most famous work was actually painted in that damn asylum. Remember The Starry Night? It’s meant to describe “the view from the east-facing window of his asylum room at Saint-Rémy-de-Provence,” with some creative liberties taken like adding a freaking town that didn’t exist. Also, van Gogh hated Starry Night and said it meant nothing to him. There’s actually a theory out there that says van Gogh didn’t even shoot himself. Supposedly somebody else did and he just decided he’d been done a favor and it was his time to go. “Hey there death. It’s me, ya boy!” The man suffered unknowable mental anguish to the point that his last words were reportedly “la tristesse durera toujours,” which translates to “the sadness will last forever.” Count me out of that. Syphilis. He had syphilis. Everyone had syphilis. It was incurable at the time. Now I don’t want to point fingers at his live-in prostitute girlfriend, but she’s definitely on the short list of suspects. Syphilis is actually how van Gogh’s brother Theo died. He had late stage syphilis and developed general paralysis of the insane. Pretty gnarly way to go out if you ask me. Either way, even if it is curable now, I don’t really want syphilis. I like both of my ears. Sure, you’ve probably heard about Vincent van Gogh losing an ear. How it represents the convergence of creativity and madness and all that. Cool story, but I’d rather keep mine intact. Some say van Gogh cut off his ear by himself. Others say fellow artist, and suspected lover, Paul Gauguin chopped that bad boy off with a saber. Cool, right? As for van Gogh? 
We’re not sure if he even remembered what happened that night, but according to his letters to his brother Theo, he was definitely scared of what havoc Gauguin might wreak with “more serious weapons.”
He was poor and relied on his brother’s generosity.
Van Gogh’s brother Theo was his lifeboat. I’m not sure if my brother can even float. Without Theo’s unending financial support, Vincent wouldn’t have been able to be a painter. He once remarked about the difficulty he faced even with Theo’s support in a letter to Theo: I am privileged above many others, but I cannot do everything which I might have the courage and energy to undertake. The expenses are so extensive, beginning with a model and food and housing, and ending with the different colours and brushes. What a pain in the booty it must have been to paint literally anything in the 1800s. We’re spoiled by today's standards, and that’s the way I like it.
https://natedoesart.medium.com/the-top-6-reasons-why-im-grateful-not-to-be-vincent-van-gogh-3d5eee7d3946
['Nate Miller']
2019-09-26 17:26:15.159000+00:00
['Art', 'Life', 'Mental Health', 'Creativity', 'Humor']
How The Fuck Does A Person Get To Be 400 Pounds?
Fat and lazy–that’s what most people think about morbid obesity, right? The most widely held belief about a super fat person is that they must have gotten that way simply by being lazy and stuffing shit in their face all day long, 365 days a year. That’s the ultimate message we get when experts call weight loss the sum of eating less and moving more. End of story. So if a person is fat–really fat–we readily accept the notion that they must sit on the couch all day long, watching TV and binge eating donuts and potato chips. As a society, we don’t really believe that fat people are much like us “normal people”–but they could be like us, if only they made better decisions as we do. Except that I’m not writing this as any normal person. I am writing this as a fat woman, a really fat woman, a morbidly obese American who weighs 400 pounds. Sometimes more, sometimes less. That number is probably shocking to readers who don't know me, and I imagine that the big question is how on earth did you let yourself get so big? Allow me to try to explain it–because laziness or eating too much junk food isn’t a completely honest answer. First off, let me be clear when I say it’s different for every fat person. Obese people get slapped with so many labels and generalizations, yet weight gain–and weight loss–is highly individual. One of my missions as a writer is to speak about my experience as a fat woman in a way that is completely honest and unfiltered.
Hormones and genetics play a role in morbid obesity.
Does it seem like every fat person has a health condition for an excuse? Then surely, I won’t disappoint you. When I was 5 or 6 years old, I was diagnosed with Central Precocious Puberty. That’s an endocrine disorder where your body enters puberty earlier than it “should.” I was treated with medication for more than 6 years to stunt my body’s changes, and had my first period medically induced.
My hormone levels and symptoms were constantly monitored, and I was given a diagnosis of another hormonal disease–Polycystic Ovarian Syndrome–at age 14. PCOS is commonly linked to excess weight, and studies have shown that women with PCOS burn fewer calories than women without the disease. For most women, when we’re talking about diet and weight loss, 1200 calories is a fair starting point. In fact, many health professionals advise women not to fall below that number. Women with PCOS, however, have been shown to either not lose or lose less weight than other women who go on a 1200 calorie diet because we naturally burn fewer calories. And genetics? Yes, obesity seems to run in families whether or not relatives actually know and have contact with one another. In my own family, I have seen a tendency for women to hold disproportionate weight in their calves and thighs, though I seem to have gotten the worst of it. Lipedema is a huge factor–a poorly understood condition where your calves and thighs (and eventually, upper arms) collect unusual fat deposits which do not respond to proper diet and exercise. Of course, no one told me I had lipedema until a year ago. My endocrinologist was concerned with my weight my entire childhood. I still recall how nervous I was each month for my doctor appointments because my weight seemed to creep up every damn time. In second or third grade, I quit drinking my milk at lunch because I knew I needed to be on a diet. My principal encouraged me to drink it anyway and when a friend loudly announced that I was dieting, I was humiliated. I looked normal–unless I wore shorts or a swimsuit. Then I was clearly and unusually pear-shaped. My body was embarrassing and stressful, even from a very young age.
Upbringing and lifestyle play their parts as well.
I learned very few healthy habits in my childhood. Don’t get me wrong–that’s no excuse to be unhealthy as an adult, but it certainly makes healthy living more complicated.
My mom either starved herself or overate, and never managed her own weight well. And although there was a brief period in my childhood where we walked everywhere, physical activity was not a significant aspect of our lives. Food was definitely the fixture. As an adult, I can see that my childhood wasn’t normal–it was far too sedentary and isolated. But we were also very poor–my mom didn’t work, and we never had the luxury of a car. There was no money for swimming lessons, dance classes, or many other ways to get kids active. I didn’t have a bike or roller skates that fit… so I never learned how to do those things. I did play softball for a couple of seasons because my father agreed to pay for it–but I was terrible. I can still see the exasperation on my coach’s face as I failed to catch the ball again and again. Looking back on our holidays as a child, food took precedence over everything else. Which to an extent makes sense because when you’re dirt poor, there’s only so much comfort available to you, and there are only so many resources with which you can treat yourself and your family. Clearly, a lot of junk food is cheap! So for us, food was more of a love language than fuel, a coping mechanism even, and eventually, it was a very poor stand-in for connection and relationship.
Weight loss and weight gain is a process.
Like many super obese people, it took years of struggling with my weight for me to get to the point of weighing nearly 400 pounds. In my freshman year of high school, I went vegan and lost ten pounds to get down to 135. I was about 5 and a half feet then–and still am. My mother took me to a free consultation about liposuction for my calves and the doctor never mentioned I had lipedema, but he said I needed to lose about 25 pounds for him to do it. Honestly, losing 25 pounds felt even more impossible than ever paying for cosmetic surgery. In my senior year, I went off of my vegan diet and my weight gain spiraled out of control.
I weighed 225 pounds at graduation and felt miserable. In college, I made myself stick to my own 800 calorie diet and got down to 185 pounds. After a while, I found the diet to be too stressful, and my weight crept up as I prepared for my wedding at age 20. My marriage was short-lived and unhappy; by the end of that 2.5 years, I weighed 308 pounds and had zero confidence in myself. After my divorce, I began working out every day and went back on my own very low-calorie plan that was 800 calories or less each day. I also worked a retail job that kept me on my feet. Within 8 months, I lost over 100 pounds and got to 196. But once again, I suffered burnout from the diet and began to gain weight as I dated again and went out to eat at restaurants. I still remember the terrible feeling of waking up in the morning and touching my tummy–I knew I was gaining back the weight that I had worked so hard to lose, but I felt powerless to stop it. My weight surely did creep… this time up to 355. After a few years of that, I decided to get serious about my weight loss again when I was 30. I began to dabble with a raw vegan diet and lost about 20 pounds on my own. Then I began a “raw food boot camp” program and went down to 285 in 6.5 months. I spent two hours on the treadmill at work every single day. But I felt great and planned to keep losing the weight. I got down to 250 less than a year after weighing 355, but derailed myself in 2013 with a spectacularly poor relationship choice. By the fall of 2013, I was quickly gaining weight again after moving across the country for the wrong kind of partner, and I was unexpectedly pregnant to boot. I was diagnosed with gestational diabetes, and I controlled my blood sugar well with my diet, but my weight had already gone up to 330. After giving birth to my daughter, my diet suffered again and I put on more weight while I breastfed her for a bit more than 2 years. With motherhood, food became my coping mechanism.
Today my daughter is nearly 5 years old, and my weight finally quit ranging between 340 and 375. It's right at 400 today. Yes, I have tried Keto, low carb, zero carb, fasting, vegan, intermittent fasting, and LCHF in the past 4+ years, but I never get below 340 pounds. And I was holding pretty stable at 340 for many months, until I began to battle my eating addiction more over the past year. Plus, I can easily gain 30 pounds in a month, partly due to lipedema, which increases water retention.
Obesity is more than a physical issue.
I don’t want to try to speak for other obese women, but for me, my mental health is very strongly linked to my weight. If you haven’t seen me for a while, and my weight goes up in the meantime? It’s a telltale sign that I’m not doing well emotionally or personally. Furthermore, I have found that when I experience a triple-digit weight loss, I also go through extreme mental and emotional upheaval which I still don’t fully understand. I am convinced that a significant missing component of most weight loss journeys is our mental health. I’m a rather isolated person, and practically a shut-in at certain times. That, without a doubt, impacts my mind and body. And my inner voice impacts my weight. When I call myself fat, lazy, or hopeless, I am much less likely to make healthy choices and much more likely to fall back upon my eating disorders. When I learned a couple years ago that I have lipedema, I sank into a deep depression because suddenly, I had no hope of ever having “normal” legs. No matter how much weight I lose, I will still have legs… well, legs like tree trunks. That devastated me for a very long time.
Sometimes self-care sucks... hard.
If you want to lose any significant amount of weight, you must prioritize your self-care. Which in its own right is somewhat depressing, because self-care is a discipline that takes plenty of resources.
Whether it’s time, money, transportation, childcare, or simply a better living arrangement with greater access to healthy food–self-care isn’t completely free. This is something that I think nearly every mom can relate to. We all know we need to fill our cups first before we can adequately care for our families, but we still put ourselves last on the list because it’s so damn hard to find the time or money. Self-care isn’t a luxury, but it sure feels like one, and I’d say it can be a bit of a privilege. Any time we talk about being morbidly obese, and how that even happens, we are talking about a truly weighty issue. It isn’t all physical. It isn’t all about food. It is emotional, spiritual, mental, physiological, social, and so much more. I’m positive that if obesity and weight loss were a cut-and-dried, simple science, most everyone would lose the weight and keep it off. And the experts would agree about why we get fat and exactly what we can do about it. So the next time you see a person who is really fat? I hope you can consider that it might be a much more complicated issue than you as a “normal person” could ever understand.
https://medium.com/60-months-to-ironman/how-the-fuck-does-a-person-get-to-be-400-pounds-675a60d2683f
['Shannon Ashley']
2019-02-28 18:05:01.411000+00:00
['Mental Health', 'Health', 'Weight Loss', 'Culture', 'Lifestyle']
Insights and My Experience From My Interview at Facebook
Onsite interview
Facebook AR/VR division office. There are two more Facebook offices in London. Facebook’s interview process is pretty quick. I heard back from the recruiter within two days and flew to London for my onsite round. A total of four interviews were scheduled.
Coding interviews (x 2)
Two questions are asked which need to be solved within 45–50 minutes. Areas covered were binary trees, strings, stacks, and lists. Discuss the solution first with the interviewer and then write your code on the whiteboard.
Tip: There are always edge cases that need to be tackled in the code, and it’s not easy to take care of them while coding under pressure. One technique that has helped me get through this is the test run. Immediately after coding the solution, I tell the interviewer that I am going to test-run my solution on a generic example and debug it myself first. The obvious mistakes in code are brought out by the first test run itself — better to find them yourself than to have them pointed out by the interviewer. Running through these test runs provides the time to think about edge cases, which can then be incorporated into the code.
System design interview
This is an interesting new round for entry-level software engineers. The aim is to design a system from scratch. The problem statement usually looks like this:
Design an existing product like WhatsApp, Facebook, Google search, etc.
Design a particular feature of one of these applications; say, implement a timeline in a Facebook app.
Design a completely hypothetical scenario; say, create a system to store logs of three servers situated far apart.
This interview definitely requires a special kind of preparation, and the most famous aid is Grokking the System Design Interview. The good part is there are no correct answers. You should be able to justify your design choices and know the tradeoffs you have made.
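The test-run habit from the coding rounds is easy to practise at home. Below is a sketch using a hypothetical question of the kind those rounds cover (balanced brackets, a classic stack exercise; it is not from the actual interview), followed by the quick self-run over a generic case and the edge cases that tends to surface mistakes before the interviewer does.

```python
# Hypothetical interview question: is a string of brackets balanced?
# A classic stack problem of the type mentioned above.
def is_balanced(s):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)
        elif ch in pairs:
            # A closing bracket with no matching opener means unbalanced.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # leftover openers also mean unbalanced

# The "test run": walk through a generic example first, then the edge
# cases (empty input, a lone closer, leftover openers) yourself.
for case, expected in [("([]{})", True), ("", True), (")", False), ("((", False)]:
    assert is_balanced(case) == expected
```

Doing this out loud on the whiteboard, input by input, is exactly the habit the tip above describes.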
Behavioral interview This is the easiest of all, but do not take it lightly because any red flags raised in this interview can cost you your selection. The questions revolve around non-technical experiences like leadership skills, team spirit, how you tackle disagreement, etc. This set of questions is almost fixed, hence answers can be prepared. Tip: A day before the interview, go through the list of popular behavioral questions and think about anecdotes from your professional life that support your answer to the question. Use this interview to relax in between the series of technical interviews. The confidence boost from this round helps in raising morale.
https://medium.com/better-programming/facebook-interview-experience-and-insights-51e383f3c70d
['Kriti Joshi']
2020-12-29 13:59:41.411000+00:00
['Programming', 'Facebook', 'Women In Tech', 'Interview', 'Engineering']
Improving Deep Learning for Ranking Stays at Airbnb
Search ranking is at the heart of Airbnb. Data from search logs* indicate it is a feature used by more than 90% of guests to book a place to stay. In ranking, we want the search results (referred to as listings) to be sorted by guest preference, a task for which we train a deep neural network (DNN). The general mechanism by which the DNN infers guest preference is by looking at past search results and the outcome associated with each listing that was shown. For example, booked listings are considered preferred over not booked ones. Changes to the DNN are therefore graded by the resulting change in booking volume. Previously, we’ve focused on how to effectively apply DNNs to this process of learning guest preference. But this process of learning makes a leap — it assumes future guest preferences can be learned from past observations. In this article, we go beyond the basic DNN building setup and examine this assumption in closer detail. In the process, we describe ways in which the simple learning to rank framework falls short and how we address some of these challenges. Our solution is what we refer to as the A, B, C, D of search ranking:
Architecture: Can we structure the DNN in a way that allows us to better represent guest preference?
Bias: Can we eliminate some of the systematic biases that exist in the past data?
Cold start: Can we correct for the disadvantage new listings face given they lack historical data?
Diversity of search results: Can we avoid the majority preference in past data from overwhelming the results of the future?
Architecture
The motivation for a better architecture came from the fact that the DNN-inferred guest preference seemed out of touch with the actual observed preference.
In particular, guest bookings were skewed towards economically priced listings, and the median price of booked listings was lower than the median price of search results shown. This suggested we could get closer to true guest preference by showing more lower-priced listings, an intuition we referred to as cheaper is better. However, explicitly applying price-based demotion to the DNN-ranked results led to a drop in bookings. In response to this, we discarded the cheaper is better intuition, realizing what we really needed was an architecture to predict the ideal listing for the trip. The architecture, which is shown below in Figure 1, has two towers. A tower fed by query and user features predicts the ideal listing for the trip. The second tower transforms raw listing features into a vector. During training, the towers are trained so that booked listings are closer to the ideal listing, while unbooked listings are pushed away from it. When tested online in a controlled A/B experiment, this architecture managed to increase bookings by +0.6%.
Figure 1. Query and listing tower architecture
Bias
One challenge in inferring guest preference from past bookings is that the booking decisions are not solely a function of guest preference. They are also influenced by the position in which the listings are shown in the search results. Attention of users drops monotonically as we go down the list of results, so we can infer that higher-ranked listings have a better chance of getting booked solely due to their position. This creates a feedback loop, where listings ranked highly by previous models continue to maintain higher positions in the future, even when they could be misaligned with guest preferences. Figure 2 below shows how the number of clicks a listing receives decays by its position in ranking, independent of the listing quality. The decay is shown per device platform.
Figure 2. Click-through rates by position in search results
To address this bias, we add position as a feature in the DNN. To avoid over-reliance on the position feature, we introduce it along with a dropout rate. In this case, we set the position feature to 0 probabilistically 15% of the time during training. This additional information lets the DNN learn the influence of both the position and the quality of the listing on the booking decision of a user. While ranking listings for future users, we then set the input position to 0, effectively leveling the playing field for all listings. Correcting for positional bias led to an increase of +0.7% in bookings in an online A/B test.
Cold Start
One clear scenario in which we cannot rely on past data is the case where previous data does not exist. This is most obvious in the case of new listings on the Airbnb platform. Via offline analysis, we observed there was much room for improvement when it came to ranking new listings, especially relative to their “steady-state” behavior once enough data had been collected. To address this cold start issue, we developed a more accurate way of estimating the engagement data of a new listing rather than simply using a global default value for all new listings. This method considers similar listings, as measured by geographic location and capacity, and aggregates data from those listings to produce a more accurate estimation of how a new listing would perform. These more accurate predictions for new listing engagement resulted in a +14% increase in bookings for new listings and an increase of +0.4% for overall bookings in a controlled, online A/B test.
Diversity of Search Results
Deep learning enabled us to create a powerful search ranking model that could predict the relevance of any individual listing based on its past performance. However, one angle that was missing was a more holistic view of the results shown to the user.
By only considering one listing at a time, we were unable to optimize for important properties of the overall result set, such as diversity. In fact, we observed that many of the top results seemed similar in terms of key attributes, such as price and location, which indicated a lack of diversity. In general, diverse results can contribute to a better user experience by illustrating the wide breadth of available choices rather than redundant items. Our solution to address diversity involved developing a novel deep learning architecture, which consisted of Recurrent Neural Networks (RNNs), to generate an embedding of the query context using the entire result sequence. This Query Context Embedding is then used to re-rank the input listings in light of the new information about the entire result set. For example, the model could now learn local patterns and uprank a listing when it is one of the only listings available in a popular area for that search request. The architecture for generating this Query Context Embedding is shown in Figure 3 below.
Figure 3. RNN for search result context
Overall, we found this led to an increase in the diversity of our search results, along with a +0.4% global booking gain in an online A/B test. The techniques described above enabled us to go beyond the basic deep learning setup, and they continue to serve all searches on Airbnb. That being said, this article touches on just a handful of the considerations that go into how our DNN works. Ultimately, we consider over 200 signals in determining search ranking. As we look into further improvements, a deeper understanding of guest preferences remains our guiding light.
Further Reading
Our papers published in the KDD conference go into greater technical depth:
Improving Deep Learning for Airbnb Search (KDD 2020) goes into the details of the neural network architecture, tackling positional bias and cold start.
Managing Diversity in Airbnb Search (KDD 2020) is dedicated to the techniques we used to improve diversity in search results.
Applying Deep Learning to Airbnb Search (KDD 2019) describes how to effectively apply DNNs to search ranking.
We always welcome ideas from our readers. For those interested in contributing to this work, please check out the open positions on the search team.
*Data collected during first two weeks of Aug 2020.
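As a concrete illustration of the positional-bias correction described earlier (position fed as a feature during training, dropped to 0 a fraction of the time, and fixed to 0 at serving time), here is a minimal sketch in plain Python. This is not Airbnb's code: only the 15% dropout rate and the zero-at-serving rule come from the article, and the feature layout is an assumption for illustration.

```python
import random

POSITION_DROPOUT = 0.15  # rate quoted in the article

def training_features(listing_features, position, rng=random):
    # During training, zero out the position feature 15% of the time so
    # the model cannot lean on position alone to explain a booking.
    pos = 0 if rng.random() < POSITION_DROPOUT else position
    return listing_features + [pos]

def serving_features(listing_features):
    # At ranking time every listing gets position 0, so candidates compete
    # on quality alone: a level playing field.
    return listing_features + [0]

# Usage: the same listing vector, seen by the model in both regimes.
train_row = training_features([0.3, 0.7], position=4)
serve_row = serving_features([0.3, 0.7])  # always [0.3, 0.7, 0]
```

The same pattern applies to any feature whose historical values encode a feedback loop rather than genuine preference.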
https://medium.com/airbnb-engineering/improving-deep-learning-for-ranking-stays-at-airbnb-959097638bde
['Malay Haldar']
2020-10-06 17:16:36.767000+00:00
['Engineering', 'AI', 'Airbnb', 'Neural Networks', 'Search Engines']
Swimming Alongside Grief
The only footprints are mine, but grief is with me as I dive into the waves. Together we delight in the heat of the sunshine and the cool of the water. Grief is there not as an unwelcome visitor but as an invited companion. It has taken me two years and several weeks of forced isolation to reach out and hold her hand. In the last few days, I have involved myself in all sorts of creative endeavours that I have neglected for years. I have begun writing music just for the love of it. I remember the flow of that process from over 30 years ago. I have been making storytelling videos, harking back to a career as a performing storyteller last practised professionally 20 years ago. Now I must write. Not to achieve anything, just because I must write.
Holly by a friend
Holly took her own life almost two years ago. At 28 she had just completed a Masters Degree in Behavioural Psychology and was working in her dream job supervising the tutors of autistic children. She had married her childhood sweetheart, sang in a choir and had many friends she could call on. It was a complete shock. My grief process has mainly involved me pulling myself together and getting on with things. My first reaction was to find ways to be there for others. I assumed this was my duty as Holly’s father. We spent ten days in London walking down to the hospital where her body was held. In a park by the river, we met her friends, colleagues and family members. We exchanged hugs and ate food together and shook our heads in disbelief. Far from home, I could only think of one personal friend in London. I called him up, and he came to join us and took me to a Sushi bar. I treasure that as a moment where I was doing something for myself. He was the perfect friend to share grief with. That one meeting amongst so many serves to illustrate the imbalance of my grief process.
At the funeral, I left my wife to bawl her eyes out by the graveside while I walked around the burial field, greeting each of the 300 or so family and friends. I am not judging myself here, just noticing that my process was focused on what I have felt to be the needs of others. Similarly, my first significant action within days of her death was to sign up for a Counselling Training course. The questions driving me were — How can I help others avoid having to live through such trauma? Or maybe — How can I help other young people going through similar mental distress? Now 23 months down the line I am beginning to notice how I am grieving for myself. The loss of Holly is such a massive trauma that my body, emotions and spirit have taken over the grieving process. I have given my mind space to step aside. I am feeling the grief in my body. I am allowing the emotions to bubble up and not trying to fix them. I have a clear sense of purpose and direction which is not driven by the needs of others. This is new to me. It is as if I had a concept in my mind that to create meaning and purpose in my life, I had to focus on the needs of others. Now I am finding purpose and meaning by focusing on my own needs, and this has allowed my creative processes to open up in new ways.
The music of grief.
Lockdown has enabled me to create a music studio for myself in the spare bedroom. Till now it has always been temporary. Dismantled for visitors or to pack up instruments for gigs. Now with none of that possible, it is stable. One morning last week, I moved from my second breakfast into the studio, and I started playing a melody on my little melodica. I became engrossed. The theme grew, an accompaniment flowered in the background. I made a rough recording layering up the parts as if I was writing music for a yet-to-be-created film. There were several things unusual about this. For 30 years, I have only written music for a specific purpose.
I worked as a Storyteller in schools and wrote music to accompany a particular story. Or I had a gig coming up that needed padding out. Always specific and with a clear brief and goal in mind. Today I wrote music for no reason just as I had in my youth. It flowed easily. There were times when I had tears in my eyes. There were times when I felt like I was observing myself composing music. As if the observer and the composer were two separate entities. Eventually, I pulled out of the process and left the music playing on a loop now on monitor speakers. I made coffee and sat with my wife. “That sounds like your grief,” she said. I realised at that moment that at last, I was allowing my grief to flow. Not judging it or trying to turn it to some purpose or managing it. I could sense it in my body. As I lifted the coffee cup, there was sadness there. But it was not stopping me from drinking my coffee or being a creative person. It was as if it was driving my creative process, and I was allowing it to do that. I was no longer in a battle with or struggling with grief. I had achieved some sort of partnership with it.
The story of grief.
In the next few days, the song took hold. For my grandson and two nephews, I have been making YouTube videos under my old performing name of Dragonfly Stories. I have not performed professionally as a storyteller for 20 years. I would however always be asked to resurrect a few for family birthdays and other occasions when we gathered together. The creating of the music now took on a new life in creating a story performance to go with it. I found myself researching myths, legends and folk tales related to grief. Many I could not relate to at all. Western culture and religions particularly seem to have a peculiar perception of the grief process. I found more resonance with Asian religions but finally managed to settle on using a common African folktale around which to hang my song and story.
I have called it Aaliyah — The girl with the golden hair. A young woman suffering the grief of losing her entire extended family travels through a dark forest and finds a village of people very unlike her. She settles in and is accepted even though she is clearly different to them. Each morning she walks around the village singing a song to awaken everyone to enjoy the day. The villagers have never sung or even heard singing before. They are entranced and ask Aaliyah to teach them to sing. She brings much joy and happiness to the village. However, the medicine man is jealous and thinks she has enchanted them with magic that he does not possess. He begins to implant doubt and fearful thoughts into the mind of Aaliyah. “They don’t really love you. They pretend they do because they are afraid.” “They are plotting to kill you.” “They will cut off your golden hair in the night.” Finally, she is so afraid that she leaves the village never to return. Now the villagers express their grief in a song. Every evening they gather by the forest and sing to Aaliyah.
Aaliyah,
We miss you when the morning comes.
Where are you, when will you return?
We miss you when the evening comes.
Will you return to us? Please return to us?
Although she never returns, the gift of singing and the joy she brought to the village remain and have become part of their lives.
The flow of grief.
The most used metaphor for traumatic grief is that of a black hole. You never get rid of it, but you grow a new life around it. I have never related to this metaphor. It suggests the grief is a singular item with a permanent place in your life which will never change. The metaphor that I have with me now is that of a flowing stream. The grief is not one of the rocks or a deep pool which life flows around. The grief is part of the flow. The grief is there flowing in shallow waters over pebbly ground catching the sunlight. It is part of the flow through dark underground caves.
It is part of the joyous splashing of a waterfall over rocks. It is part of the wide river on its determined journey to the ocean. It is part of the ocean. Grief will always be with me, but not as an obstacle or a distraction. Grief will now inspire me to write, compose or perform. This is quite different from using my creative skills as an outlet for my grief. I can now allow grief to be part of the driving force of my life. Not as an unwelcome visitor but as a companion.
https://medium.com/invisible-illness/swimming-alongside-grief-e5d3e323ad42
['John Walter']
2020-05-29 15:58:49.742000+00:00
['Creativity', 'Family', 'Mental Health', 'Self', 'Grief']
Image Registration: From SIFT to Deep Learning
What is Image Registration? Image registration is the process of transforming different images of one scene into the same coordinate system. These images can be taken at different times (multi-temporal registration), by different sensors (multi-modal registration), and/or from different viewpoints. The spatial relationships between these images can be rigid (translations and rotations), affine (shears for example), homographies, or complex large deformations models. Image registration has a wide variety of applications: it is essential as soon as the task at hand requires comparing multiple images of the same scene. It is very common in the field of medical imagery, as well as for satellite image analysis and optical flow. CT scan and MRI after registration In this article, we will focus on a few different ways to perform image registration between a reference image and a sensed image. We choose not to go into iterative / intensity-based methods because they are less commonly used.
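The feature-based pipeline sketched above (detect keypoints, describe them, match descriptors, estimate the transform) can be illustrated end-to-end in plain numpy for the translation-only case. This is a toy stand-in, not SIFT: gradient maxima play the role of keypoints, raw patches the role of descriptors, and a median over matched displacements the role of RANSAC.

```python
import numpy as np

def register_translation(ref, sensed, n_keypoints=20, patch=5):
    """Toy feature-based registration: find high-gradient keypoints in `ref`,
    locate each one's best-matching patch in `sensed` by exhaustive search,
    and take the median displacement as the estimated translation."""
    r = patch // 2
    # "Keypoint detection": strongest gradient-magnitude pixels, away from borders.
    gy, gx = np.gradient(ref.astype(float))
    mag = np.hypot(gx, gy)
    mag[:r + 1, :] = 0
    mag[-(r + 1):, :] = 0
    mag[:, :r + 1] = 0
    mag[:, -(r + 1):] = 0
    ys, xs = np.unravel_index(np.argsort(mag, axis=None)[-n_keypoints:], mag.shape)
    shifts = []
    for y, x in zip(ys, xs):
        desc = ref[y - r:y + r + 1, x - r:x + r + 1].astype(float)
        best, best_err = (0, 0), np.inf
        # "Descriptor matching": exhaustive search over a small displacement window.
        for dy in range(-10, 11):
            for dx in range(-10, 11):
                yy, xx = y + dy, x + dx
                if r <= yy < sensed.shape[0] - r and r <= xx < sensed.shape[1] - r:
                    cand = sensed[yy - r:yy + r + 1, xx - r:xx + r + 1].astype(float)
                    err = np.sum((cand - desc) ** 2)
                    if err < best_err:
                        best_err, best = err, (dy, dx)
        shifts.append(best)
    # Robust aggregation (a crude stand-in for RANSAC): median displacement.
    return tuple(int(v) for v in np.median(np.array(shifts), axis=0))
```

In practice each stage is replaced by its real counterpart: SIFT (or another detector) for keypoints and descriptors, a nearest-neighbour matcher, and a RANSAC homography estimate for transforms richer than a translation.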
https://medium.com/sicara/image-registration-sift-deep-learning-3c794d794b7a
['Emna Kamoun']
2020-01-30 13:31:19.271000+00:00
['Machine Learning', 'Neural Networks', 'Deep Learning', 'AI', 'Computer Vision']
Microsoft just added 3 interesting new Features to Lobe
Microsoft just released a new version of its tool that lets you train AI models without writing a single line of code. Microsoft released a brand new version of Lobe that will open a new range of possibilities and projects that can be built using machine learning. Lobe is a Windows or Mac desktop software program that allows everyone to create machine-learning models for image classification. It lets you build machine learning models with the help of a simple drag-and-drop interface. This new update introduces four significant improvements — the ability to select which camera you want to use in Lobe, new export formats, accelerated GPU training, and increased app performance. Input from any connected camera Now Lobe’s data collection happens in more places than just in front of your computer. This new update adds the ability to select any connected camera right inside Lobe, such as a USB camera, making data collection more seamless than ever. Export your models Another exciting update is that Lobe introduced ONNX and TensorFlow.js support to export your model universally or use it specifically on the web. It is also worth noting that TensorFlow Lite export now works with Android, which is a breaking change for existing exported models. Training models with GPU Acceleration But maybe the most exciting update is that Lobe now supports GPU acceleration, which will make training faster and more efficient. This will allow training to recede into the background even more, making the overall experience of building a model more enjoyable. Accelerated GPU training is available on Windows today, and Microsoft says that macOS support will come soon. As a first impression, in this new release Lobe is faster and runs more smoothly when scrolling and reacting to my interactions. You must try it yourself! Conclusion Lobe has been an exciting piece of software, with the potential to bring AI and computer vision to the masses, since its first release.
You can download it for free to get started on your machine learning models and join the community to share your feedback and see what others are building. Now, by making Lobe faster, more performant, and expanding its capabilities with more export formats and camera sources, Microsoft is opening a whole new range of possibilities for easy, powerful, and portable machine learning projects. For everyone! Further related content If you want to read more about Lobe, Artificial Intelligence, and Data Science, here you have some other articles I’ve written about it:
https://medium.com/dataseries/check-these-3-interesting-new-features-in-lobe-39bf4dfedcc7
['Jair Ribeiro']
2020-12-17 10:02:48.503000+00:00
['Machine Learning', 'Computer Vision', 'Microsoft', 'AI', 'Software']
Create a Dataset for Object Detection
Computer Vision Introduction The first step for most computer vision tasks, such as classification, segmentation, or detection, is to have custom data for your problem set. There are multiple ways of creating labeled data; one such method is annotation. The annotation technique manually creates regions in an image and assigns a label. Now, to keep things simple, we will be using two tools: the Pixel Annotation tool and Microsoft VoTT. You can read more about both tools, Pixel Annotation and Microsoft VoTT. Pixel Annotation Tool Installation for macOS. First, update brew using brew update. Next, you need to install a cross-platform application development framework such as qt. brew install qt The Pixel Annotation tool uses a watershed algorithm to do image segmentation. Readers can use this link to read more about the watershed algorithm in detail. brew install opencv On a Mac, curl is already installed; you can check it by typing curl -V in the terminal. Something like this will appear; otherwise, install curl using brew. brew install curl The Pixel Annotation tool does not come with a .dmg file or a graphical interface, so you need to build the source code into a stand-alone application. cd PixelAnnotationTool Inside this directory create the build mkdir build cd build Next, inside build use the following command: cmake .. -DCMAKE_BUILD_TYPE=$CONFIG -DDISABLE_MAINTAINER_CFLAGS=off -DCMAKE_PREFIX_PATH=$(brew --prefix qt) -DQMAKE_PATH=$(brew --prefix qt)/bin Finally, cmake --build . We are all set to run and use the Pixel Annotation tool. Go to Spotlight and search for the Pixel Annotation tool. Pixel Annotation Tool Creating a Dataset. Go to the File option at the top left and select Open a directory. On the top right, you will see all the file names. Select one image, say ‘Sachin.jpg.’ Go to the color panel on the left side and select any color, say ‘sky’. Move your cursor around the person (Sachin).
Then select another color, say ‘out of roi’, and move the cursor around the entire region except for the person. Then click on the watershed option at the bottom left and press Command + S to save the image. Finally, you will get this mask. Result Input Output This mask serves as input for any object detection model. Microsoft Visual Object Tagging Tool (VoTT) Installation for macOS. Unlike the Pixel Annotation tool, VoTT comes with a Disk Image (.dmg) file. You can download and install the tool from the link below: https://github.com/microsoft/VoTT/releases/download/v2.1.0/vott-2.1.0-darwin.dmg. Go to Spotlight, search for VoTT, and launch it. The tool appears like this. Click on New Project and fill in the Display Name, ‘Cricketers’ in my case. Add a Source Connection; click on it and a screen will pop up. In Provider, select Local File System if the files to annotate are on your laptop. Select the folder where the images are and click on “Save Connection.” Once saved, something like the below will appear; from the source connection drop-down, select ‘Cricketers,’ which we have created. Next, go to the target connection and add a connection. As with the source, for the target select Cricketers_annotations. At the bottom, there will be Tags. Enter the label you want, ‘cricketers’ in our case. Save Project A screen similar to this will appear. In the left panel, you will see an arrow mark (fourth row); click on it and the screen below will appear. From the Provider drop-down menu, select Pascal VOC and enter “Save Export Settings.” Create a box around the player and select the tag from the right, then save it using the save option at the top. Repeat this process for all images in the folder. Once done, we need to export the output. At the top of the image, there is an Export project option. Once exported, go to that folder. In this blog, we learned how to create a dataset for object detection and segmentation. Next, I will walk through the conversion of this mask into polygon co-ordinates and annotations.
A directory Cricketers-PascalVOC-export is created at the target location provided earlier. Enjoy!
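Once VoTT has exported in Pascal VOC format, each image gets an XML annotation file. A minimal sketch of reading one back with the standard library (the sample XML below is illustrative of the Pascal VOC convention, not an actual VoTT export):

```python
import xml.etree.ElementTree as ET

# Illustrative Pascal VOC annotation: one labeled bounding box for one image.
SAMPLE = """<annotation>
  <filename>Sachin.jpg</filename>
  <object>
    <name>cricketers</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>220</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return the image filename and a list of (label, (xmin, ymin, xmax, ymax))."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      tuple(int(b.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))))
    return root.findtext("filename"), boxes
```

In a real pipeline you would call `ET.parse(path)` on each file in the export directory instead of parsing an inline string.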
https://medium.com/towards-artificial-intelligence/create-your-dataset-for-object-detection-99f1ed04f2e5
['Pushkar Pushp']
2020-06-02 14:43:53.292000+00:00
['Object Detection', 'Programming', 'Computer Vision', 'Data Science', 'Artificial Intelligence']
Positive, Negative, or Neutral? Sentiment Analysis
Photo by Tengyart on Unsplash Communication is complex. There is written, verbal, and non-verbal. With technology, written communication has become especially important. We type to our devices more often than we speak to them. Using algorithms for classification and natural language processing, it is possible to score text on a scale to find sentiment. What is the value of sentiment? Sentiment is the tone of the selected piece of text: positive, negative, or neutral, with applications in marketing, healthcare, and other domains. How It Works Sentiment analysis is found by ranking. It is a logical operation of sorting terms and scoring them by association, using grammatical cues to determine the parts of each sentence. Afterwards, a trained algorithm scores the grouped words to determine, based on generated rules, how positive or negative the selected text is. This can then be reused reliably on different texts to determine sentiment. Classification and Ranking There are apps and libraries for using and creating an automated sentiment tool: Python has the Natural Language ToolKit (NLTK), and Lexalytics has several apps for sentiment analysis. The demo is featured here using a press release about the covid-19 vaccine and Walmart Clinics. Grouping terms is classification: creating rule-based probabilities that select themes from the text. The list is ordered highest to lowest, with “19 vaccine progress” the highest. It is interesting that this grouping does not include “covid”, but the number “19”. The rules select and group nouns and adjectives, assigning each a value on a scale based on evidence generated by the algorithm. The picture shows the text, word cloud, and sentiment. This used the information to generate the “positive” sentiment, showing anticipation and relief from a working covid-19 vaccine.
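The rank-and-score idea can be illustrated with a toy, self-contained lexicon scorer. This is an illustration of the concept only: the word weights and thresholds below are invented, and a real tool such as NLTK uses a far larger lexicon plus grammatical rules.

```python
# Toy lexicon-based sentiment scorer; the word weights are invented for illustration.
LEXICON = {"good": 1, "great": 2, "relief": 1, "progress": 1,
           "bad": -1, "terrible": -2, "fear": -1, "shortage": -1}

def sentiment(text):
    """Sum lexicon weights over the words of `text` and map the total to a label."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A press release full of words like "progress" and "relief" would score positive under this scheme, mirroring the result in the demo above.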
Following-Up Sentiment analysis makes scoring a passage of text easier and faster with “smart technology”: artificial-intelligence-enabled analysis of text. Often associated with responses and reviews, sentiment analysis can provide insight into pharmaceutical communications, gauging reaction to estimate supply and demand. In the example, sentiment is positive for the covid-19 vaccine, suggesting that demand is likely to be high.
https://medium.com/ai-in-plain-english/positive-negative-or-neutral-finding-sentiment-d6199710c008
['Sarah Mason']
2020-12-14 16:52:22.743000+00:00
['Machine Learning', 'Artificial Intelligence', 'NLP', 'AI', 'Sentiment Analysis']
How Can We Reduce Community Impacts of COVID-19?
The UHVI considers ages 65+ and serious medical conditions as “high risk” factors. The UHVI does not visualize occurrences of COVID-19. It does, however, include state-level data about overall testing and confirmed cases. It also includes hospital data to show geographic proximity, and it shows nursing home locations should additional safety protections in surrounding areas be required. How does identifying vulnerable neighborhoods support a strong response to coronavirus? The novel coronavirus, as reported by the CDC, disproportionately affects older adults and people of all ages with chronic conditions, like diabetes, lung disease, and heart disease. The UHVI quickly communicates where high-risk populations are located to help decision-makers keep people safe in the following ways: Helps government and health officials target public health interventions. Offers empirical insights for urban communities at the local level. Allows organizations to target outreach and assistance where it is most needed across cities. Provides information about the prevalence of chronic conditions related to those the CDC has identified as at-risk conditions. Identifies proximity to hospitals. Keeps vulnerable communities top of mind. How should we prioritize this information? The Urban Health Vulnerability Index offers a view of a community’s vulnerability based on health factors. This provides a good foundation for a variety of analyses to explore how we can combine health and other types of risk to better understand where populations may be most impacted by COVID-19. For example, we combined health vulnerability with social vulnerability to identify where populations are at particularly high risk from a health and social perspective. Social factors such as poverty, crowded housing, and language barriers can inhibit access to care, increase transmission, and predispose people to severe economic hardship as a result of measures taken to “flatten the curve”. 
For this analysis, we used the CDC Social Vulnerability Index (SVI). Similar to our Urban Health Vulnerability Index, the SVI measures factors that play a part in health and wellbeing: Housing Type and Transportation, Household Composition and Disability, Socioeconomic Status, and Minority Status and Language. Variables used to measure the CDC Social Vulnerability Index. Source: CDC Using New York City as an example, we evaluated overall vulnerability based on health and social variables, and narrowed our focus to census tracts that fall in the top 25% of each type of vulnerability. Close-up of legend. © RS21 Purple areas indicate high vulnerability in both health and social factors; red areas denote high health vulnerability; blue shows high social vulnerability; and gray tracts represent areas in the lower 75% of risk for both health and social factors. We then segmented the social vulnerability factors to look at how each of the four SVI sub-themes correlated with health indicators. We found a high correlation between the socioeconomic and health indicators. TIME Magazine has already reported on how the coronavirus could disproportionately hurt the poor. Lower income communities are often less protected, with circumstances such as reduced access to health care, being underinsured, or experiencing food insecurity.
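The top-25% overlay described above is straightforward to reproduce. Here is a hedged sketch (the column names health_score and social_score are assumptions for illustration, not the actual UHVI/SVI field names):

```python
import pandas as pd

def classify(df):
    """Flag census tracts in the top 25% of health and/or social vulnerability,
    mirroring the four-colour map described in the text."""
    h = df["health_score"] >= df["health_score"].quantile(0.75)
    s = df["social_score"] >= df["social_score"].quantile(0.75)
    out = df.copy()
    out["category"] = "lower 75% (gray)"
    out.loc[h & ~s, "category"] = "high health (red)"
    out.loc[~h & s, "category"] = "high social (blue)"
    out.loc[h & s, "category"] = "high both (purple)"
    return out
```

With real data, each row would be a census tract and the scores would come from the two indices; the boolean masks make the four-way legend explicit.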
https://medium.com/rs21/how-can-we-reduce-community-impacts-of-covid-19-b5f706aedf4e
[]
2020-04-09 17:33:27.808000+00:00
['Data Science', 'Technology', 'Health', 'Vulnerability', 'Coronavirus']
Practices That Doubled My Productivity as a Developer
Set an Easy-to-Define, Concrete Endpoint for Each Day I would usually find myself slacking in the middle of the day. I would be writing many lines of code per minute only to find myself looking at motorcycle reviews a while later. One day, I had to fix a serious bug before the end of the day since it was affecting a large number of customers. On that day, I worked better, faster, and more clearly than ever before due to one simple reason: I had a concrete and easy-to-understand goal. I know now that when I’m not actually creating a solution when I should be, it’s most likely due to not knowing what my goal is. The second I have a goal for the day that can be written in a line or two, I’m effective. Part of the reason this works is the Zeigarnik effect, which theorises that humans are, in simple terms, “closure-seeking animals.” In other words, we hate starting things and not finishing them. When you have clearly defined criteria, you know exactly what the next step will be. Here’s an example of how I use it. When I pick up a new feature to implement, I will write down a simple one-line goal. Real examples I have used include: “Update persistence code so that it uses the AndroidX Room library.” “Refactor feature X (or part of it) so that it uses the MVVM pattern.” “Create the first two UI screens for user journey X.” No finish line, no finishing, no progress.
https://medium.com/better-programming/practices-that-doubled-my-productivity-as-a-developer-70375a5f0c33
['S Pats']
2020-11-09 16:41:44.737000+00:00
['Programming', 'Software Development', 'Productivity', 'Coding', 'Startup']
10 habits I borrowed from python that I use in React (Part I)
Make your code as obvious as possible, but don’t exaggerate (Explicit is better than implicit) One way that I like to do this (that aligns with the previous point) is to use the power of React Hooks that allows us to isolate functionality. One problem that always shows up is tracking some binary state, which can easily be done with a boolean. So what do you always end up doing? useState. It looks something like this: An example of a perfectly valid use of useState to track some binary state. But “True” and “False” are internal logic states; they’re not very expressive in terms of what the ExampleComponent is supposed to be used for, so it’s not clear what “setSelectedTrue” means in terms of functionality when reading this code. One would have to read the code thoroughly to figure out what happens. One thing that I like to do is to isolate such logic into a hook: The “useBoolean” hook is a very re-usable hook (that I use all over the place!) to keep track of a boolean state. One thing that you often end up realizing by doing such re-factorings is that by naming your variables properly, you realize that your code didn’t make as much sense as you thought! In the original case, you could have reasoned “well if my ComponentOne is selected, I want to render those two, and if it is not, I only render ComponentTwo”. But by re-phrasing it as “if my ComponentOne is shown, I show it, otherwise I don’t, but in any case I need to render ComponentTwo”, it becomes much more obvious that ComponentTwo needs to be rendered the same way each time, and it sounds better when you’re thinking about it. The other nice thing about using custom hooks to re-use functionality and returning the results as an array is that if the callbacks in the array are generic enough, it becomes very expressive to be able to name them whatever you want, depending on the context. Notice how I used verbs like “show” and “hide” on a callback as simple as one that sets a boolean value.
This makes code really obvious to the reader.
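In real React code the useBoolean hook described above would be built on useState/useCallback and return an array like [value, show, hide]; stripped of React, the underlying idea is just a closure over a boolean with expressively named callbacks (a hypothetical sketch, not the author's actual code):

```javascript
// Plain-JavaScript sketch of the idea behind a "useBoolean" hook:
// wrap a boolean in named callbacks so call sites read like intentions.
// In React this would use useState/useCallback and re-render on change.
function makeBoolean(initial = false) {
  let value = initial;
  return {
    get value() { return value; },      // stands in for the reactive state value
    setTrue: () => { value = true; },   // a call site can alias this as "show"
    setFalse: () => { value = false; }  // ...and this as "hide"
  };
}
```

The payoff is exactly the one the article describes: `show()` and `hide()` say what the component does, where `setSelected(true)` only says how the state changes.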
https://patrickdasilva.medium.com/10-habits-i-borrowed-from-python-that-i-use-in-react-part-i-463e241deaca
['Patrick Da Silva']
2020-11-26 14:16:06.466000+00:00
['Python', 'Developer', 'React', 'JavaScript']
Biology in Deep Space: the BioSentinel Mission
BioSentinel, a small satellite designed to test the effects of deep space cosmic radiation on living organisms, will be part of NASA’s Artemis I mission (Wikimedia commons, NASA) Unfriendly space Space is not a friendly place. No gravity, vacuum all around, radiation aplenty… Not exactly an environment where biological life (as we know it?) thrives. No wonder robotic missions are the preferred means of space exploration. Cheaper, less mess, less chance of people dying. In fact, there are several voices who suggest that (deep) space exploration is better left to our artificial colleagues. And yet. Some of us are not satisfied with seeing the marvels of the cosmos through a robot’s eyes. We want to go there, plant our feet on the moon, or watch a double sunset on an exoplanet. But getting there is not without its challenges. Distance and time are probably the biggest ones. Orbits of important satellites and the ISS (Wikimedia commons, cmglee) Even if we figure out how to bend the laws of physics or curb our enthusiasm to remain close to home, we’ll have to cross the sea of deep space. So far, we don’t really have a lot of experience with that. The farthest out humanity has been in person is the moon, and that was a relatively short trip. Longer stays have taken place on the International Space Station (ISS), but that is still in a low Earth orbit and well-buffered against the onslaught of cosmic radiation by Earth’s atmosphere. Biology in deep space If we persist in our vision of sailing the sea of stars, it’s probably a good idea to understand the potential effects of deep space travel on biological organisms. Many things, such as zero gravity and vacuum-like conditions, can be tested very well on the ISS, but deep space radiation is not one of them. Sure, there are facilities on earth where we can try to mimic the cosmic rays eager to smash our DNA.
Getting our metaphorical hands on the real deal, though, can be very helpful as well. BioSentinel (Wikimedia commons, NASA) Thus, BioSentinel. BioSentinel is a proposal for a low-cost CubeSat (aka mini-satellite). It is being developed by NASA to be one of the thirteen CubeSats that will tag along with the Artemis I mission as secondary payload. The Artemis I mission will test NASA’s Space Launch System and was originally planned for 2020. Now, the tentative launch date is sometime during November 2021. Original plan for Artemis 1 (Wikimedia commons, NASA) The primary aim of Artemis I is to bring the reusable Orion capsule into a six-day orbit around the moon. This will officially certify the Orion capsule and the SLS system for manned flights, such as the 2023 (planned) Artemis II mission. Yeast and radiation But before human beings crew the Orion, it’s yeast’s turn. Once BioSentinel is deployed, it will: …first undergo a ∼700-km lunar fly-by, then enter into a stable heliocentric orbit. As it orbits the sun, BioSentinel will conduct science experiments for 6–12 months, accumulating doses of radiation analogous to that of a potential human Mars mission. At the 6-month nominal mission duration, it will be ∼0.2 astronomical units (0.2 AU, ∼30 million km) from Earth. These science experiments involve Saccharomyces cerevisiae, or baker’s yeast, a veritable workhorse in current (molecular) biology. Two strains of this yeast will be part of the BioSentinel experiments: a ‘normal’ strain, and a strain which has a faulty DNA repair mechanism. Even though yeast and humans are quite far apart in the web of life, the genes and cellular processes that control DNA damage and repair show many similarities. In other words, studying how deep space radiation affects yeast DNA can help us hypothesize about what it might do to the DNA of human spacefarers.
Microfluidics card for the yeast strain on BioSentinel (Wikimedia commons, NASA) In preparation for the mission, the yeast strains will be desiccated. Once they enter the experimental phase, they will be rehydrated with a nutrient-rich broth. Then, the science begins. Different batches of yeast strains will be rehydrated at different times. Cell growth and metabolism will be monitored, as well as the amount and type of radiation, and the data will be beamed back to earth. Another BioSentinel payload will be delivered to the ISS, and a third one will stay on earth for comparison. The data is expected to teach us more about the effects of deep space cosmic radiation on DNA and its repair mechanisms. Maybe, it will help us develop tools and procedures to prevent DNA damage during long spaceflights. From there, to the stars.
https://medium.com/predict/biology-in-deep-space-the-biosentinel-mission-a7c77cf34527
['Gunnar De Winter']
2020-07-30 20:51:01.061000+00:00
['Biology', 'Science', 'Space', 'Future', 'NASA']
The Route to Mass Adoption
When you try VR now, quite honestly, it may be hard to see how this medium is touted to be bigger than TV by 2025 (Goldman Sachs). It isolates you from the people right next to you, it’s expensive and it’s out there in relatively small numbers. However, this is all going to change. Not overnight, but gradually. There’s a huge expectation and impatience for VR to be the next big thing right now. However, it’s been dawning on people that this uptake is going to take longer than originally expected. The technology is still in its infancy, at a stage we’ll find laughably clunky in the future, exactly like the first mobile phones with giant car batteries. So, without further ado, here are the factors which I think will contribute to the mass adoption of VR… Apple & Microsoft Sitting in the VR wings for a while, Apple have finally committed to supporting VR, and with quite a fanfare. Next up, the biggest single platform in the world — Windows. Microsoft see a big gap in the market for affordable VR headsets and are partnering with the likes of Asus and Dell to create these, starting at £250. These are expected in Autumn 2017. Gaming Often touted as the key driver in mass adoption of VR, I disagree. Whilst important, I actually think its part to play is smaller than that of cinematic VR. Last year Samsung made public their figures that 50% of the content viewed on their headsets is gaming, and 50% 360 video. According to Armando Kirwin, in his brilliant series on VR, the figure for gaming is now actually lower — around 35%, a trend that I see continuing in favour of 360 films and experiences. One of the most important aspects of gaming VR though is its multiplayer ability — making VR social — something I’ll come on to in more detail and one of the most vital parts to the route. Allowing people to communicate and play together, virtually, takes away one of VR’s biggest Achilles’ heels — isolation.
You’re still isolated from the real world but you’ll be so deeply connected to other players that the collaboration and sense of camaraderie created will be deeply memorable. Cost Right now, high-end VR headsets are too expensive: an HTC Vive is £750, and a computer to run it is minimum £1000, putting it out of reach of the vast majority of potential users. PlayStation VR is more reachable: if you look at the additional cost to a user who already owns a PlayStation, it’s £350, though that is still a lot. For mass adoption VR needs to be far cheaper — in the realms of the Google Daydream and Gear VR. This is never going to happen with a tethered system like the Oculus, HTC Vive or PlayStation — we need to look to mobile. Mobile For VR to reach the masses it has to be on mobile. Phones will continue to get more powerful, to the point that they are good enough to deliver high-end, interactive VR experiences and volumetric content. For this, the phones need to be able to track their environment, allowing users to walk around the physical and virtual space, just like on the Vive now. This ‘inside out’ tracking is what is used on Project Tango and is currently unique to only a couple of extremely powerful handsets. Spending <£100 on a VR headset as an addition to an existing device like a phone is a way that we are going to encourage mass adoption. Augmented Reality Augmented reality is coming back and it’s going to stay, in a big way. It had a stuttering start with Google Glass but the next generation of glasses will allow people huge advantages to everyday work and life. If you’ve not seen Keiichi Matsuda’s Hyper Reality then you should watch it now. Although frankly terrifying, it brilliantly illustrates how AR could work. Like it or not, this kind of technology will be prolific and very quickly so. AR will have a smartphone-like take up, getting millions of people accustomed to using wearable screens.
This is going to be huge for VR — these same headsets/glasses will allow VR content to be played. It won’t look as good on AR headsets — you’ll see the real world at the periphery of your vision — but it will be so easy to do. People won’t need to put headsets on to view VR, they will already be wearing headsets, a massive change. Cinematic VR There has been a messy, and entirely pointless argument about whether 360 video is VR. The pedants argue that it’s not, and by the dictionary definition they may be right. As far as I’m concerned, who cares. I can watch a 360 video in a VR headset and get as much pleasure and value from it as if I am playing a game. We watch films on our iPads, as well as playing games and other applications, so why does anyone care if we watch 360 videos in a headset?! Hollywood is experimenting with 360 content too. Titles including Wild, Jurassic Park, Jungle Book and The Martian have VR taster experiences. Alejandro González Iñárritu, director of The Revenant (and a host of other top Hollywood films), has recently launched a VR experience at Cannes which goes beyond a simple 360 video and allows you to walk around a physical space as part of the film. By doing so it ceases to be a film and moves into the realm of an experience, blurring the lines between 360 and VR. ‘Experience’ Iñárritu’s experience shows a glimpse of what will be possible in the future in VR and the breadth of possibility in this new medium. As filmmakers we need to open our minds from the simple linear track of narrative and into the broader world of choice and experience. The richness and immersiveness of what can be created will be a huge driver to adoption. You can watch a film, but you actually take part in an experience. Ready Player One A book by Ernest Cline about a dystopian future where people escape reality into a magical and boundaryless ‘metaverse’ has been essential reading for those of us in the industry for a few years.
Ready Player One is being made into a film by Steven Spielberg, set for release in spring 2018. It’s going to be a huge title with a massive launch and accompanying VR experiences. I think this title is going to singlehandedly have a massive impact on adoption in VR — mainly because it exposes the potential, sense of magic and also practical range of ways that VR can be used in the future. Live The ability to watch a gig or a sports event live in VR is a fantastic pull to potential VR recruits. To date, the quality of this experience has been lacking, due in equal part to the speed of internet connections and the quality and reliability of the technology involved in streaming. This is all changing fast though and there are a number of out-of-the-box live streaming solutions, including the Orah 4i and simpler systems like the Ricoh Theta and Samsung Gear 360. All of these systems now stream live to YouTube and Facebook in 360, removing a big technological barrier to streaming — distribution. Streaming in 360 is one thing; it’s going to allow people those money-can’t-buy experiences (backstage, on stage, courtside etc), but things will get really amazing as the technology keeps evolving. As volumetric capture develops we will be able to live stream events in a way that will allow the viewer to actually walk around and explore the experience — really rivalling, and perhaps exceeding, the real thing. Retail I did a talk for Wired a couple of years ago on the future of VR in retail. I still feel like this will be one of the biggest aspects of the future of VR, both from a revenue generation perspective and also from a popularity and practicality perspective. Already automotive marques such as Audi have been grasping the potential of VR to sell their cars and Thomas Cook have also seen massive ROI on using VR in retail, selling 190% more holidays when they use it. I recently wrote an article about ROI in VR here.
People will be inspired to buy into VR if they think they can have shopping experiences that are better than they can online or even in person. You can’t instantly change the colour of the car in front of you, or try different wheels in the real world, at the click of a button. Sure, there’s a lot that’s never going to be as good as the real thing, but there is so much time you can save and so much iteration of choice you can have in the virtual world. ‘VRcades’ Hugely successful already in China and growing fast in the US — VRcades where people pay for a VR experience in high footfall locations are going to be a great way of onboarding new users. HTC have a licensing model for this already, expect other manufacturers to follow. B2B Whilst not the glittery front page of companies’ communications, there will be a huge number of B2B virtual reality applications emerging. Training in VR is already proving itself strongly, saving companies cash and improving engagement of staff being trained. Companies will also increasingly use VR for design, collaboration, and education. Whilst not a big driver in adoption, it will certainly help drive awareness. Social Facebook sees VR not as isolating, as it may fairly be seen now, but as one of the biggest social enablers the world will know. What Zuckerberg has been very smart to identify is that people will be able to play, meet, communicate, work, learn and more in the virtual world. Right now, it may seem strange to talk to people as avatars in a virtual space, but it’s amazing how quickly you forget it’s not a real person. Emotion and expression come across surprisingly well with clever movements of mouths and expressive, if cartoon-like, faces. This Engadget review of Facebook Spaces helps to explain the feeling of social VR. In conclusion… As the technology improves, so will the capabilities and potential of VR content that can be viewed in headsets.
We’re not only discovering how to make better experiences but also how to create completely new experiences. Right now, we don’t even know what we will be able to access or do in the virtual world of the future, but here’s the thing: there are no limits. You will be able to travel to the moon, explore deep sea wrecks, visit zoos of extinct animals (or dinosaurs!), collaborate on projects virtually at work, learn about geology from the summit of Everest, watch Wimbledon live from the umpire’s chair, watch sport from the pitch itself, moving unseen amongst the players, and even be part of a zombie film. You’ll be able to operate as a surgeon, collaborate in construction, virtually network and so much more in B2B. As the technology improves, these experiences are going to feel more real, more magical and provide far more value.
https://medium.com/cinematicvr/the-route-to-mass-adoption-94457e5f91b
['Henry Stuart']
2017-07-27 09:44:35.709000+00:00
['Augmented Reality', 'Future', 'Virtual Reality', 'Tech', 'Storytelling']
Cannot attend The Fifth Elephant and Anthill Inside? Here’s how you can keep up.
Anthill Inside and The Fifth Elephant are packed with talks, discussions and un-conference events so you can make the most of the three days. If, for any reason, you cannot attend the events, you can now catch up on the talks online. HasGeek has produced recorded videos of most talks and sessions, where permitted. You can check out the archive of videos available on https://hasgeek.tv. This year, talks from The Fifth Elephant and Anthill Inside will be live-streamed on YouTube. Anthill Inside 2018 Date: 25 July 2018 Auditorium 1: https://www.youtube.com/watch?v=6aZ1RkH2WO0 Auditorium 2: https://www.youtube.com/watch?v=XOCyCQeNq_4 The Fifth Elephant 2018 Date: 26–27 July 2018 Day 1 (26 July): Auditorium 1: https://www.youtube.com/watch?v=NhBW05jdzpY Auditorium 2: https://www.youtube.com/watch?v=kWoMXQL0pAM Day 2 (27 July): Auditorium 1: https://www.youtube.com/watch?v=1BmaCtb9sZ8 Auditorium 2: https://www.youtube.com/watch?v=mF9KjgDV_xA The Fifth Elephant and Anthill Inside bring together practitioners and academics from across India and Asia working on data science, machine learning and AI. Speakers include: Subscribe to HasGeek TV on YouTube to stay updated when we release new videos or stream live. The recorded versions of the talks will be uploaded on YouTube as well as https://hasgeek.tv, so stay tuned for the videos.
https://medium.com/the-fifth-elephant-blog/cannot-attend-the-fifth-elephant-and-anthill-inside-heres-how-you-can-keep-up-46389df50612
['Abhishek Balaji']
2018-07-23 09:07:50.873000+00:00
['Live Streaming', 'Engineering', 'AI', 'Research', 'Data Science']
I built a Machine Learning Platform on AWS after passing SAP-C01 exam: Use Cases Layer
Let us discuss every step of the Machine Learning workflow and see some challenges that could be faced when dealing with these steps. 4.1 | Data cleansing and preparation From S3, and using Jupyter Notebooks spawned and managed by Kubeflow, the raw data is loaded and studied. Data Cleansing Pattern, by the author During this phase, a lot can happen to the data. For example: Balancing the data: in some cases, the data is imbalanced. For instance, in a use case like classifying emails into SPAM and not SPAM, we could have an insignificant number of SPAM examples compared to real emails. Multiple techniques exist to balance the data, such as oversampling, undersampling, and SMOTE. Labeling more data: in supervised learning, the number of labeled examples is sometimes insufficient to train a model. In this case, it is possible to label more data. Amazon Mechanical Turk could be used to accomplish this task. It is “a web service that provides an on-demand, scalable, human workforce to complete jobs that humans can do better than computers, such as recognizing objects in photographs.“⁵ Data Augmentation: when we don’t have enough data to properly train a machine learning model, data augmentation could be a solution as well. It is a technique used to generate more labeled data from the existing examples.
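Of those balancing techniques, the simplest to sketch is random oversampling (a naive stand-in for SMOTE; the function name and toy data below are illustrative, not part of the platform):

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until every class
    matches the size of the largest class (naive oversampling)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    X_parts, y_parts = [], []
    for c in classes:
        idx = np.where(y == c)[0]
        # Sample extra rows (with replacement) to top this class up
        extra = rng.choice(idx, size=n_max - len(idx), replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.vstack(X_parts), np.concatenate(y_parts)

# Toy imbalanced set: six "not SPAM" rows, two "SPAM" rows
X = np.arange(16).reshape(8, 2)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [6 6]
```

SMOTE goes one step further by interpolating synthetic minority examples instead of duplicating existing ones.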
When data augmentation is used for images, for example, new images can be generated from one image by flipping it, applying some filters to it, or cutting out some parts of it to introduce loss. The resulting data is stored in the cleansed data storage. 4.2 | Feature engineering This is one of the most challenging steps. Identifying features takes a lot of brainstorming and imagination. Examples of Uber’s features: ‘restaurant’s average meal preparation time over the last one hour’, ‘restaurant’s average meal preparation time over the last seven days’.⁶ Feature Engineering Pattern, by the author As explained in the previous article, two types of features can be computed: Online features: computed on streaming data and accessed with very low latency. These features are stored in DynamoDB. Storing a feature definition means storing its properties: the feature’s name, its type (numerical, categorical), its min and max values, etc. To compute online features, the ML platform uses the stream processing capability of the Data Platform. In other words, this will be a job running on the Data Platform, respecting the integration protocols between the two platforms. Offline features: computed on historical data, with no constraint of low-latency access. These features are stored in S3. Like the online features, offline features are computed with the batch processing capability of the Data Platform.
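As an illustration of what storing a feature definition could look like, here is a hypothetical schema carrying the properties listed above (the field names and the DynamoDB mapping are assumptions for the sketch, not the platform's actual schema):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class FeatureDefinition:
    name: str                          # e.g. an Uber-style rolling aggregate
    dtype: str                         # "numerical" or "categorical"
    min_value: Optional[float] = None  # only meaningful for numerical features
    max_value: Optional[float] = None

feat = FeatureDefinition(
    name="restaurant_avg_prep_time_1h",
    dtype="numerical",
    min_value=0.0,
    max_value=120.0,
)
# asdict() yields the item shape one could put into a DynamoDB table
print(asdict(feat))
```
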
The offline feature store must be synchronized with the online feature store regularly to “guarantee that the same data and batch pipeline is used for both training and serving”⁶. 4.3 | Model training The first step of the training phase consists of splitting the data into three sets: a training set for training the model, a validation set for hyperparameter optimization, and a test set to evaluate the model’s performance. Model Training Pattern, by the author Cleansed data, along with the features, is fed into the Model Training capability of the Data & Model preparation layer. Several models are then applied to the data, aiming to shortlist a performant model. As explained in the infrastructure layer article, FSx for Lustre is used as storage to host training data copied from S3. With this storage, the training phase is accelerated, as worker nodes get much better data access performance than reading directly from S3. The final step of this pattern is to store the trained model in a versioned repository of the trained models’ storage along with its versioned metadata. The model’s metadata could include the name and version of the model, the version of the data used to build it, the features it uses, its output formats, etc. 4.4 | Model evaluation Using Katib, along with the validation set, different hyperparameter combinations are tested on the chosen ML model to finally select the most performant combination. The ML model is then tested using the test set to evaluate its performance. Finally, the resulting model is stored in the trained models repository along with its metadata. Model Evaluation Pattern, by the author 4.5 | Model packaging and deployment This pattern consists of two steps: model packaging and model deployment. Each step has its own specifications. Model Operations Pattern, by the author First, the trained model should be packaged with the feature extractor and a sample data set that will help validate the model once deployed.
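As a sketch of that last idea, the bundled sample data set enables a post-deployment smoke test (the manifest keys, paths, and toy predictor below are all illustrative assumptions):

```python
# Illustrative package manifest: model artifact, feature extractor,
# and a small sample set with expected outputs
package = {
    "model_uri": "s3://models/demo/3/model.pkl",
    "feature_extractor": "extractors/demo_v3.py",
    "sample_data": [([5.1, 0.2], 1), ([1.0, 9.9], 0)],
}

def smoke_test(predict_fn, sample_data):
    """Feed the bundled samples to the freshly deployed model and
    check that it reproduces the expected outputs."""
    return all(predict_fn(x) == y for x, y in sample_data)

# Stand-in predictor for the sketch
ok = smoke_test(lambda x: int(x[0] > x[1]), package["sample_data"])
print(ok)  # True
```

If the smoke test fails right after deployment, the rollout can be aborted before any real traffic reaches the new model.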
One tricky thing to pay attention to though: in case the ML model is served in real-time, the model’s code must be efficient enough to respect SLAs. Sometimes, it should be reimplemented in a compiled language like C++ or Java instead of Python. Second, the resulting package is a Docker image stored in Elastic Container Registry (ECR). Finally, the model is deployed as a Docker container. Different deployment strategies could be considered: Single deployment: the already deployed model is simply replaced by the new one. Only one model is deployed at a time. This strategy is risky because: - It will cause a certain downtime during the replacement of the existing model. - The new model version could contain some bugs, so a rollback strategy should be ready to be executed. Blue/Green deployment: this technique keeps two versions of the model alive: a new version (Green) and an already running version (Blue). The traffic is then progressively shifted to the new version depending on the new model’s performance. With this strategy, downtime is minimized, and it is possible to do some A/B testing. Multi-Armed Bandit: with this “smart” method, the traffic is gradually directed to the most optimal model.
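A minimal sketch of that bandit idea is an epsilon-greedy router (the model names, reward numbers, and routing function are illustrative assumptions, not the platform's implementation):

```python
import random

def route_request(stats, rng, epsilon=0.1):
    """Epsilon-greedy router: usually send traffic to the model with the
    best observed average reward, occasionally explore the others."""
    if rng.random() < epsilon:
        return rng.choice(sorted(stats))
    return max(stats, key=lambda m: stats[m]["reward"] / max(stats[m]["n"], 1))

# Illustrative running stats for two deployed versions
stats = {
    "model_v1": {"reward": 90.0, "n": 100},  # ~0.90 average reward
    "model_v2": {"reward": 40.0, "n": 60},   # ~0.67 average reward
}
rng = random.Random(0)
picks = [route_request(stats, rng) for _ in range(1000)]
# The better-performing model receives the bulk of the traffic
print(picks.count("model_v1") > picks.count("model_v2"))  # True
```

Real bandit routers also update the reward statistics after every request, so the traffic split adapts as the models' observed performance drifts.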
4.6 | Model serving There are two possible patterns for serving the model: Real-time serving: when the model is deployed online, it is exposed to the end-user in the form of a REST API. Seldon Core is well integrated with Kubeflow and its Ambassador API Gateway to manage ingress traffic and expose the API endpoint. The serving pattern is as follows: 1. As explained earlier, the online feature store must be synchronized with the offline feature store on a regular basis to guarantee that “the same data is used for training and serving”⁶. 2. The user sends their contextualized data to the model using the API endpoint: in addition to the core data, some important information for traceability could be included in the request, like the sending time, the IP address of the sender, etc. 3. The feature extractor processes the received data and uses the model’s metadata to identify the right features. 4. The feature vector is then constructed and served to the model. 5. In the last step, a prediction is sent back to the user as well as to the prediction store. This prediction store is a must for monitoring the model’s performance. Real-Time Serving Pattern, by the author Batch serving: as explained in the previous article, this mode of serving is used when the ML model is applied to a large input, like a week’s history of songs chosen by the user to recommend the right songs for the next week. The pattern is as follows: 1. As in real-time serving, the offline and online feature stores are synced. 2. For batch serving, features are pulled from S3 storage, as there is no constraint of fast access. 3. Finally, the predictions of this model do not go straight to the user but instead go to another storage for future use. Batch Serving Pattern, by the author 4.7 | Model Monitoring As human behavior is unpredictable, the ML model’s performance is prone to degradation. That is why a monitoring pattern should be considered when deploying an ML model in production.
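One simple way to quantify that degradation, sketched below with toy data: periodically compare the logged predictions against labels recovered after the fact (for instance from users, or via Amazon Mechanical Turk). The function and data are illustrative assumptions:

```python
def monitored_accuracy(logged_predictions, recovered_labels):
    """Fraction of logged predictions that match the labels recovered
    later -- a minimal monitoring metric over the prediction store."""
    hits = sum(p == t for p, t in zip(logged_predictions, recovered_labels))
    return hits / len(logged_predictions)

preds = ["spam", "ham", "spam", "ham"]  # pulled from the prediction store
truth = ["spam", "ham", "ham", "ham"]   # labels recovered after the fact
acc = monitored_accuracy(preds, truth)
print(acc)  # 0.75 -- a drop below an agreed threshold would raise an alert
```
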
Changes in data are not the only origin of performance degradation; other causes could be the unavailability of some worker nodes or of a certain service, which could result in missed SLAs. Model Monitoring Pattern, by the author To monitor the model, some metrics should be continuously calculated and injected into the performance metrics storage. A common practice is to: Get the data from the predictions store and consider it unlabeled Construct the right predictions for this data: the right labels could be recovered from the user, for example, or by using Amazon Mechanical Turk Run a job to compare the model’s predictions with these labels. Prometheus and Grafana can be used to collect and visualize these metrics, respectively. These two solutions are well integrated with Seldon Core. Conclusion In this article, I tried to explain how to land a use case on the machine learning platform. I started by discussing the challenges faced when a new use case is selected by stakeholders, like studying the real nature of the use case and identifying the data sources needed to solve it. Then, with the Framework layer as a foundation, I studied the different patterns used during the lifecycle of a Machine Learning model and gave some known best practices. I believe a lot of topics encountered during this journey of building a Machine Learning Platform on AWS deserve more detail. One example could be to dive deep into the API Gateway layer to efficiently serve a real-time model. This will be the topic of my upcoming articles. If you have any questions, please reach out to me on LinkedIn.
[1] https://freeandopenmachinelearning.readthedocs.io/en/latest/ml-business-use.html [2] https://en.wikipedia.org/wiki/General_Data_Protection_Regulation [3] https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act [4] https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard [5] https://docs.aws.amazon.com/mturk/index.html [6] https://eng.uber.com/michelangelo-machine-learning-platform/
https://towardsdatascience.com/use-cases-layer-of-the-machine-learning-platform-aff2cfec21c5
['Salah Rekik']
2020-11-14 17:12:42.095000+00:00
['Machine Learning', 'Data Science', 'Technology', 'AWS', 'Artificial Intelligence']
The Age of the Privileged Revolutionary
At this juncture we ought to segue from the street revolutionary with their open mouths, posters, and weapons to the general population. There is something glaring in our midst: It is the share of the general population that shares in the revolutionary sentiment. They are perhaps less violent than their street counterparts but they are willing to espouse the-system-is-irredeemably-corrupt ideas just the same, and to back them up with an intellectualized vindication. This is historically significant and very revealing. Americans are increasingly souring on many of the precepts our country has long held dear. But ill feelings are not confined to cynicism, as there is also rage; the souring of cynicism and the smoldering of rage. A rage that government is insufficient to solve income inequality, racial tensions, and massive health care reform in one fell swoop. A rage that is also a little muddled and chaotic. Accordingly, the talk of radical initiatives such as universal health care and the defunding of law enforcement has heated up in the national conversation, colossal projects that are posed as panaceas for existing corruption. These are crusades mostly impelled by the progressive middle-and-upper classes, crusades fueled by a somewhat naïve faith in projects of vast, top-down destruction and subsequent reconstruction. All this to say, the age of the privileged revolutionary is riddled with an undisguised irony, informed by the philosophical winds blowing in the academic sphere, and suffused with a confused, angry energy which wishes to tear down and reject, all of which deserves to be explored. The Tug-of-War Democracy, capitalism, civil rights — all of these things, to some degree, have been problematized and subject to the scowl of suspicion. Democracy, for example, is somewhat morbidly paraded around as “hanging on by a thread”, at the same time that it is indirectly maligned.
Democracy as this time-tested model of western governments is not only criticized for its insufficiencies and bathed in a kind of disappointment for failing to meet the demands of the masses, but it is also regarded somewhat anxiously as an endangered system we need to preserve. But there is a lack of coherence and consensus, for people cannot make up their minds. Which one is it, to preserve or to tear down and make anew? This is the question of the era, isn’t it? For every adamant adherent of the latter option (and they usually make up the street protestors) there are just as many everyday citizens conflicted by this preserve-or-tear-down drama, best represented as a tug-of-war. Futility Progressives who hurl around the words “institutional” and “systemic” inject a kind of futility into the activism endeavor. And this is precisely the point. For example, it is a popular claim to make that racism is an “institutional” and “systemic” problem — racism is not something to be encountered in the individual human heart, it is more pressingly a living, breathing part of existing “systems” that troublesomely, everyone is, by default, “in denial” about. As it is, the notion of the “institutional” is so vastly large and the notion of the “systemic” too overwhelmingly abstract and complicated that no one can really engage them. So too is the notion of unwitting complicity (as in the proposed systemic racism example) imbued with a hopeless quality. People are understandably defeated by these terms. They are repeated so endlessly that they begin to feel suffocating and individuals begin to feel their choices narrow. People throw up their hands and concede that perhaps every structure our nation’s history has wrought is rotten to the core and can’t be trusted. Hence, “burn the system down” sentiment has escalated and is it any surprise? What’s ironic is how willing many are to reject the systems which have clearly benefited them. 
For example, sitting amongst the plush spoils of capitalism, westerners are apt to sully its image, to mercilessly poke holes in it (leading to a crisis of faith) rather than to make note of its achievements. After all, capitalism’s track record of furthering the collective standard-of-living is second to none but recognition of this fact is strikingly absent. Praise for capitalism is scant these days, replaced by the nose-wrinkling of its cynics. So too is democracy sourly regarded by the modern revolutionary as another classic western system that introduced a corruption of the sophisticated, ideal society and smeared the tempting fantasy of what-could-have-been. Cynicism We see this cynicism at play in the illiberal desire for censorship, a clear violation of democracy’s free speech doctrine. It need not only be the authoritarian hardness of outright censorship that we can point at; there are also the soft, passive-aggressive “language fashions” in which the party-line of particular political topics must be toed — a certain “correct” way of saying something. The idea abounds that the principle of free speech is outdated in its unsophistication and that its inherent permissiveness is “messy” rather than the mark of the human dignity of language liberty, say. To such a supposition, I flatly say no. Some things are fine just the way they are and ought to resist the toxic “sophistication upgrades” that the revolutionaries insist upon (believing that they have graduated from principles to projects, which I will talk about in a bit). Note that there are two breeds of revolutionary types: a) the street variety, who are “hard,” tolerant of violence (à la “the ends justify the means”), and do not mince their words or cloak their wishes and b) the intellectual variety, who are “soft,” are fond of employing academic jargon to advance technically radical ideas, and are more apt to use social pressure rather than force when it comes to furthering their aims. 
What happens when you didn’t fight for what you have? If history teaches us anything, human memory is prone to being washed of the lessons of the past with surprising ease. When we do not reflect on the past and safeguard timeless principles (and defend them in our speech) we are prone to ingratitude and worse, ignorance. It is true that those who fought for freedom have a far more visceral stake in preserving it than those who have lazily inherited it. The former will likely have a sharply-defined, fiery vindication for freedom, but their progeny is not likely to feel quite so convicted. This watering-down may be partly inevitable, simply the consequence of being bequeathed something that you did not have to work for, but there is more to the story. Principles vs. Projects Modern generations of prosperous westerners who have been mostly spared the kind of gritty, life-or-death struggles of war and/or merciless political oppression in their own backyards lack the clarity, in some ways, of what ought to be fought for — that is, classical western principles (they are few but clear-cut) such as liberty and equality of opportunity that have been historically held in high repute. The modern revolutionary has lost sight of principles and now finds themselves hungrily grasping towards projects instead. Principles need preservation and require historical memory. They are not especially exciting but they are exceedingly necessary. Liberty, for example, is an uncomplicated principle theoretically, but is vast in scope when enacted. It also does not take from anyone in order to label itself a general right of all people. This is not so with projects, which increasingly flutter about in the modern revolutionary’s imagination. Projects generally take from others in order to muscle particular goals into place.
And so, such individuals are beguiled by the promise of a future that can somehow incorporate a teeming multiplicity of demands stemming from a multiplicity of supposedly world-improving projects. Recall the slightly odd privileged-status of the modern revolutionary. It is highly interesting to observe that it is they, with their overly-ambitious (though necessarily vague) projects of re-ordering society, who march in the streets. A thin undercurrent of entitlement might even be spotted. The underlying observation is this: these activist-types are curiously dissatisfied, certain in their need (and right) to do away with the problematic fashions of the day, and weirdly content to not press into details of plans for change. So, western populations have been facing trouble without worthwhile principles to fight for. They have gotten their hands entangled in the seductive project instead, and because historical memory of principles-worth-defending is fading, individuals have become both restless and a little entitled. That the west’s success (which has made enormous strides in the arenas of law (human dignity) and economy (prosperity)) might produce a people who sour on these very sentiments — and behave “restless and a little entitled” — may not be quite so astonishing when we consider society’s underbelly of human psychology in all its complicated and paradoxical glory. More on that next. The Psychological Underbelly Hear me out: Take democracy, for example. It is far from an innate feature of human society but this reality can be difficult to recognize and value when you are disconnected from the work and the vindications that went into it. I have earlier explored this truth, writing the following: “[Democracy is] an ingenious intellectual creation predicated on the inevitable tension vibrating between freedom on one end and security on the other.
Democracy originated as a project of the human mind, which is to say that it’s not natural in the sense of our primitive makeup. Authoritarianism is closer to nature and is sloshing with power — that Darwinian albatross that haunts human society. Democracy, then, requires much vigilance to be maintained. Power is to nature as liberty is to civilization.” When we gain success and prosperity (which any honest accounting of America’s progress will indicate we indeed have), the tautness that exists during gritty periods of history collapses. Simply put, the higher we ascend, the greater our chances of forgetting simple, sobering realities such as “democracy isn’t natural and ought to be maintained” and taking the sacrifices of the past for granted. When we believe we largely have something in the bag, we have energies to divert elsewhere. This, I am precariously speculating, can account for the modern revolutionary’s shoveling of energies into finding fault with systems like democracy rather than fighting for them. Call it a theory of (psychological) energy equilibrium. We are subconsciously compelled to work on something, and if we have already built something, it is scarily possible that we will eventually reach a point where we will start to claw at what we have built for no other reason than we are discontented with inaction. It would appear that in the second decade of the 21st century we have reached that point. Hence the cynicism, the futility, the disorienting demands of confusingly “privileged” revolutionaries. The role of higher education One must also account for the inarguable responsibility of academia not only for producing the modern revolutionary but for producing enough of a lather that the mainstream has begun to adopt these ideas. 
Steeped in the cynical theories of systemic (and necessarily unsolvable, short of the system being overthrown) racism and systemic patriarchy (again, unsolvable, short of destruction and a new installment), a new generation of Americans is taught to regard celebratory historical achievement and slow-but-steady improvements as an affront to what-could-be (a totally reworked society, the plan for which is rather fuzzy and undeveloped). Weirdly, race and gender relations seem to have worsened in this era. Little wonder when the aforementioned theories are so often unabashedly alienating. Their allure lies in their futile reading of race and gender relations and the accompanying justification for some kind of radical action plan (which plays to the energy-equilibrium psychological dynamic I pointed out earlier). And steeped in fashionable notions such as American un-exceptionalism and introduced to the degradation of capitalism, so too is a new generation of Americans taught that it is something of a virtue (or at least a necessary quest) to find ways to discredit ideas that have long been venerable. From looking at what is handed down in college classrooms (and subsequently maneuvers its way into journalism and HR and so on) we can begin to see how those graced with the spoils of education have embraced a soured look at America.
https://medium.com/discourse/the-age-of-the-privileged-revolutionary-cfe49dc02874
['Lauren Reiff']
2020-12-22 03:18:05.978000+00:00
['Philosophy', 'World', 'Psychology', 'Society', 'Politics']
Writing For Money Isn’t What You Think
Sometimes work is all about the money. In lieu of pursuing my passions, I wound up choosing jobs that would pay the bills and keep me off government aid. The funny thing is that some people believe my work is all about the money for me now that I’m earning a good income. But in the past when I wasn’t writing, my work was only about the money. I worked one job after another which I hated, and I did it just to get by. I used to work at Bruegger’s Bagels, Michael’s Arts and Crafts, The Dollar Tree, GE Money Bank, and Ecolab, Inc. More recently, I worked for a social media marketing agency. Nobody ever accused me of “just being in it for the money” when I had any of those jobs. And ironically, I was 100% in them for the money. Some work is not all about the money. I began my online writing career because my former work at home gig wasn’t going well. As a struggling single mom, I was desperate to create a better life for me and my daughter. Desperate to avoid another shitty job I’d only hate in the end. If I was going to build a better life, I decided that I needed to take a big risk and finally create a career that I love. I was already about to lose everything (again), and I think there’s something really beautiful about that space. People say the timing is never going to be just right, but I think that desperation is a pretty damn good time to put everything on the line and see what you can do. I put everything I had into starting an online writing career in April 2018 because I was sick to death of only working for money. Sick of spending my days emotionally drained. I love what I do because I’m following my heart. I’d like to believe that there’s a place where the passion and money can intersect. Needing money is unavoidable as a single mom without a real family or support network. If I get into a jam, it’s all on me. Money is a necessary part of life, but I don’t want to dedicate myself to soul crushing work. 
This is how I’ve always been, but I didn’t do anything about it until I became a mom. I had a therapist in Minnesota who once told me that some people need a creative career to function at their best. It’s not enough for some of us to simply have an outlet after work. Doing what I love makes me a better mother. People drain me. The world drains me. I get easily overwhelmed and overstimulated, so I need an awful lot of time alone to regroup from everyday stressors. One of my biggest fears during pregnancy was that I wouldn’t be able to handle work and parenthood at the same time. I knew that if I took a job I hated, my kid would suffer because I’d come home emotionally spent. Writing for a living is a dream come true to me. Well, if I’m writing whatever I want--that’s the real dream. You know why? Because that’s what I really love to do. I grew up voiceless. I grew up afraid. I have all of these issues from being on the spectrum, having borderline personality disorder, and being abused that I now have a lot to write about. I’m passionate about telling stories to help other people realize they aren’t so alone. Passionate about showing folks that it’s okay to be flawed and honest. Regardless of whatever else is going on, writing never actually burns me out. If anything, it fuels my fire. And it helps me cope much better with my life, which in turn allows me to be a more patient and even-tempered mother. I don’t actually write for the clicks. Sometimes, it’s hard to explain to people that I’m not putting on a show. My writing comes from me and my life experiences. Me and my own feelings. It’s not contrived to fit into a certain box for success, mostly because that would be really exhausting. I’m enough of a rebel and Aspie that any pretense gets really old really fast. I cover a wide variety of topics because I’m interested in a wide variety of issues. I publish stories according to my particular mood. 
Sure, I’m human and sometimes I think a certain story might get “big.” But that doesn’t actually change the way I write or what I write about. I’ve been doing this thing (writing online and earning money for it) since April 2018, and I still can’t predict which stories will pay well and which ones won’t. Some folks like to say that writers like myself only write for clicks. Lazy writing, clickbait headlines, and TMI details. But guess what--those people aren’t my audience. It’s no wonder they dislike my work when I’m making money writing about issues they don’t even understand. I write for my daughter. And I write for people like me. So, I’m a little bit awkward. I don’t do great with social cues. Some of it is the way I’m wired. And some of it is my upbringing. It’s hard for other people to understand the value of vulnerable writing if they never grew up being silenced and shut down at every turn. I can look back at my life and see every point where if I’d known better I would have made better choices. But I often didn’t know better because I thought I couldn’t speak up and use my voice. Even now, using my voice means shutting out all of my demons that say I have no right to tell my truths. It’s not about selling out and prospering because I somehow write juicy garbage the public likes. When I write, I do it for the people who have found themselves just as voiceless. I write for my daughter to let her know that there is no shame in our honesty. Writing for money doesn’t need to be a passion killer. A lot of people talk about how writing for the wrong reasons (like money) kills the passion. Again, there’s this backwards assumption that if you’re making money doing what you love it’s not really legit. In my case, writing for a living is the best job I’ve ever had. Do I think about how to write more and become better at what I do? Sure, that’s natural. But money doesn’t drive me to write anything I don’t want to write about. 
And money doesn’t change my mind about what I want to do. So, here’s what I want you to know. If you want to make a living as a writer, you’ve got to stop selling yourself short. Stop replaying the myth that only writers who sell their soul or stifle their passion can earn a decent living because that simply isn’t true. The world is full of people who feel stuck in jobs they hate so they can pay the bills. If you’re a writer who doesn’t love or even like your day job, there’s nothing wrong with making writing your day job. You don’t have to write shit you don’t believe in to make a living. But you do need to believe in something and write about that. Be passionate, be bold, and be brave. Don’t let the world ignore you or count you out. Get over the pain of being disliked by some people. People aren’t going to like it that you have the audacity to write and make money. They’ll feel better about themselves if they tell everybody else that you’re shady. That’s just something you’ve got to get past. It doesn’t make the critics right.
https://medium.com/honestly-yours/writing-for-money-isnt-what-you-think-54d664d0304d
['Shannon Ashley']
2019-09-10 16:04:18.030000+00:00
['Success', 'Creativity', 'Money', 'Life Lessons', 'Writing']
K-Means & Other Clustering Algorithms: A Quick Intro with Python
Clustering is the grouping of objects together so that objects belonging to the same group (cluster) are more similar to each other than to those in other groups (clusters). In this intro cluster analysis tutorial, we’ll check out a few algorithms in Python so you can get a basic understanding of the fundamentals of clustering on a real dataset.

The Dataset

For the clustering problem, we will use the famous Zachary’s Karate Club dataset. The story behind the data set is quite simple: there was a karate club that had an administrator, “John A”, and an instructor, “Mr. Hi” (both pseudonyms). A conflict arose between them, causing the students (nodes) to split into two groups: one that followed John and one that followed Mr. Hi.

Getting Started with Clustering in Python

But enough with the introductory talk, let’s get to the main reason you are here: the code itself. First of all, you need to install both the scikit-learn and networkx libraries to complete this tutorial. If you don’t know how, the links above should help you. Also, feel free to follow along by grabbing the source code for this tutorial over on Github.

Usually, the datasets that we want to examine are available in text form (JSON, Excel, a simple txt file, etc.), but in our case networkx provides it for us. Also, to compare our algorithms, we want the truth about the members (who followed whom), which unfortunately is not provided. But with these two lines of code, you will be able to load the data and store the truth (from now on we will refer to it as the ground truth):

# Load and store both the data and the ground truth of Zachary's Karate Club
G = nx.karate_club_graph()
groundTruth = [0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,1,0,
               0,1,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1]

The final step of the data preprocessing is to transform the graph into a matrix (the desirable input for our algorithms).
This is also quite simple:

def graphToEdgeMatrix(G):
    # Initialize edge matrix
    edgeMat = [[0 for x in range(len(G))] for y in range(len(G))]

    # Set each entry to 0 or 1 (diagonal elements are set to 1)
    for node in G:
        tempNeighList = G.neighbors(node)
        for neighbor in tempNeighList:
            edgeMat[node][neighbor] = 1
        edgeMat[node][node] = 1

    return edgeMat

Before we get going with the clustering techniques, I would like you to get a visualization of our data. So, let’s compile a simple function to do that:

def drawCommunities(G, partition, pos):
    # G is the graph in networkx form
    # partition is a dict containing info on the clusters
    # pos is based on the networkx spring layout (nx.spring_layout(G))

    # For separating the communities' colors
    dictList = defaultdict(list)
    for node, com in partition.items():
        dictList[com].append(node)

    # Get the number of communities
    size = len(set(partition.values()))

    # Assign a color to each community
    for i in range(size):
        amplifier = i % 3
        multi = (i / 3) * 0.3
        red = green = blue = 0

        if amplifier == 0:
            red = 0.1 + multi
        elif amplifier == 1:
            green = 0.1 + multi
        else:
            blue = 0.1 + multi

        # Draw the nodes
        nx.draw_networkx_nodes(G, pos,
                               nodelist=dictList[i],
                               node_color=[0.0 + red, 0.0 + green, 0.0 + blue],
                               node_size=500,
                               alpha=0.8)

    # Draw the edges and the final plot
    plt.title("Zachary's Karate Club")
    nx.draw_networkx_edges(G, pos, alpha=0.5)

What that function does is simply extract the number of clusters in our result and assign a different color to each of them (up to ten is fine for now) before plotting them.

Clustering Algorithms

Some clustering algorithms will cluster your data quite nicely and others will end up failing to do so. That is one of the main reasons why clustering is such a difficult problem. But don’t worry, we won’t let you drown in an ocean of choices. We’ll go through a few algorithms that are known to perform very well.

K-Means Clustering

(Interactive demo on the original post.)
Source: github.com/nitoyon/tech.nitoyon.com

K-means is considered by many the gold standard when it comes to clustering due to its simplicity and performance, and it’s the first one we’ll try out. When you have no idea at all what algorithm to use, K-means is usually the first choice. Bear in mind that K-means might under-perform sometimes due to its concept: spherical clusters that are separable in a way such that the mean value converges towards the cluster center. To simply construct and train a K-means model, use the following lines:

kmeans = cluster.KMeans(n_clusters=kClusters, n_init=200)
kmeans.fit(edgeMat)

# Transform our data to list form and store it in the results list
results.append(list(kmeans.labels_))

Agglomerative Clustering

The main idea behind agglomerative clustering is that each node starts in its own cluster, and recursively merges with the pair of clusters that minimally increases a given linkage distance. The main advantage of agglomerative clustering (and hierarchical clustering in general) is that you don’t need to specify the number of clusters. That, of course, comes with a price: performance. But, in scikit’s implementation, you can specify the number of clusters to assist the algorithm’s performance. To create and train an agglomerative model, use the following code:

agglomerative = cluster.AgglomerativeClustering(n_clusters=kClusters,
                                                linkage="ward")
agglomerative.fit(edgeMat)

# Transform our data to list form and store it in the results list
results.append(list(agglomerative.labels_))

Spectral

The spectral clustering technique applies clustering to a projection of the normalized Laplacian. When it comes to image clustering, spectral clustering works quite well.
See the next few lines of Python for all the magic:

spectral = cluster.SpectralClustering(n_clusters=kClusters,
                                      affinity="precomputed",
                                      n_init=200)
spectral.fit(edgeMat)

# Transform our data to list form and store it in the results list
results.append(list(spectral.labels_))

Affinity Propagation

Well, this one is a bit different. Unlike the previous algorithms, you can see that AF does not require the number of clusters to be determined before running the algorithm. AF performs really well on several computer vision and biology problems, such as clustering pictures of human faces and identifying regulated transcripts:

affinity = cluster.affinity_propagation(S=edgeMat, max_iter=200, damping=0.6)

# Transform our data to list form and store it in the results list
results.append(list(affinity[1]))

Metrics & Plotting

Well, it is time to choose which algorithm is more suitable for our data. A simple visualization of the result might work on small datasets, but imagine a graph with one thousand, or even ten thousand, nodes. That would be slightly chaotic for the human eye. So, let me show how to calculate the Adjusted Rand Score (ARS) and the Normalized Mutual Information (NMI):

# Append the results into lists
for x in results:
    nmiResults.append(normalized_mutual_info_score(groundTruth, x))
    arsResults.append(adjusted_rand_score(groundTruth, x))

If you’re unfamiliar with these metrics, here’s a quick explanation:

Normalized Mutual Information (NMI)

The mutual information of two random variables is a measure of the mutual dependence between the two variables. Normalized Mutual Information is a normalization of the Mutual Information (MI) score to scale the results between 0 (no mutual information) and 1 (perfect correlation). In other words, 0 means dissimilar and 1 means a perfect match.
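To make the NMI definition concrete, here is a from-scratch sketch in plain Python. It is a toy illustration, not the scikit-learn implementation: the function name is my own, and it uses the arithmetic-mean normalization, so values can differ slightly from normalized_mutual_info_score depending on that function's averaging method.

```python
import math
from collections import Counter

def nmi(labels_a, labels_b):
    """Normalized mutual information between two label assignments,
    normalized by the arithmetic mean of the two entropies."""
    n = len(labels_a)
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))

    # Mutual information: sum over the joint label distribution
    mi = sum((c / n) * math.log(c * n / (count_a[a] * count_b[b]))
             for (a, b), c in joint.items())

    # Entropies of each labeling, for normalization
    h_a = -sum((c / n) * math.log(c / n) for c in count_a.values())
    h_b = -sum((c / n) * math.log(c / n) for c in count_b.values())

    if h_a == 0 and h_b == 0:
        return 1.0  # both labelings are trivial (a single cluster each)
    return mi / ((h_a + h_b) / 2)
```

Identical labelings (even with the cluster ids renamed) score 1, while independent labelings score 0, which matches the intuition above.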
Adjusted Rand Score (ARS)

The Adjusted Rand Score, on the other hand, computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned to the same or different clusters in the predicted and true clusterings. If that’s a little weird to think about, bear in mind that, roughly, 0 corresponds to random labeling and 1 is a perfect match (the score can even dip below 0 for worse-than-random labelings).

So, to get a combination of these metrics (the NMI and ARS), we simply calculate the average value of their sum. And remember, the higher the number, the better the result.

Below, I have plotted the score evaluation so we can get a better understanding of our results. We could plot them in many ways, as points or as a straight line, but I think a bar chart is the better choice for our case. To do so, just use the following code:

# Code for plotting the results

# Average of NMI and ARS
y = [sum(x) / 2 for x in zip(nmiResults, arsResults)]
xlabels = ['Spectral', 'Agglomerative', 'Kmeans', 'Affinity Propagation']

fig = plt.figure()
ax = fig.add_subplot(111)

# Set parameters for plotting
ind = np.arange(len(y))
width = 0.35

# Create the bar chart and set the axis limits and titles
ax.bar(ind, y, width, color='blue', error_kw=dict(elinewidth=2, ecolor='red'))
ax.set_xlim(-width, len(ind) + width)
ax.set_ylim(0, 2)
ax.set_ylabel('Average Score (NMI, ARS)')
ax.set_title('Score Evaluation')

# Add the x labels to the chart
ax.set_xticks(ind + width / 2)
xtickNames = ax.set_xticklabels(xlabels)
plt.setp(xtickNames, fontsize=12)

# Add the actual value on top of each bar
for i, v in enumerate(y):
    ax.text(i, v, str(round(v, 2)), color='blue', fontweight='bold')

# Show the final plot
plt.show()

As you can see in the chart above, K-means and agglomerative clustering have the best results for our dataset (the best possible outcome). That, of course, does not mean that spectral clustering and AF are low-performing algorithms, just that they did not fit our data.

Well, that’s it for this one! Thanks for joining me in this clustering intro.
I hope you found some value in seeing how we can easily manipulate a public dataset and apply several different clustering algorithms in Python. Let me know if you have any questions in the comments below, and feel free to attach a clustering project you’ve experimented with!
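As a supplement to the metrics section above: the Adjusted Rand Score can also be computed from scratch with nothing but the standard library, via pair counting on the contingency table. This is a sketch for intuition (the function name is mine), not a replacement for sklearn's adjusted_rand_score:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index: the Rand index corrected for chance,
    computed by counting same-cluster pairs in each labeling."""
    n = len(labels_true)
    # Pairs placed together in both labelings (contingency-table cells)
    together_both = sum(comb(c, 2)
                        for c in Counter(zip(labels_true, labels_pred)).values())
    # Pairs placed together in each labeling separately
    together_true = sum(comb(c, 2) for c in Counter(labels_true).values())
    together_pred = sum(comb(c, 2) for c in Counter(labels_pred).values())

    expected = together_true * together_pred / comb(n, 2)  # chance level
    max_index = (together_true + together_pred) / 2
    if max_index == expected:
        return 1.0  # degenerate case, e.g. both labelings trivial
    return (together_both - expected) / (max_index - expected)
```

Like NMI, it is invariant to renaming the cluster ids, and unlike NMI it can go negative for labelings that agree less than chance would predict.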
https://medium.com/learndatasci/k-means-other-clustering-algorithms-a-quick-intro-with-python-438fc0deaef1
[]
2017-05-19 15:55:41.494000+00:00
['Machine Learning', 'Data Science', 'Clustering', 'Python', 'Algorithms']
“Am I going viral?” An AWS Data Engineering project.
With 2020 being a straight-up rubbish year, the need for respite has never been greater. Enter: Nathan Apodaca on TikTok. One man, skateboarding with some Ocean Spray juice to the sound of Fleetwood Mac’s Dreams. His video quickly went viral. And what was the effect? A lot of people became interested in Ocean Spray juice.

Interest in Ocean Spray. Source: Google Trends

It’s a sign of the times that someone with a skateboard, singing some Fleetwood Mac, can generate more interest than the marketing department of a multibillion-dollar company. I thought about this for some time: if interest in a company could come from anywhere, how do you detect it? My question really boiled down to this: How do I know if my company is going viral? This was the starting point for my project.

Data Engineering Project

Here’s what I did:

1. Use Tweepy to scrape the Twitter API to get tweets about a topic (i.e. the name of the company)
2. Publish to an AWS Kinesis stream
3. Analyse using AWS Kinesis Data Analytics
4. Send data to AWS DynamoDB and publish to AWS SNS if going viral

1. Using Tweepy to Scrape the Twitter API

My first step to scraping Twitter was signing up as a developer to use the Twitter API. To proceed, I needed to create a new project and get four things: a Consumer Key, a Consumer Secret, an Access Token, and an Access Token Secret. After getting these keys and tokens, it was time to use Tweepy. I chose to use Tweepy because it seemed intuitive and allowed me to get streaming data fast. Following Tweepy’s documentation, I created a StreamListener and started listening for tweets about a company.

2. Publish to AWS Kinesis

The next step is to publish this to AWS Kinesis. At the moment we’re only printing the tweets to the console. To send them through to AWS, we’ll make a POST request to API Gateway and use a Lambda to write the data into a Kinesis stream. We first set up a new API Gateway with a POST method.

Creating a New API

We also need a Kinesis stream for the tweets.
To write the tweets from the API Gateway to the Kinesis stream, we will use an AWS Lambda function. To make things easier, we’ll use boto3 and write this in Python. We also need to give our Lambda function the appropriate permissions to write to Kinesis. The code for this Lambda function can be seen below.

In terms of config, we’ll also add the mapping template application/json for our API Gateway. After this, we’re ready to deploy our API Gateway to a stage (I chose dev). Once deployed, AWS gives us a URL to make POST requests to. For the client, all that was needed was to update our Python client to make the POST request to our new URL.

3. Create an AWS Kinesis Data Analytics Application

Now that we have data being sent to AWS, our next goal is to perform some aggregation on the number of tweets that are arriving. To begin, I created an AWS Kinesis Data Analytics application. For this task we are looking to aggregate and count the number of tweets that come in during a period of time. There are three main components to this: a source, the analytics, and a destination. The goal of this step is to perform the aggregation and then send the aggregated result to DynamoDB (via Lambda).

Thinking about streaming data, we quickly run into the concept of windows. With streaming data we need to draw some line to determine when we begin aggregating. There are a couple of different windows we can use:

- tumbling — distinct time-boxed windows (say, every 30 minutes)
- sliding — windows that slide across the data according to a specified interval

For this project, I chose to use tumbling windows of 30 minutes. In order to aggregate and get the number of tweets coming into the stream, I needed to use the real-time data analytics tool. I wrote some SQL which output the results to a destination. Our Kinesis Data Analytics application is now outputting the number of tweets about a topic every 30 minutes. This data can now be connected to a destination, like a database.
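The tumbling-window behaviour described above can be illustrated in a few lines of plain Python. This is only a toy stand-in for the Kinesis Data Analytics aggregation (the function name is hypothetical), with timestamps given as epoch seconds:

```python
from collections import Counter

def tumbling_window_counts(timestamps, window_seconds=1800):
    """Assign each event to exactly one fixed, non-overlapping window
    (keyed by the window's start time) and count events per window."""
    counts = Counter()
    for ts in timestamps:
        counts[ts - (ts % window_seconds)] += 1
    return dict(counts)
```

With a 30-minute (1800-second) window, events at t = 0 and t = 100 land in the same bucket, while an event at t = 1900 lands in the next one; that is the defining property of a tumbling window, as opposed to a sliding one where buckets overlap.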
It’s worth collecting this data so it can be explored later. There are a number of database choices here. Ideally, this kind of data would be perfect for long-term trend analysis, so using Redshift seems the natural choice. For this use case, though, I’ll be using DynamoDB as it’s straightforward to set up and experiment with.

4. Setting up AWS DynamoDB

Our next step is to set up DynamoDB. I created the table “tweets_kinesis_dynamodb” with the partition key “row_id” and the sort key “row_timestamp”.

5. Setting up AWS SNS

In order to inform a user that their Twitter topic is trending, we use AWS SNS to deliver an email notification. SNS uses a publish/subscribe model where messages are published to different topics and each message is passed to the topic’s subscribers. To do this we need to set up a topic and a new subscription. In this subscription, we specify the email we want to receive the message at.

6. Connecting AWS Kinesis Data Analytics to DynamoDB and SNS using AWS Lambda

In Kinesis Data Analytics, we can set up a destination to deliver our aggregated tweet count. For this, we use a Lambda function that we invoke. Note: this Lambda needs IAM permissions for accessing DynamoDB and SNS. Whenever the Lambda is invoked, it puts the number of tweets into the DynamoDB table we set up before. In order to determine if our Twitter subject is currently going viral, we check if the number of tweets is above a certain threshold. In the Lambda above, this number is 20,000 (this of course may differ depending on the company).

Result

To test that our system is working properly, we can specify a topic on our Python client (step 1, line 18) and set a number in the Lambda (line 26) we just wrote. Ta da!
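The Lambda's viral check boils down to a per-window threshold comparison. A minimal plain-Python sketch of that decision (the function name is my own, and 20,000 mirrors the example threshold used above):

```python
VIRAL_THRESHOLD = 20_000  # tweets per 30-minute window; tune per company

def windows_gone_viral(window_counts, threshold=VIRAL_THRESHOLD):
    """Given {window_start: tweet_count}, return the window start times
    whose counts crossed the threshold and should trigger an SNS alert."""
    return sorted(w for w, c in window_counts.items() if c >= threshold)
```

In the real pipeline this comparison lives inside the Lambda, between the DynamoDB put and the SNS publish.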
https://benjaminarodgers.medium.com/am-i-going-viral-an-aws-data-engineering-project-a7efe8dec6a6
['Ben Rodgers']
2020-11-13 06:01:22.313000+00:00
['Data Engineering', 'AWS']
Your “Dream Job” is a Lie. Work can be crappy at times. And that’s…
At one point, I think we all knew “dream jobs” didn’t exist. You went to work, did the thing and then it was over. No giant expectations. Your identity didn’t hang on how successful you were from hour to hour or day to day. Now, that concept has faded. Our expectations for a phenomenal work-life show up in our conversations, online posts, and of course, television. “For the first time in my life, I’m doing work that I love to do every single day,” says character Ben Wyatt in the final season of Parks and Recreation. If you’re anything like me, you don’t watch that and think “Wow, what a nice career stage this Ben Wyatt fellow has found.” No, since characters are a reflection of self, you think “Hmmm. Why don’t I love my job every single day? Did I miss the boat somehow? All my friends seem pretty happy with what they do. Maybe I should start browsing through LinkedIn to see what else is out there. Am I too old? Too young? Too stupid?” If you’re feeling really desperate, you might even think: “What does this character do? Maybe I should do that as well.” Since fiction stories hack our brains, even the most intelligent person can forget: it’s all fake. Every single line of dialog is made up to drive fake stories of fake people. In this case, Ben offers the “I love what I do every day” line to encourage another character — April — who is looking for hope in her own career. It’s total fantasy. A line written to serve the purpose of entertainment. This becomes obvious two episodes later when Ben does NOT love his job. He spends an entire day attempting to get two signatures on a document. He claims, in his final request to the signing parties: “Not that it matters. I’m definitely going to wake up tomorrow morning with the same forms for you to sign. Because I’ve died somehow and now I’m a ghost living in purgatory.” A long fall from “I’m doing work that I love,” isn’t it?
https://medium.com/mind-cafe/the-dream-job-is-a-lie-c8c18f6e7212
['Todd Brison']
2020-05-08 14:39:31.631000+00:00
['Work', 'Self', 'Creativity', 'Culture', 'Entrepreneurship']
Writing Accents, Slang, Cultural Dialects, and Period Languages
Several times I have started reading a novel or story that featured a character with “non-standard” language. For example, a poor person in medieval times. Or a thick regional accent. Or a deep-south American dialect from a previous century. The author, in trying to accurately replicate the accent or language, used phonetic spelling of words that made it very difficult to read and understand. In these cases, I usually gave up and moved on to another book or story. The struggle to understand the language was not worth the effort. … Language is fluid. Should we try to capture it exactly as it was spoken at the expense of comprehension by the reader? Here are two examples of how drastically the English language has changed over the centuries. This is an excerpt from Canterbury Tales in 14th century English: Thanne longen folk to goon on pilgrimages, And palmeres for to seken straunge strondes, To ferne halwes, kowthe in sondry londes; And specially from every shires ende Of engelond to caunterbury they wende, This is mostly unreadable to modern English speakers. It would be foolish to write a novel set in the 14th century using 14th-century language like this. Few could understand it. Fast forward three centuries to the 17th century. The English language used at the time is closer to today’s but still difficult to follow. This is an excerpt from Dorothy Osborne’s Letters in 17th century English: I came down hither not half so well pleased as I went up, with an engagement upon me that I had little hope of ever shaking off, for I had made use of all the liberty my friends would allow me to preserve my own, and ‘twould not do; he was so weary of his, that he would part with’t upon any terms. As my last refuge I got my brother to go down with him to see his house, who, when he came back, made the relation I wished. 
He said the seat was as ill as so good a country would permit, and the house so ruined for want of living in’t, as it would ask a good proportion of time and money to make it fit for a woman to confine herself to. That was only three sentences! Again, total accuracy in period language would probably be a poor decision. But you probably do want to give a sense of the period through language. Word choice and phrasing can help with this. Language was often more formal in previous centuries. Writing “Your assessment is correct” instead of “You’re right” would give the reader a sense of a previous time yet still be easily understandable.
https://medium.com/mark-starlin-writes/writing-accents-slang-cultural-dialects-and-period-languages-a3b0c413876f
['Mark Starlin']
2018-12-09 17:05:33.239000+00:00
['Language', 'Essay', 'Historical Fiction', 'Creativity', 'Writing']
Get Started with React Navigation 5 in React Native
Navigator Components

There are three main navigators that React Navigation comes bundled with, all suitable for both iOS- and Android-based projects. These navigators and their respective packages are as follows.

Stack Navigator

@react-navigation/stack (official doc): The most vanilla navigator you’ll find, a stack navigator will navigate from screen to screen in a hierarchical fashion. To set up a stack navigator, declare a stack navigator object via the createStackNavigator method. From here, the Navigator and Screen components derived from this method can be used to embed and wrap your desired screens:

import { createStackNavigator } from '@react-navigation/stack'

const DashboardStack = createStackNavigator();

function Dashboard() {
  return (
    <DashboardStack.Navigator mode='modal' headerMode='none'>
      <DashboardStack.Screen name="Home" component={HomeScreen} />
      <DashboardStack.Screen name="Stats" component={StatsScreen} />
    </DashboardStack.Navigator>
  );
}

This pattern of declaring a new navigator and taking the resulting Navigator and Screen components is consistent throughout all the supplied navigators. As you can see, the component-based declaration differs a lot from the previous version, but is quite self-explanatory.

Notice the mode and headerMode props of DashboardStack.Navigator — these may be familiar to you if you have used React Navigation 4. Essentially, the navigationOptions properties are now represented as props, all of which have been documented here. In addition to this, each DashboardStack.Screen also supports an options prop to customise navigation options on a per-screen basis. Instead of defining headerMode='none' for the entire navigator, we could disable the header on individual screens using options:

<DashboardStack.Screen
  name="Home"
  component={HomeScreen}
  options={{ headerShown: false }}
/>

There are quite a few ways to manipulate the header through options — check out all the options properties here.
Bottom Tabs Navigator

@react-navigation/bottom-tabs (official doc): Another valuable navigator that will set up the boilerplate for a tab bar navigator, popular amongst dashboard designs. This tab bar navigator offers quite a comprehensive solution, allowing the developer to customise the look and feel of tabs with ease. The following gist demonstrates how to construct a bottom tabs navigator, with some customisation with SVGs and styling.

Again, the bulk of the syntax may look similar to the previous version of React Navigation, with a couple of key differences:

- We’re now configuring the tabs within a screenOptions prop of the navigator component, where the tabBarIcon property is returned.
- screenOptions now provides a route prop (more on that further down) that provides context on the currently active screen. This is used in your switch statement to determine the icon SVG.
- The tabBar options prop is also a part of screenOptions, allowing you to configure additional styling of the tab bar.

Each tab icon is embedded via JSX with a <BottomTabs.Screen /> component within the <BottomTabs.Navigator /> component. All the props and more screenOptions examples are documented at the official docs, and a dedicated tab navigation guide is available to read through.

Stack navigators can be nested within bottom tab navigators, but the bottom tabs UX will remain as you navigate through the stack hierarchy. This may be what you intend to do, but in most cases, displaying tabs within deep hierarchies is confusing UX. To get around this, you can link from one screen to a screen in another navigator, which we’ll cover further down.
Drawer Navigator

@react-navigation/drawer (official doc): Another useful navigator that allows screens to animate in from one side of the screen to be revealed, and animate back when closed. Stack navigators can be nested within these types of navigators to expand on the content within them. The Drawer Navigation documentation does a good job covering this navigator, so this article will not delve deeper into it, having covered the previous two navigators that demonstrate the same changes from the previous version. There are some useful props for the Navigator, however, beyond screenOptions, that will be commonly used, with openByDefault, drawerPosition, drawerType and minSwipeDistance being some of those.

For completeness, there are two more Android-focussed navigators based on Google’s Material theme design: createMaterialBottomTabNavigator and createMaterialTopTabNavigator. These navigators are typically managed by the organisation themselves on GitHub, and can be seen on their GitHub organisation page along with all the other utilities React Navigation offers.

Nesting Navigators

It is common to nest navigators within other navigators, and navigate from one to another. Consider the following setup where stack navigators are nested within a tab bar navigator:

// nesting navigators
const DashboardStack = createStackNavigator();

export const Dashboard = () => {
  return (
    <DashboardStack.Navigator>
      <DashboardStack.Screen name="Home" component={HomeScreen} />
      <DashboardStack.Screen name="Stats" component={StatsScreen} />
    </DashboardStack.Navigator>
  );
}

const BottomTabs = createBottomTabNavigator();

export const Tabs = () => {
  return (
    <BottomTabs.Navigator ... >
      <BottomTabs.Screen name="Dashboard" component={Dashboard} />
      <BottomTabs.Screen name="Settings" component={Schedule} />
    </BottomTabs.Navigator>
  );
}

What if we wanted to navigate from one tab to a particular stack screen in another tab?
Well, React Navigation provides a simple API to do so, using the navigate function within the navigation prop. Consider the following button:

// navigating through nested navigators
<Button
  label="Back Home"
  onPress={() => {
    navigation.navigate('Dashboard', { screen: 'Stats' });
  }}
/>

Upon pressing the above button, the tab bar navigator will switch to the Dashboard screen, and the stack navigator within that will navigate to the Stats screen. More detail on nesting navigators can be found here.

About the new `route` prop

Although the navigation prop is still used in v5, some of its properties have been delegated to a separate route prop, as a means to separate the current route's properties from the navigation context. The upgrading from v4 to v5 post has outlined these changes to the navigation prop. The route prop (at props.route) is now where the current screen’s data is hosted, which used to be in props.navigation.state. Concretely, navigation supplies you with methods and context to navigate through your NavigationContainer, whereas route contains data pertaining to the currently active screen. For more info on these props:

- Go to the navigation API reference, which lists all the properties of the object, including goBack, navigate, setParams, and more.
- Go to the route API reference, which details its properties, including name, key and params.

In regards to setting params when navigating, not much has changed. However, there is no more getParam method to fetch params. Instead, params are fetched from route.params in the following fashion:

// getting a param from `route`
const myParam = route.params?.myParam ?? 'defaultValue';

This is a more generic way of checking whether a param exists and assigning a default value if it does not, rendering the previous getParam utility somewhat obsolete.
Hooks in React Navigation 5

React Navigation 5 also introduces new hooks that make it easier to work with navigators within functional components, as well as put less (or no) reliance on HOCs. The useNavigation hook, for example, removes the need for the withNavigation HOC that was depended upon before, fetching the navigation object from the navigation container’s context:

// useNavigation hook
import { useNavigation } from '@react-navigation/native';

function MyComponent() {
  // get navigation without passing it down in props
  const navigation = useNavigation();
  ...
}

This totally replaces withNavigation, which is now a part of the compatibility layer of utilities. useRoute can also be used to obtain the route object discussed in the previous section. There are a few more hooks to be aware of when upgrading from v4:

- useFocusEffect: triggered when the screen in question is focussed. This can replace an event listener setup that you may have relied upon in previous versions.

Event listeners are still supported, but have been simplified. There are now two navigation events, focus and blur. Before, we had didFocus, willFocus, didBlur and willBlur to play with. focus can be implemented within useEffect like so:

let focusListener = null;

useEffect(() => {
  focusListener = navigation.addListener('focus', async () => {
    // do something
  });
  return (() => {
    // addListener now returns an unsubscribe function; call it on cleanup
    if (focusListener !== null) focusListener();
  });
}, []);

Notice that focusListener is null by default, as useEffect hooks are triggered after the component renders.
https://rossbulat.medium.com/getting-started-with-react-navigation-5-in-react-native-f82676294d2f
['Ross Bulat']
2020-04-30 13:24:28.075000+00:00
['React Native', 'Software Engineering', 'Programming', 'React', 'JavaScript']
Cluster Analysis With Iris Data Set
This article is about hands-on cluster analysis (an unsupervised machine learning technique) in R with the popular ‘Iris’ data set.

Let’s brush up some concepts from Wikipedia:

Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data in order to make predictions or decisions without being explicitly programmed to do so.

Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples.

Unsupervised learning is a type of machine learning that looks for previously undetected patterns in a data set with no pre-existing labels and with a minimum of human supervision.

Cluster analysis, or clustering, is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).

About the Iris Data Set

The Iris flower data set was introduced by the British statistician and biologist Ronald Fisher in his 1936 paper “The use of multiple measurements in taxonomic problems”. It is perhaps the best-known database in the pattern recognition literature. The data set gives the measurements in centimetres of the variables sepal length and width and petal length and width, respectively, for 50 flowers from each of 3 species of iris. The species are Iris setosa, versicolor, and virginica.

So, let’s start now! You may like to download the Iris data set & the R script from my github repository.
Hope you have R & RStudio installed for a hands-on experience with me :)

Objective
The objective is to segment the iris data (without labels) into clusters 1, 2 & 3 by k-means clustering, and compare these clusters with the actual species clusters: setosa, versicolor, and virginica.

Install and Load R Packages
'tidyverse', 'cluster' and 'reshape2' are the three R packages required here. Install them if you haven't already, then load them with the library function.

install.packages("tidyverse") # for data work & visualization
install.packages("cluster")   # for cluster modeling
install.packages("reshape2")  # for melting data
# note: not required if already installed
library(tidyverse)
library(cluster)
library(reshape2)

Import the Iris Data Set
We can import from disk after setting the working directory where the csv file is:

setwd("E:/my_folder/work_folder")
mydata <- read.csv("iris.csv")

Or get it from the built-in R datasets:

mydata <- iris

Explore the Data Set
With the functions below we can inspect the data set before exploring it further:

glimpse(mydata)
head(mydata)
View(mydata)

Let's visualize the data now with ggplot2.

Sepal-Length vs. Sepal-Width
ggplot(mydata)+
  geom_point(aes(x = Sepal.Length, y = Sepal.Width), stroke = 2)+
  facet_wrap(~ Species)+
  labs(x = 'Sepal Length', y = 'Sepal Width')+
  theme_bw()

Petal-Length vs. Petal-Width
ggplot(mydata)+
  geom_point(aes(x = Petal.Length, y = Petal.Width), stroke = 2)+
  facet_wrap(~ Species)+
  labs(x = 'Petal Length', y = 'Petal Width')+
  theme_bw()

Sepal-Length vs. Petal-Length
ggplot(mydata)+
  geom_point(aes(x = Sepal.Length, y = Petal.Length), stroke = 2)+
  facet_wrap(~ Species)+
  labs(x = 'Sepal Length', y = 'Petal Length')+
  theme_bw()

Sepal-Width vs. Petal-Width
ggplot(mydata)+
  geom_point(aes(x = Sepal.Width, y = Petal.Width), stroke = 2)+
  facet_wrap(~ Species)+
  labs(x = 'Sepal Width', y = 'Petal Width')+
  theme_bw()

Box plots
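The excerpt breaks off at the box plots, just before the clustering step the objective announces. A minimal sketch of that step in R (the seed, the column selection and the nstart value are my choices, not from the original):

```r
# k-means with 3 centers on the four measurement columns (species label dropped)
set.seed(123)                                  # reproducible cluster assignments
km <- kmeans(mydata[, 1:4], centers = 3, nstart = 25)

# Cross-tabulate the found clusters against the actual species
table(Cluster = km$cluster, Species = mydata$Species)
```

The cross-tabulation is what lets us compare clusters 1, 2 & 3 with setosa, versicolor, and virginica as the objective describes.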
https://medium.com/swlh/cluster-analysis-with-iris-data-set-a7c4dd5f5d0
['Ahmed Yahya Khaled']
2020-08-28 22:35:45.116000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'K Means Clustering', 'Clustering']
JavaScript Frameworks, Performance Comparison 2020
Group 4 — Standard Performance This is the biggest group and consists of some of the most popular libraries, including many of the corporate-backed ones from the likes of Facebook, Google, eBay, and Alibaba. These are the libraries that either have less of a focus on performance or highlight performance in one area but suffer in others. Group 4 Performance There is a lot of red and orange here, but keep in mind these libraries are on average only around 2x slower than the painfully handcrafted imperative Vanilla JavaScript example we are using here. How different is 400ms from 200ms? In raw performance, React is the leader of this pack. It never ceases to amaze, given how different the architectures are, how close React, Marko, Angular, and Ember are here overall. Still, it's React, and specifically the React Hooks implementation, that is the leader here. For all those pointing at extra function creations and holding on to classes, the performance argument is not on your side. React Hooks are the most performant way to use React. Most libraries here either have naive list sorting, which leads to really poor swap-row performance, or they have expensive creation costs. Ember is an extreme case of this, as it has much better update performance than the rest of this group but creation costs among the worst. The slowest libraries (Knockout, Ractive, and Alpine) are all fine-grained reactive libraries with similar architecture. Knockout and Ractive (the latter also by Rich Harris, the author of Svelte) are from the early 2010s, before VDOM library dominance. I also doubt Alpine was ever expecting to render 10k rows with its sprinkle-of-JavaScript approach. We won't see another pure fine-grained reactive library until much later in the comparison. Next, we will compare startup metrics, a category largely based on the libraries' bundle size. Group 4 Startup The order changes a good amount here.
Where Alpine has the worst performance, we can see it has the smallest bundle size and quickest startup time of the bunch. Marko (from eBay) is right behind it, followed by Rax (from Alibaba). All 3 of these libraries are built primarily for server-side rendering with lighter client interaction. It's largely why they are in Group 4 for performance but lead startup here. The latter half of the table holds some of the largest bundles we have in the benchmark, ending with Ember, which is more than 2x the size of any other implementation. I don't know why it takes more than half a megabyte to render this table, but it really hurts startup performance. The last category we will look at is memory consumption. Group 4 Memory Memory tends to reflect the patterns we've already seen, as it has a big impact on performance and larger libraries tend to use more of it. Alpine, Marko, and React lead the way. The ageing fine-grained reactive libraries use the most, leading up to Ember. Ember is just gigantic. It is already using more memory than Vanilla will use throughout the whole suite after rendering just 6 buttons to the page. Group 4 Results In general, this group represents over 300k stars on GitHub and probably the largest portion of npm downloads, but it's Marko and Alpine that place the highest on average ranking in this crowd. React is 3rd after them, holding its own, especially in performance. This is the group where we have frameworks of titanic proportions and where our old reactive libraries have gone to die. Let's move on to something a little more optimistic.
https://medium.com/javascript-in-plain-english/javascript-frameworks-performance-comparison-2020-cd881ac21fce
['Ryan Carniato']
2020-12-21 23:30:18.192000+00:00
['React', 'Vuejs', 'Web Development', 'JavaScript', 'Software Development']
Geodesics: Paths of Shortest Distance
Everyone knows that the path of shortest distance on a sheet of paper (or, in general, a flat plane) is a straight line. What if the paper were curved? What if you want to find the shortest distance from one point on Earth to another? Finding the shortest distance on curved surfaces is not as simple as one might think. In this article, we will look into how we can find geodesics, which are the paths of shortest distance, on curved surfaces. Geodesics on a Sphere One way to develop an intuition for how the shortest paths may look is to imagine the geometry of the object. In this case, the sphere is a geometry we are all familiar with. A key point is that travelling laterally by the same angle covers less distance when we are closer to the poles. This result will prove important in understanding why the geodesic path looks the way it does. Furthermore, we should always bear in mind that we are travelling along the surface of the object, and not through the object or in a parabolic manner over it. Imagine this as walking along the surface. Now we can get into how to develop the framework for this analysis. We know from Pythagoras' theorem that the sum of squares of the distances in orthogonal coordinates gives the square of the overall 'hypotenuse' distance from one point to another. Written in infinitesimal form, this gives the line element, and from this formula for the infinitesimal distance we can deduce the formula for arclength. We know that representing a sphere in Cartesian coordinates makes the calculations very cumbersome, so we switch to spherical coordinates, where the infinitesimal distance takes a simpler form. One can derive this in two ways: represent x, y and z in terms of r, theta and phi, find the derivatives and use the chain rule to convert. The other, simpler way is to merely observe the relationship between the spherical coordinates and Cartesian coordinates. The terms then become obvious.
Also, in our case, we assume there is no change in the radius (we are, after all, confined to the surface of the sphere). So the differential simplifies, and so does the arclength. The trick to approaching the resulting integral is to seek a solution with phi expressed as a function of theta, so that the arclength becomes a single integral over theta. Solving the Integral If you have taken some calculus courses at university, you will remember the formula for arclength and will have solved it for some parameterized curve given by your teacher. However, this problem of finding the optimal curve that minimizes the integral is much harder. It involves the use of calculus of variations. Calculus of variations essentially looks at optimization (extremum) problems and finds the optimal function that extremizes a given functional. An important concept is that of a functional. Think of a functional as a function with parameters that you vary. Varying these parameters gives you different functions (although of the same form), and our goal is to find the one (or few) parameter choices that give the extreme value of the functional. Without getting into too much detail, the extremum of the functional can be found by solving the Euler-Lagrange equations. The Euler-Lagrange equations are a set of equations that result from applying the Fundamental Lemma of the Calculus of Variations to integrals such as the one we have. Here, L is called the Lagrangian, and it is essentially the integrand of the integral that needs to be extremized. So now we can apply these equations to our problem (letting our sphere be a unit sphere, without loss of generality). Differentiating, rearranging into an integral and solving, we obtain a solution in which a and phi are constants determined by the initial and final points. Although this solution may look complicated, it actually gives the formula of a great circle on a sphere.
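The displayed equations in this derivation appeared as images in the original post. A hedged reconstruction of the main steps on the unit sphere (these are the standard results the text describes):

```latex
% Pythagoras and its infinitesimal form, hence arclength
ds^2 = dx^2 + dy^2 + dz^2, \qquad S = \int ds
% line element in spherical coordinates
ds^2 = dr^2 + r^2\, d\theta^2 + r^2 \sin^2\theta\, d\varphi^2
% unit sphere (dr = 0,\ r = 1), with the ansatz \varphi = \varphi(\theta):
S = \int \sqrt{1 + \sin^2\theta\, \varphi'(\theta)^2}\; d\theta
% L has no explicit \varphi dependence, so Euler-Lagrange gives a conserved quantity
\frac{\partial L}{\partial \varphi'}
  = \frac{\sin^2\theta\, \varphi'}{\sqrt{1 + \sin^2\theta\, \varphi'^2}} = c
% solving for \varphi' and integrating yields a great circle
\cot\theta = a \cos(\varphi - \varphi_0)
```

The last line is the equation of the intersection of the sphere with a plane through the origin, i.e. a great circle, with a and the phase constant fixed by the endpoints.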
It is a well-known fact that great circles are the shortest paths between two points on a sphere. Generalization The example we saw was a very simple derivation of a geodesic path on a sphere. For more complicated geometries, it is more useful to turn to differential geometry. In differential geometry, we denote the relationship between the distance and the coordinate system by the metric tensor. Note that the Einstein summation convention is applied, so the terms are summed over repeated indices. In this general curved space, we can represent the arclength of a curve parameterized with t. It turns out that instead of minimizing this functional, we can minimize the 'energy' functional. Now, the equivalent of the Euler-Lagrange equations for problems in differential geometry is the geodesic equations, in which the Gamma terms are known as the Christoffel symbols. They essentially contain information about how the basis of the coordinates changes with respect to other coordinates at different points. In Euclidean space, we can show that the Christoffel symbols vanish; in curved space, however, their definitions are more complicated. Conclusion Solving the geodesic equation is no mean feat, and it is very tedious even for the simplest of geometries, due to the number of terms and permutations over indices. However, it is sometimes nice to see how concepts can be generalized to be able to consider several other cases. Do look forward to my post about solving the geodesic equations.
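The generalization above also lost its displayed equations to images. A reconstruction of the standard objects it describes (the sign and index conventions here are the usual ones, assumed rather than taken from the original):

```latex
% metric tensor (Einstein summation over repeated indices)
ds^2 = g_{\mu\nu}\, dx^\mu dx^\nu
% arclength of a curve x(t), and the equivalent "energy" functional
S = \int \sqrt{g_{\mu\nu}\, \dot{x}^\mu \dot{x}^\nu}\; dt, \qquad
E = \frac{1}{2} \int g_{\mu\nu}\, \dot{x}^\mu \dot{x}^\nu\; dt
% geodesic equations
\ddot{x}^\lambda + \Gamma^\lambda_{\mu\nu}\, \dot{x}^\mu \dot{x}^\nu = 0
% Christoffel symbols in terms of the metric; in Euclidean (Cartesian)
% coordinates the metric is constant, so these all vanish
\Gamma^\lambda_{\mu\nu} = \frac{1}{2} g^{\lambda\sigma}
  \left( \partial_\mu g_{\nu\sigma} + \partial_\nu g_{\mu\sigma}
       - \partial_\sigma g_{\mu\nu} \right)
```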
https://medium.com/engineer-quant/geodesics-paths-of-shortest-distance-16b8e264abf7
['Vivek Palaniappan']
2019-04-16 10:59:10.139000+00:00
['Geometry', 'Artificial Intelligence', 'Mathematics', 'Physics', 'Engineering']
What Apple Search Engine May Look Like
Photo by Markus Winkler on Unsplash Ever since Apple took a firm stand against advertising without consent in iOS 14, its stance against the current ad giant Facebook has been conspicuous. However, the same cannot be said about Google. To evaluate Apple's chances in the search arena, one must evaluate the motivations of both companies. The paths are intersecting, just now: Apple's relationship with Google isn't exactly that of competitors. They are codependent. Google has paid Apple $8-$12 billion to keep Google as the default search engine on iOS. There is an antitrust lawsuit already that could bring harm to both of them, not just Google. Imagine, $8-$12B just to keep it the default, not to eliminate others. That's the value Google ascribes to its search business. That's also the value it ascribes to being on every platform, because it doesn't have a strong platform of its own. Android is its licensed OS, and it has to rely on hardware partners to keep its roots strong among mobile users. Google is quite active in the iOS ecosystem — something evident from the fact that its apps heavily occupy the App Store. Google Drive, Gmail, Maps, Chrome, Google Photos, Google Translate, YouTube — these are just a few big names. They signify areas where Google is already dominating but needs iOS users not to fall out of line. Google dominates on the App Store like it has to. Apple peeks into the Play Store as if it needs it, but hasn't figured out exactly how. On the other hand, Apple apps on the Google Play Store are minuscule in number: Apple Music, Apple TV, Move to iOS. They signify areas where Apple is trying to gain a foothold. It's far from winning in those yet. From a premium hardware company, it now wants to transform itself into a services company. But it lacks the advertising backbone that Google developed during its evolution along with the rest of the web.
Apple has notoriously harassed app developers that could compete with its line of business. It rejected Spotify app updates (a competitor to iTunes Music) for circumventing the 30% cut on in-app purchases, leading Spotify to file an EU complaint. Yet Google Maps, a clear winner in the map race since Apple's mapping gaffe, still dominates on iOS despite the fact that Apple Maps has improved much, and doesn't track users. Despite their fierce competition in the smartphone market, the Apple-Google relationship isn't bitter, to be exact. Their territories have been quite disparate, despite intersecting often. It is only now that their paths are intersecting like never before. It might get a lot more interesting. The Apple search engine may be quite different from Google's: not the opposite, but different. For one, the Apple search engine would be privacy-focused. This means not tweaking search results and rankings based on user profiles. That has always been its initial promise. Any lawsuit in that direction may not only break it financially; it's also too big to afford the negative publicity fallout. Tim Cook once said: Our business model is very straightforward: We sell great products. We don't build a profile based on your email content or web browsing habits to sell to advertisers. We don't "monetize" the information you store on your iPhone or in iCloud. And we don't read your email or your messages to get information to market to you. In order to rely less on who is searching, Apple must rely solely on what is being searched. Not personalizing results means a lot of things: Do not collect users' data. If you collect it, do not let it leave their devices. If you send it to your servers, encrypt it on the way, in a guaranteed end-to-end manner — the lack of which got Zoom in trouble in the recent past.
Use personalization for relevant results, not ad-profiling. Google's secret weapon in search: During an ordinary search user's entire life, searching is all about the best shoes/clothes/laptops/dating/diapers/clinics/schools/restaurants/tv-shows/fun videos. This consumer base forms the biggest chunk of any search firm's advertising revenue. But the search business goes far beyond the average search user. It's the collective data about quality (non-layman) searches and their overall trend. When analyzed with fine-tuned machine learning models on powerful GPUs, this extra volume, small in quantity, becomes much bigger in its future value. In fact, it is the most hidden weapon Google possesses, and one that is rarely talked about. Besides its gigantic repo of users' personal data, Google also has a huge repository of academic research. Google Scholar and Google Books are areas that aren't monetized yet, but they definitely give it a far more insightful peek into where knowledge is headed. They might not become products in themselves, but they might spawn off a lot of Google products simply because of the information density they capture compared to an old-school, SEO-heavy website. For example, if searches in Google Scholar trend toward a specific gene technology, Google could preemptively buy or invest in a company that leverages it before it has made its VC round. It would be foolish to think Google may not be using any of its search data to power its AI research arm. Since that research is open, the world benefits from it. And Google also receives back what it gives away: the contribution of the open-source community. Enter open source, where Google is the biggest contributor that brought us the modern web + digital Gen Z.
Compare it to other giants (Microsoft, Apple, and Amazon), who only resorted to open source when they couldn't do without it. As the most regarded company in this space, this is where Google gets a peek into software developers' brains, and how they shape & shake the world. Microsoft lately made a preemptive assault by acquiring GitHub, but no one understands open source today better than Google. This means that to even compete with Google search, Apple must work harder. It must keep personal data out of the equation and still deliver more relevant results than Google across all demographics. Where Apple could make a compelling case: Google search has lost a lot of relevance in the recent past, thanks to its aggressive ad-based + AMP-focused search rankings. For anyone who is unaware, AMP is Google's own framework for website makers that makes it easy for Google to crawl their content. In exchange for AMP compatibility, website owners get ranked higher than their non-AMP counterparts. But in general, AMP benefits Google more: it saves billions of dollars on crawling, which consumes less power because of the known website format. It also displays AMP website content inline, so the user doesn't need to leave Google. This earns Google better retention (which again means more user-click data) but sends less inbound traffic to website owners. Users are annoyed, too. About a decade ago, Google Chrome was a breeze in a market dominated by the infamous Internet Explorer. Today, the search experience on Google Chrome is far from the best: you are bombarded with half-page results from the same AMP-compliant site, and you must go beyond Page 3 (pun intended) to get relevant results. Upon visiting a site, notification plugins and GDPR notices punch you in the eye.
While the last two are not part of the search experience, those are changes brought about directly or indirectly by Google itself. Will Apple Buy DuckDuckGo? For obvious reasons, webmasters are frowning at Google. They are dying to see a sizeable competitor. DuckDuckGo, the most promising David in the race, holds only 0.45% of the global market share against the Goliath Google (92.5%), but it checks a lot of privacy-focused boxes: it doesn't collect personal data (no sign-ins), and there is no geo-tracking and no IP tracking either. There has been widespread speculation about Apple considering buying DuckDuckGo. While Apple could gain an early and easier foothold by acquiring an established web search player like DuckDuckGo, there are two reasons that may not happen: Apple is already crawling the web with Applebot — note that this is the same technology that powers Siri search rankings. And buying a web search engine would establish Apple as someone who needs the web to succeed. This is something Apple as a company has always distanced itself from doing. If Apple did go for a buy, it would be more to thwart long-term competition. Again, I do not see much of a match besides the privacy bandwagon. It's difficult for Apple to go the web route when it already dominates the market region of its own creation: the devices. The device-centered search: Anytime you issue a command to Siri, it presents choices. Those choices are ranked not by ads, but by what is already there on the device. If you ask your iPhone's Siri to make a reservation, there is a possibility that it might bring up website results for nearby restaurants. But if you already have a restaurant reservation app (e.g. Yelp), chances are it will offer it as the top choice. The web results will follow. This has two distinct advantages: Your intent to find a restaurant doesn't necessarily leave the device. Your privacy is intact. As of now, app makers (e.g.
Yelp in the above example) don't have to pay Apple an extra dime to make themselves visible as the top choice. This means that Apple has a potentially stronger offering for both its users and suppliers than web search providers. Its success will depend solely upon how strongly Siri aligns itself with user intents, and how app developers can leverage this to their advantage. That's where Apple still lags behind. In a recent test conducted by Apple aficionado site 9to5Mac, Apple's Siri lagged behind Google Assistant in its effectiveness at answering user queries. Yet the test still notes one thing: Siri wins on interactivity. That's a clear verdict that juxtaposes Apple's experiences against Google's stockpile of information. How the Apple Search Engine Will Act: That last fact can be the first step Apple must take in its search-war journey. At WWDC 2020, Tim Cook emphasized iOS 14's capability to do more with data on the device. A better choice than those bytes leaving for a remote server to be analyzed at leisure by the hungry eyes of data scientists. In the context of search, this means that users' requests will be fully analyzed with the help of a neural engine present on the device itself. Whatever operation web search engines perform on collective search data will be performed on individual searches by the device processors. It could still happen that the ultimate outcome of the process is sent back to Apple servers. But this will be mainly to enable a feedback loop: to measure the efficacy of its systems (and update Search with every iOS update) rather than listening to every slang and abusive word frustrated users hurl at Siri. This way, without competing with Google on its own turf (the web), Apple could streamline its own traffic. Keeping searches for its own services within the Siri system could be a serious advantage for Apple.
That, combined with removing Google as the default search engine in Safari, could let Apple make a sizable hole in the sky dominated by Google. Apple's Siri-based search could also be leveraged in services like Apple TV and iTunes Music, to fuel its fledgling media arms. By offering better discoverability options, Apple could come up with a win-win for itself + content creators — fixing a mistake it committed in the early App Store search. With its latest M1 chip, Apple is in a stronger position to dominate on-device neural processing on desktops too. This will force players like Google and Bing to either empower their browsers (which will again be thwarted by Apple, citing privacy) or spread themselves onto devices of their own, such as Amazon's Alexa. Stronger hardware is Apple's trump card to compensate for its lack of data. Browser-based search had its time of glory. That time might change soon. With the emergence of Gen Z, barring academic research, search might completely move to mobile. Following that route, Apple still must gain escape velocity to reach the inflection point, because it does not own the web. But with all of its pitfalls, that web still exists. With all its on-device intelligence, Apple's search can't be a sizable challenger by simply offering Apple's own services in a monopolistic manner, an approach it has tried before and failed at. Final Thoughts: It is not clear when Apple's search will roll out. It is also not clear how much it may alter the search business equation. It might not alter the market in a disruptive manner anytime soon. But it could create a search business category of its own. All the outcomes will depend upon the execution. If Apple has learned anything from its Maps experience, it might alter the search game forever.
https://medium.com/swlh/what-apple-search-engine-may-look-like-10e04e572b59
['Pen Magnet']
2020-12-24 23:34:35.021000+00:00
['Apple', 'Search Engines', 'Privacy', 'Google', 'Technology']
Are We Crazy For Wanting to Be Writers?
I’ve always dreamed big dreams. No matter what I’ve tried pursuing, I’m always confident that I’m going to succeed. So far, I’ve never actually succeeded, but I keep believing it anyway because the alternative doesn’t make sense. Why would I attempt something that I was sure I was going to fail at, right? Someone recently left a comment on my article What My Day Looks Like as a Full-Time Writer saying: “I am curious as to how this will change as you grow and develop as a writer and an adult. Is your only source of income writing on Medium or do you also do freelance writing etc.?” Very confidently I said that Medium was my only source of (consistent) income. I also said that I’d like to self-publish short books, but most of all, I’d love to publish novels traditionally. It wasn’t until I wrote it that I realized how insane I sounded. Naive, even. I mean, how many other writers want to publish novels and make a living off of writing books? What makes me different? What are the chances of me actually succeeding? And am I an idiot for assuming that at some point I’m going to make $5,000 a month on Medium? Because I do assume that. I genuinely believe it. To me, all of these dreams aren’t a matter of if they’ll happen, but when they’ll happen. It’s not because I think my work is great. Nor am I saying that I don’t have doubts that I’ll ever succeed in this industry, nights when I want to curl up in bed and forget it all. I ask when and not if just because I’ve always known that I’m not going to quit. Giving up is an option so far from me it touches the sun. That’s why, even for something as insane as traditionally publishing a book, I don’t doubt that it’ll happen someday. Even if it’s twenty years from now. But am I crazy? Should I be setting more concrete goals? Should I think about finding a “real” job instead of writing articles on here? As I sat there, for the first time in my life, those were the doubts that ran through my head.
For the first time, I was genuinely afraid for my future. I got a glimpse into what “everyone else” thinks. For a moment, I thought, “This is why people judge those who pursue their dreams. Because they look fucking crazy.” I know what I look like now, to others. Like I’ve got my head stuck in the clouds. But after the fear dissolved, I thought: Is that so bad? I mean, this is what I want. It’s clear to me, now more than ever, that this is what I want. To write for a living. If you were to ask me what else I’d like to do other than write, I’d tell you to stop asking questions that have no answers, like what happens when an unstoppable force meets an immovable object? I know one thing: this is the only life I’m gonna get. This is the only chance I’m going to get to live and reach my goals. How can I not go after them? How can I not believe that I will achieve them, especially knowing that even though the odds are low, this shit is not impossible? I’m not on a mission to swim every inch of the ocean. I just want to write. If people have done it before me, then there’s nothing that says I can’t. For the first time, I realize maybe I am a little crazy, but I’d rather be crazy than be dictated by fear. If I can’t imagine doing anything else, then why would I do anything else? How could I let other people’s opinions stop me from living my life as I want to? Especially when most of those people wish they were doing what they loved, too. I’m going to continue believing. I’m going to keep assuming that I’m going to reach these goals. If you’re crazy like me, if you want to make a living by acting or writing or drawing, then keep being crazy. As Robin Williams once said: You’re only given a little spark of madness. You mustn’t lose it. Be dedicated, keep learning, and stay open to the opportunities around you. Don’t stop until you’ve gotten to where you want to be, because as Hugh Jackman said, you have to follow your gut even if it doesn’t make sense.
Making a full-time living by making up stories doesn’t make sense. Being in a movie that people watch on the big screen doesn’t make sense. Making music and singing in front of thousands of people doesn’t make sense. But logic is overrated. Let everyone else fit in and make sense, and you, my friend, keep being a little fucking absurd. I know what I love. I know what my gut is telling me, and we all know that when you don’t follow your gut, shit always goes wrong. Trust your gut, too. It doesn’t need to be complicated. I’m not saying you have to quit your job and risk losing a stable income. I’m saying that if you want to keep writing, then stick around no matter how many cracks in the road there are, and even if you can’t see the end for a long time. I’ll be here too, anyway. You’re not alone.
https://medium.com/itxy-writes/are-we-crazy-for-wanting-to-be-writers-96faef08f0b7
['Itxy Lopez']
2020-12-29 23:36:23.243000+00:00
['Advice', 'Writing Tips', 'Creativity', 'Success', 'Writing']
Christmas Songs Are Proof That No One Cares What You Write (Only How You Write)
One of a writer’s biggest fears is not being original. They don’t want to say what everyone else has already said. But the truth is, everything already has been said. So, why write at all? Because no one’s heard the things that have already been said in your voice, in your point of view, with your personal stories. That’s what makes stories unique: it’s not the lessons that are being shared that matter, but how those lessons are being shared. It’s why we read articles on productivity or self-help or relationships over and over again. We’ve heard all the advice before but we haven’t heard how person A says it compared to person B. (Plus, we all know we need to read or hear advice more than once for it to actually stick.)
https://medium.com/itxy-writes/christmas-songs-are-proof-that-no-one-cares-what-you-write-only-how-you-write-2505f134c2ee
['Itxy Lopez']
2020-12-23 05:43:53.009000+00:00
['Self', 'Advice', 'Writing Tips', 'Creativity', 'Writing']
Kepler.GL & Jupyter Notebooks: Geospatial Data Visualization with Uber’s opensource Kepler.GL
I love working in Jupyter Notebook, and the same functionality of Kepler.gl is available in a Jupyter Notebook environment. In this tutorial, I highlight how you can incorporate the Kepler.gl for Jupyter visualisation tool inside your notebook. The advantage of using Kepler in a Jupyter notebook is that you get both the flexibility of Jupyter Notebooks and Kepler's great visualisation tools.

Displaying Data in a Kepler Jupyter Notebook
The dataset we use for this tutorial comes from the NYC Open Data Portal. It contains all incidents reported in New York in 2018.

import pandas as pd
from keplergl import KeplerGl
import geopandas as gpd

df = gpd.read_file("NYPD_complaints_2018.csv")
df.head()

The first few rows of the dataset are below. Incident date, category and the coordinates of the incident place are among the columns available in this dataset. To plot your data with Kepler, you first need to create a map. Let us do that with just one line of code.

# Create a base map
map = KeplerGl(height=600, width=800)
# Show the map
map

The default map, with a dark base map, appears in the notebook. You can easily change that if you want.
https://towardsdatascience.com/kepler-gl-jupyter-notebooks-geospatial-data-visualization-with-ubers-opensource-kepler-gl-b1c2423d066f
[]
2020-05-10 10:59:57.840000+00:00
['Python', 'Jupyter Notebook', 'Geospatial', 'Data Visualization', 'Keplergl']
A non-invasive approach to find respiratory syndromes in infants: Part Two
This is Part Two of the blog about my final year project. If you haven't read Part One yet, you can find it here. At the end of Part One, we were left with a CSV file containing respiratory signals along with their corresponding respiratory rates. Now, we need to use this data to classify any respiratory syndromes that the infant may have. We do this with the help of a deep-learning model. The dataset (our CSV file) is split into training and testing data for the model. We had a total of five videos, and we got 100 respiratory signals from each video, so we used 400 samples (80%) for training and 100 samples (20%) for testing. The model has a total of four layers: an input layer, an output layer and two hidden layers. The input layer has 225 nodes, as each of our respiratory signals has 225 feature values. The output layer has 4 nodes, as we're classifying the signals into four classes, and the two hidden layers have 100 nodes each. We classified the signals into four classes: No Information, Bradypnea (slow breathing rate), Normal and Tachypnea (fast breathing rate). The normal breathing rate of an infant lies between 40 and 60 breaths per minute (bpm), so signals with a rate lower than 40 bpm were classified as Bradypnea, and those with a rate higher than 60 bpm were classified as Tachypnea. The model was compiled with the Adam optimizer and trained over 500 epochs, using categorical cross-entropy as the loss function. The model had an overall accuracy of 92%. Classification of diseases by the model We wrote a research paper on the project, which was accepted at the IEEE International Conference on Communication and Signal Processing (ICCSP) 2020. The paper will be published in their journal soon.
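The class boundaries described above are simple enough to sketch directly. This hypothetical helper (not part of the original project's code) maps a respiratory rate in bpm to the label the model is trained to predict:

```python
def classify_breathing_rate(rate_bpm):
    """Map an infant's respiratory rate (breaths per minute) to a class label.

    Thresholds follow the article: normal is 40-60 bpm; below that is
    Bradypnea (slow breathing) and above it is Tachypnea (fast breathing).
    None stands in for a signal with no usable information.
    """
    if rate_bpm is None:
        return "No Information"
    if rate_bpm < 40:
        return "Bradypnea"
    if rate_bpm <= 60:
        return "Normal"
    return "Tachypnea"

print(classify_breathing_rate(35))  # Bradypnea
print(classify_breathing_rate(50))  # Normal
print(classify_breathing_rate(70))  # Tachypnea
```

The deep-learning model, of course, predicts these labels from the 225 raw feature values rather than from a precomputed rate; the helper only makes the labelling rule explicit.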
https://medium.com/analytics-vidhya/a-non-invasive-approach-to-find-respiratory-syndromes-in-infants-part-two-2ff4a8a5b653
['Navaneeth S']
2020-09-01 06:12:46.831000+00:00
['Python', 'Respiratory Disease', 'Deep Learning', 'Matlab', 'Engineering']
A Simple Technique to Boost Creativity
A Simple Technique to Boost Creativity Open your mind to peripheral thinking Photo by Zulmaury Saavedra on Unsplash The idea of ‘peripheral’ thinking is this: instead of taking a torchlight to explore a darkened room, try taking a lamp. When there’s so much advice encouraging us to be endlessly more productive and more focused, it’s difficult to hang onto the idea that creativity often works best when the mind is relaxed. The peripheral mind allows light to spread outwards, not in a fine beam but in a broad illumination. In the right circumstances, when we use our peripheral mind, we let go of the targeted thought; in doing so, we replace the bull’s eye with a focus as wide as the sky. Thinking widely like this is useful to creativity because some of the best moments of creativity are the unplanned ones. They can’t always be designed — but they can be encouraged. For this to happen, the mind has to remain as open and supple as possible and be willing to appreciate the usefulness of unintended outcomes. It’s like the serendipitous discovery that happens when you go to a bookshop or an art gallery: intent on finding a specific work by a specific author, you get distracted by the book or artwork that sits next to it. If you are open to a chance encounter, a whole new pathway can spread out before you. A relaxed attitude and an open mind We use our peripheral senses all of the time. If you have ever become skilled at playing a sport or a musical instrument, you’ll know that point where the action occurs too fast for the conscious mind to deliberate over. You must rely on your peripheral senses to work in concert with your reflex behavior. Even as we walk down the street, our peripheral senses are perpetually at work, guiding us and helping us make decisions, largely on an unconscious level. Peripheral thinking inhabits a more flexible mode of thought than deliberate attention. The idea echoes principles found in the ancient Chinese philosophical tradition of Taoism. 
From Chapter 24 of the Tao Te Ching: ‘He who stands on tip-toe, does not stand firm; He who takes the longest stride, does not walk the fastest; He who does his own looking sees little; He who defines himself is not therefore distinct.’ In Taoism, the method of achieving “perfection” is in appreciating the nuance of opposites and learning to be at ease with the unplanned rhythms of the universe. Striving too hard leads to instability. It is about taking a wider view, of yourself and the world around you. Think of the idea you have of your own identity. It can be captured with specific concepts, but only partially: you might be a mother or a father, a daughter, or a son. You may also be a student, a writer, a doctor, a lawyer, or a politician. You might be a caring person, an eccentric person, happy or optimistic. But no matter how many concepts you might employ to picture yourself, they will only build an approximate image. To capture yourself fully, you must take a wider view. A view as wide as the horizon. Creative thinking is similar: forget the idea that you are trying to express yourself as a single entity. Instead, open up to multiple identities and let these various perspectives inform your output. The idea of peripheral thinking is perhaps best considered an attitude or temperament, one of graceful letting go. Ways to access the peripheral mind for creative thoughts Cultivate the art of diversion: be open to finding something more interesting than the thing you were looking for in the first place. Let go of your sense of self. Inhabit different identities and play around in those thoughts to see the world from multiple points of view. Don’t be afraid to let your thoughts slow down and wander off at wayward tangents. Be patient. The creative thought you are waiting for will always come — at the right time. Be happy to bend, like grass bending in the wind. It is better than snapping like a brittle branch. 
My name is Christopher P Jones and I’m an art historian, novelist, and the author of How to Read Paintings. (Click link for Kindle, Apple, Kobo, and other e-reader devices). Read more about my writing at my website. Would you like to get… A free guide to the Essential Styles in Western Art History, plus updates and exclusive news about me and my writing? Download for free here.
https://medium.com/the-shadow/a-simple-technique-to-boost-creativity-a9e1c8e6bbf
['Christopher P Jones']
2020-12-22 17:21:59.230000+00:00
['Mindfulness', 'Art', 'Inspiration', 'Music', 'Creativity']
Using Regex with Python
Photo by Ethan McArthur on Unsplash Python is a convenient language that’s often used for scripting, data science, and web development. In this article, we’ll look at how to use regex with Python to make finding text easier. Finding Patterns of Text with Regular Expressions Regular expressions, or regexes, are descriptions for a pattern of text. For instance, \d represents a single digit. We can combine characters to create regexes to search text. To use regexes to search for text, we have to import the re module and then create a regex object with a raw regex string as follows: import re phone_regex = re.compile(r'\d{3}-\d{3}-\d{4}') The code above has the regex to search for a North American phone number. Then if we have the following string: msg = 'Joe\'s phone number is 555-555-1212' We can look for the phone number inside msg with the regex object’s search method as follows: import re phone_regex = re.compile(r'\d{3}-\d{3}-\d{4}') msg = 'Joe\'s phone number is 555-555-1212' match = phone_regex.search(msg) When we inspect the match object, we see something like: <re.Match object; span=(22, 34), match='555-555-1212'> Then we can return a string representation of the match by calling the group method: phone = match.group() phone has the value '555-555-1212' . Grouping with Parentheses We can use parentheses to group different parts of the result into its own match entry. To do that with our phone number regex, we can write: phone_regex = re.compile(r'(\d{3})-(\d{3})-(\d{4})') Then when we call search , we can either get the whole search string, or individual match groups. group takes an integer that lets us get the parts that are matched by the groups. 
Therefore, we can rewrite our program to get the whole match and the individual parts of the phone number as follows: import re phone_regex = re.compile(r'(\d{3})-(\d{3})-(\d{4})') msg = 'Joe\'s phone number is 123-456-7890' match = phone_regex.search(msg) phone = match.group() area_code = match.group(1) exchange_code = match.group(2) station_code = match.group(3) In the code above, phone should be '123-456-7890' since we passed in nothing to group . Passing in 0 also returns the same thing. area_code should be '123' since we passed in 1 to group , which returns the first group match. exchange_code should be '456' since we passed in 2 to group , which returns the 2nd group match. Finally, station_code should be '7890' since we passed in 3 to group , which returns the 3rd group match. If we want to pass in parentheses or any other special character as a character of the pattern rather than a symbol for the regex, then we have to put a \ before it. Matching Multiple Groups with the Pipe We can use the | symbol, which is called a pipe, to match one of many expressions. For instance, we write the following to get the match: import re name_regex = re.compile('Jane|Joe') msg = 'Jane and Joe' match = name_regex.search(msg) match = match.group() match should be 'Jane' since this is the first match that’s found according to the regex. We can combine pipes and parentheses to find a part of a string. For example, we can write the following code: import re snow_regex = re.compile(r'snow(man|mobile|shoe)') msg = 'I am walking on a snowshoe' snow_match = snow_regex.search(msg) match = snow_match.group() group_match = snow_match.group(1) to get the whole match with match , which has the value 'snowshoe' . group_match should have the partial group match, which is 'shoe' . Photo by Touann Gatouillat Vergos on Unsplash Optional Matching with the Question Mark We can add a question mark to the end of a group, which makes the group optional for matching purposes. 
For example, we can write: import re snow_regex = re.compile(r'snow(shoe)?') msg = 'I am walking on a snowshoe' msg_2 = 'I am walking on snow' snow_match = snow_regex.search(msg) snow_match_2 = snow_regex.search(msg_2) Then snow_match.group() returns 'snowshoe' and snow_match.group(1) returns 'shoe' . Since the (shoe) group is optional, snow_match_2.group() returns 'snow' and snow_match_2.group(1) returns None . Conclusion We can use regexes to find patterns in strings. They’re denoted by a set of characters that defines a pattern. In Python, we can use the re module to create a regex object from a string. Then we can use it to do searches with the search method. We can define groups with parentheses. Once we've done that, we can call group on the match object returned by search . The group is returned when we pass in an integer to get it by its position. We can make groups optional with a question mark appended after the group. A note from Python In Plain English We are always interested in helping to promote quality content. If you have an article that you would like to submit to any of our publications, send us an email at [email protected] with your Medium username and we will get you added as a writer.
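One related method worth sketching, though not covered above: the same compiled pattern can collect every match in a string with findall instead of just the first one. A short example, with illustrative strings of my own:

```python
import re

# findall returns every non-overlapping match as a list of strings
phone_regex = re.compile(r'\d{3}-\d{3}-\d{4}')
msg = 'Call 555-555-1212 at work or 555-555-9999 at home'
numbers = phone_regex.findall(msg)
print(numbers)  # ['555-555-1212', '555-555-9999']

# Note: when the pattern contains groups, findall returns tuples of the
# group texts rather than whole-match strings.
grouped = re.compile(r'(\d{3})-(\d{3})-(\d{4})')
print(grouped.findall(msg))  # [('555', '555', '1212'), ('555', '555', '9999')]
```

The tuple-per-match behavior with grouped patterns surprises many first-time users, so it is worth keeping the grouped and ungrouped variants distinct.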
https://medium.com/python-in-plain-english/using-regex-with-python-d53631d79197
['John Au-Yeung']
2020-05-04 14:50:29.064000+00:00
['Python', 'Technology', 'Software Engineering', 'Programming', 'Software Development']
The Smarter Way of Asking for Programming Help
Before You Ask a Question Before you ask for help, make sure you’ve done your part of the research as well. Some people are so lazy — they don’t even think of doing some googling on their end. They simply open up a forum, say Stack Overflow or a Facebook group, and post their question. These can even be simple questions, such as “how do I start programming?” or “how do I install an npm package?” You may genuinely not understand how to do it, but you must make sure you put in an effort from your end — and then ask for help. Make sure you show the potential audience of your question that you’ve put an effort into finding a solution and failed. These are some steps you can take before asking others for help. Search online — the first thing you should do is google Read the documentation — make sure you read the official documentation and forums Ask a skilled friend — if the above two options don’t work, you can ask a skilled friend of yours If you haven’t been able to find a solution after these three steps, you can go ahead and ask your question. When you ask it, make it clear that you’ve done these things first. How to search online One common thing I’ve noticed among my peers is that they all search on Google before asking a friend for help. But what surprises me the most is when that friend then searches on Google and finds the solution. How can two people search on the same platform, yet only one of them find the answer? It comes down to the way each of them phrased the question. Make sure you include only the necessary phrases or terms in your search query. Suppose you want to know how to add a local image to your HTML. You could search “how to add images to my website” or “how to add a local image in HTML.” The latter question is better phrased, as it’s more specific and provides enough detail.
https://medium.com/better-programming/the-smarter-way-of-asking-for-programming-help-52cd140dc437
['Mahdhi Rezvi']
2020-05-05 17:54:01.405000+00:00
['Python', 'Programming', 'Software Development', 'JavaScript', 'Startup']
Understanding Clustering
Understanding Clustering Introduction to unsupervised learning Photo by Ian Baldwin on Unsplash Outline - Introduction to Clustering - Motivating Example - Common Clustering Algorithms - K-Means Clustering - Hierarchical Clustering - Density Based Clustering (DBSCAN) - Understanding K-Means Clustering - Choosing the optimal value for K - Limitations of K-Means Clustering Introduction to Clustering Clustering is a commonly used unsupervised learning algorithm that aims at dividing data points into groups/clusters based on some similarity measure. Data points in a cluster are more similar to those within the cluster than to data points in other clusters. Classification is a supervised learning problem where we know the class labels associated with the data points and, given a new data point, we check how well the model performs in assigning the data point to the correct class label; Clustering on the other hand is an unsupervised learning problem where the data points are not labelled but we’d like to group together the data points that are similar under a certain similarity measure. Given a dataset that we do not know anything about, clustering essentially helps in understanding the intrinsic grouping among the data points. Motivating Example A cake company wants to open a chain of stores across a city and wants to find the optimal store locations to maximize revenue. What would be the possible challenges? ✅Analyze the areas from where the cake is being ordered frequently. ✅Understand how many stores to open to cover the buyers in the area. ✅Figure out the locations for the stores within all these areas so as to keep the distance between the store and buyers small. Well, here’s a simple way to solve the above problem; It may not seem very practical but helps understand the intuition behind the approach. 
Initialize locations of cake stores to a certain number of random locations on the map Assign each buyer to the nearest cake store Update each store location to the average location of its assigned buyers Do you already see that this is just one of the many possible ways we could do this? That’s absolutely correct. This is just one of the valid hypotheses. In clustering, every possible hypothesis results in different but valid clusters. Customer Segmentation, Social network analysis, Biological data analysis are some applications where clustering is commonly used. Common Clustering Algorithms In this section we’ll look at some of the commonly used clustering algorithms. K-Means Clustering K-Means clustering aims at partitioning the unlabeled data into a distinct number of groups (K) where the grouping is done based on a similarity measure (distance). The primary idea is that the closer the data points are, the more likely they are to be similar and are therefore more likely to belong to the same cluster. We shall look at K-Means Clustering in greater detail in a subsequent section. Hierarchical Clustering Hierarchical clustering is a method of clustering that uses top-down (divisive) or bottom-up (agglomerative) approaches to cluster similar data points together. Agglomerative Clustering (Bottom-up Approach) Start by treating each observation as a separate cluster in itself. Identify the two clusters that are closest to each other Merge the two most similar clusters Repeat steps (2) and (3) until all the clusters are merged together. Here’s a simple illustration. Illustrating Agglomerative Clustering (Image Source) Divisive clustering (Top-down Approach) In divisive clustering (not very frequently used in practice), we do the following: Start with all points in a single cluster Split the cluster iteratively into smaller clusters until each cluster has only one data point in it. A simple illustration is shown below. 
Illustrating Divisive Clustering (Image Source) Density Based Clustering (DBSCAN) DBSCAN stands for Density Based Spatial Clustering of Applications with Noise. It’s a clustering method that takes two parameters epsilon ( eps ) and MinPoints for cluster analysis. eps — this essentially quantifies how close two points should be to each other to be considered a part of the cluster. MinPoints — the minimum number of points to form a dense region. For example, if we set the minPoints parameter as 3, then we need at least 3 points to form a dense region. These parameters should be chosen to be in their optimal value range. Understanding K-Means Clustering K-Means Clustering is a simple clustering algorithm that groups data points in an unlabeled dataset into a certain number (K) of coherent subsets called clusters. It finds K centers called cluster centroids, one for each cluster and hence the name K-Means Clustering. The algorithm finds the closest cluster to which a data point belongs based on distance metric such as Euclidean distance. The most commonly used distance metric is the square of the Euclidean distance between two points x and y in an m-dimensional space. 
Euclidean Distance (Image Source) Here, j is the jth dimension (or feature column) of the sample points x and y. We define cluster inertia, a quantity that the clustering algorithm aims to minimize in the Sum of Squared Errors (SSE) sense, as follows: Cluster Inertia (Image Source) Here, μ(j) is the centroid for cluster j, and w(i,j) = 1 if the sample x(i) is in cluster j and 0 otherwise. The K-Means algorithm essentially involves the following steps: Random initialization of K cluster centroids Cluster Assignment Updating Centroids The pseudocode for the algorithm is shown below Image by Author The following is a good illustration of K-Means Clustering: We initialize 4 random points as cluster centroids and follow the steps in the pseudocode above; we can see how the distances between all the data points and the cluster centroids are computed, each data point gets assigned to a cluster, the centroids get updated, and the process repeats until convergence. Choosing the optimal value for K As we see, K is a hyperparameter and we have to choose the optimal number of clusters K. But how do we do it? Well, here are a few methods to find the optimal value of K. Method 1: Run k-means multiple times, each time with different randomized initial centroids, and choose the set of values that leads to the minimum value of the distortion cost function, given by Distortion Cost Function (Image Source) where u_c is the cluster centroid. Smaller values of the cost function correspond to better initialization. This, however, can be very time-inefficient and computationally expensive. Method 2 — Elbow Method: Plot the values of K against the distortion cost function or inertia. Choose the value of K at which the graph elbows out or has a sharp turn. To determine the optimal number of clusters, we have to select the value of K at the “elbow”, that is, the point after which the distortion/inertia starts decreasing linearly. 
The figure below shows a case where, for the dataset used, the optimal number of clusters K turns out to be 3. Optimal K Value by Elbow Method (Image Source) Limitations of K-Means Clustering While K-Means is a simple and useful clustering algorithm, it has a few limitations: We need to specify the number of clusters (K is a hyperparameter) Results can change depending on the location of the initial centroids (random initialization) Not recommended if there are a lot of categorical variables (it depends on numeric values to compute the distances) Assumes that clusters are spherical, distinct and approximately equal in size Not suitable for clustering data where clusters are of varying sizes and density
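The assign-then-update loop described by the pseudocode can be turned into a minimal sketch. This toy implementation (my own, not from the article: one-dimensional points and fixed initial centroids instead of random ones, for reproducibility) shows the two steps of each iteration:

```python
def kmeans_1d(points, centroids, iterations=10):
    """Minimal 1-D K-Means: assign each point to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iterations):
        # Cluster assignment step: pick the centroid with the smallest
        # squared distance to each point
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda j: (p - centroids[j]) ** 2)
            clusters[nearest].append(p)
        # Centroid update step (keep the old centroid if a cluster is empty)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated groups of points; centroids converge to ~1.0 and ~8.0
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centroids, clusters = kmeans_1d(points, centroids=[0.0, 10.0])
print(centroids)
```

A production version would use vectorized distances over multi-dimensional data and random restarts (Method 1 above); the structure of the loop, though, is exactly the one the pseudocode describes.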
https://medium.com/women-who-code-silicon-valley/understanding-clustering-f3cc74577877
['Bala Priya C']
2020-12-20 12:05:45.749000+00:00
['Machine Learning', 'Unsupervised Learning', 'Women Who Code', 'Clustering', 'Python']
6 Ways Small Businesses Can Use Instagram Reels to Promote Their Products
6 Ways Small Businesses Can Use Instagram Reels to Promote Their Products The coolest new Instagram feature and the infinite creative marketing possibilities it offers Image by cottonbro on Pexels On August 5, 2020, Instagram launched its newest feature — Reels: a fun way to create and discover short, entertaining videos on the platform. This feature offers creators the option to create videos shorter than 30 seconds. These can have several clips with audio, effects, and a multitude of other creative tools. As with any new feature on any social media platform, Instagram has been promoting reels to a wide audience. The focus has shifted from pictures to videos, and the newest short-video format has been taking the platform by storm. This is why several writers, content creators, singers, and dancers have been taking to the platform to reach more users. Reels are also a great opportunity for small business owners to promote their products and services; as such, Instagram can be a powerful tool for them. This post discusses six creative ways people have used this feature to promote their business and the kinds of results they have achieved. It also discusses the creative freedom reels gives to content creators, and how you can apply these to your business to see amazing returns. Note: The small businesses mentioned here are based in different parts of the world and offer a variety of products and services. The post isn’t sponsored, nor have I received any free product in exchange for a review. I selected the posts based on the “top” section on Instagram, and among all the content posted by these accounts, I picked the ones with the highest engagement.
https://medium.com/better-marketing/6-ways-small-businesses-can-use-instagram-reels-to-promote-their-products-6849e584b7c8
['Anangsha Alammyan']
2020-12-24 16:36:47.174000+00:00
['Marketing', 'Business', 'Creativity', 'Social Media', 'Success']
Why Falling in Love With Being a Writer Feels Easier Than Actually Writing
Breakup With the Fantasy of “The Writer’s Life” “When I arrived in Paris in 1948 I didn’t know a word of French. I didn’t know anyone and I didn’t want to know anyone. Later, when I’d encountered other Americans, I began to avoid them because they had more money than I did and I didn’t want to feel like a freeloader. The forty dollars I came with, I recall, lasted me two or three days. Borrowing money whenever I could — often at the last minute — I moved from one hotel to another, not knowing what was going to happen to me.” — James Baldwin, The Paris Review You might believe this struggle is beautiful, set to the backdrop of Paris. But it’s not when you’re actually living it. Literature and entertainment have hypnotized many a writer with the romanticism of the artist’s life. The ratty foreign hotel room, the cigarette butts, the empty wine bottles — you get it. Hemingway, Baldwin, Wilde, and other writers took residence in Paris once in their careers. Never mind that many of them were barely making it, working odd jobs to stay afloat. There’s something about writers living abroad, renting rooms, and working in seclusion that sounds seductive to us. I’ll give you a clue as to why. It’s not about the location or the bottles of Pinot. Or even how you think their lives are shaped by their experiences. It’s striving for something in the face of uncertainty, loneliness, and financial ruin. They, like anyone in movies with a great dream, are truly going for it. That’s the romantic part—the work. Because, sadly, people rarely ever give their all to what they want. The actual dream is your act of creating at this moment. It has nothing to do with some foreign destination or moment of arrival. But becoming a writer has everything to do with the process and how you feel about writing — doing the hard work. Telling the story of a writer who made it, in retrospect, can’t help but sound cinematic. I’m sure I don’t have to tell you this, but I will anyway. 
You require nothing outside of yourself. As Steven Pressfield says in Turning Pro, “The amateur is tyrannized by his imagined conception of what is expected of him. He is imprisoned by what he believes he ought to think, how he ought to look, what he ought to do, and who he ought to be.” This is no different than the reimagined lives of our writer heroes. You don’t have to write a magnum opus to start. You don’t have to go anywhere beyond your bedroom to make a writer of yourself. No local coffee shop with spotty wifi required. Now, you can still go to France or Cuba or the local Amtrak station. You can be inspired by the stories of your favorites. You can copy their rituals and routines. The true subtitle for the section is Breakup With the Fantasy of “The Writer’s Life” for Your Own You can craft your lifestyle in any way you imagine. Just don’t allow what you think others have done to prevent you from moving forward.
https://medium.com/swlh/why-falling-in-love-with-being-a-writer-is-easier-than-actually-writing-3d3f2dcca13
['Brandon B. Keith']
2020-10-20 22:50:58.572000+00:00
['Art', 'Comedy', 'Writing', 'Writing Tips', 'Creativity']
Why the blockchain revolution has no leader — on purpose
Why the blockchain revolution has no leader — on purpose It’s ‘power to the people’ time all over again. But this time, we really mean it. Roger McNamee, a billionaire tech investor, psychedelic rock star, and author of the forthcoming book, Zucked — Waking Up to the Facebook Catastrophe, gave me some very simple yet sage editorial advice a couple of dozen years ago. “In the end, consumers always get what they want,” Mr. McNamee says. Translation — If you want to pick winning companies, bet on the ones who are giving people what they want. Sequoia Capital founder and original VC gangster Don Valentine puts it even more simply, “There is only one question that really matters when looking at a new company — Who cares?” Power to the PC Going back to the future, I can remember people embracing the Personal Computer so they could chuck their typewriters, white-out bottles, calculators, slide-rules and ‘secretaries’ (thank you Woz, Steve Jobs, Adam Osborne, and all the other early PC pioneers and members of the Homebrew Computer Club). By 1984, the average Joe could even become his own publisher, from design to final output (thank you Adobe founder John Warnock, Xerox Parc, and the Apple Macintosh and LaserWriter teams). As PC users started feeling their newfound power, people then wanted to share their work. Bob Metcalfe recognized this desire and invented Ethernet, which allowed people to connect their office computers, faxes, and printers over coaxial cable. By the early 1990s, the real consumer ‘Information Highway’ dream was finally becoming reality via a fusion of the existing Internet infrastructure with a new content publishing platform called the World Wide Web (invented by English scientist Tim Berners-Lee in 1989) and then put on steroids in October of 1994 with the Netscape web browser — which grabbed 90 percent market share only 3 months after its launch. 
The ultimate fantasy of our generation was to have our very own Star Trek phone. And boy, did we get our Star Trek phone (thank you again, Steve Jobs and Apple team). While our smartphones can’t quite beam us up yet, they deliver waaaaaay more than we could have ever imagined through zillions of apps that assist almost every aspect of our life. While smartphone growth went flat for the first time in 2017 (despite declining average unit sales prices), over 2 billion people now happily own a smartphone. When you roll all this product innovation together, the results are in, and they are breathtaking. The United Nations just announced that more than half of the global population — a whopping 3.9 billion people — are now using the Internet, representing the biggest global market share grab in history. We all wanted to be empowered and connected, and we got more than what we asked for. In retrospect, a relatively small set of entrepreneurs sensed our nascent desires, delivered, and created trillions of dollars in new wealth along the way. Entrepreneurial observation: Did these wizards see a rising consumer demand in the market that they rushed to fill, or did they imagine what we really wanted before we even knew we wanted it? I think it is the latter for Steve Jobs and his creations, but that is a great discussion for another post. A Bust In Institutional Trust Ok, enough with ancient history. Let’s talk about now. What do consumers want today? The West's biggest consumer trend is a broad-based and precipitous fall in confidence in our oldest and most trust-based institutions — government, media, academia, and finance. A recent Gallup poll asked Americans about their confidence levels in 15 societal institutions. Only three — the military, small business, and police — earned a majority level of trust. 
Consumer trust in banks and mass media, in particular, is collapsing. The financial crisis in the late 2000s certainly took the wind out of much of the trust we had in big banks, and these institutions have failed to win our confidence back since, as can be seen in the inset graph. Ironically, the inspiration to create bitcoin was a response to the banks' failure to protect the common man. The mass media has also been marginalized by an onslaught of online and social media content. It’s hard to imagine today, but upon his retirement in 1981, legendary CBS Evening News anchor Walter Cronkite commanded a whopping 50 million nightly viewers for his half-hour evening news broadcast from 6:30 to 7:00. His competitors at ABC and NBC were logging similar results. Today, although the U.S. population has grown 35 percent over the last 30 years, those same news programs now command only between 6 and 8 million nightly viewers. Even when you add in the top three cable TV news shows (Fox 3.8M, MSNBC 2.8M, and CNN 1.1M), the numbers are still anemic compared to the old days. A new study from the Pew Research Center shows that over two-thirds of U.S. adults (68 percent) get their news on social media. Still, out of that population, over half (57 percent) of these consumers say they expect the news they see on social media to be largely inaccurate. Populism and the Blockchain Revolution Our growing mistrust in institutions and central power is now manifesting itself in the global political scene. The Brexit movement, the Trump and Bernie Sanders campaigns, recent populist victories in places like Italy, Mexico, Brazil, and France’s growing ‘Yellow Vests’ movement are all examples of people frustrated by static wage growth and a lack of faith in the centralized powers who promised to do something about it. Increasingly, people have also been willing to take some pretty big risks to rock the powers that be. 
A prime example was the election of Donald Trump — a candidate who had neither previously served in the military nor held a public office. It would have been unfathomable for anyone of his background to have been elected President before the 2016 election. To achieve this triumph, Trump and his populist followers beat out the Republican establishment, the Democratic Party, the Clinton machine, Hollywood, academia, and mainstream media. As liberal journalist John Heilemann observed on election night 2016, “It’s like working Americans were so frustrated that they decided to roll a stick of dynamite into Washington DC and blow it up!” The demands of the so-called Yellow Vests in France are similar to those of other populist movements, but the uprising is not tied to any political party. One thing I have learned as a Silicon Valley OG: innovation never ceases to surprise. While consumer mistrust for institutions has been exploding, a growing set of entrepreneurs have also been hard at work developing a new security infrastructure for the Internet that might protect us from the Goliaths we now despise. It all started on October 31st, 2008, when an unknown author, writing under the pseudonym Satoshi Nakamoto, published the whitepaper titled ‘Bitcoin: A Peer-to-Peer Electronic Cash System.’ Satoshi’s single mission was to be the first to solve the double-spending problem for digital currency using a revolutionary Proof-of-Work distributed verification system based on a peer-to-peer network, and in doing so, the first killer blockchain application was born. In 2014, Ethereum proved that a blockchain could be used as a distributed peer-to-peer general computer without a centralized authority and database administrator. Hence, the first blockchain powered by a ‘smart contract’ system was born. 
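The core mechanic of Proof-of-Work can be sketched in a few lines of code. The Swift sketch below is a toy illustration only: it uses a tiny FNV-1a hash as a deterministic stand-in for Bitcoin's double SHA-256, and the block text, function names, and difficulty target are all invented for this example.

```swift
// Toy proof-of-work sketch (illustrative only -- real Bitcoin hashes a
// block header with double SHA-256 against a far harder target).

// FNV-1a: a tiny, deterministic stand-in hash function.
func fnv1a(_ input: String) -> UInt64 {
    var hash: UInt64 = 0xcbf2_9ce4_8422_2325   // FNV offset basis
    for byte in input.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3           // wrapping multiply by the FNV prime
    }
    return hash
}

// "Mining": search for a nonce whose hash falls below the target.
func mine(block: String, target: UInt64) -> UInt64 {
    var nonce: UInt64 = 0
    while fnv1a(block + String(nonce)) >= target { nonce += 1 }
    return nonce
}

let target = UInt64.max / 1_000   // roughly 1 in 1,000 hashes qualifies
let nonce = mine(block: "alice pays bob 1 coin", target: target)
// The expensive search happens once; anyone can verify it with a single hash:
let verified = fnv1a("alice pays bob 1 coin" + String(nonce)) < target
```

The asymmetry is the point: finding the nonce takes (on average) a thousand hash attempts here, but checking it takes one, so every peer on the network can cheaply verify that real computational work was spent, which is the heart of the double-spending defense described above.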
Little did Satoshi and the Ethereum founders know, but by creating a security infrastructure so digital currency could be trusted without the need for government backing and verification or a central administrator, they were also providing an approach to solving even bigger societal problems. For example, Google has been notorious for collecting its users’ personal information without letting them know the extent of what it gathers, and for selling this data like a commodity to any advertiser who will pay. Compounding such consumer intrusions is a constant stream of security breaches, including the violation, just announced, affecting the 53 million people who had signed up for the failed Google Plus social media experiment. Another example of consumer abuse is Facebook’s model, which is based upon ‘customer-as-product,’ a tactic very publicly and resoundingly denounced by Apple CEO Tim Cook. “We could make a ton of money if we monetized Apple’s customers, if our customers were our product. But we’ve elected not to do that. We’re not going to traffic in our customers’ personal life. Privacy to us is a human right, a civil liberty,” observes Cook. People’s time is valuable. If companies like Google and Facebook care about their customers, they should protect their customers’ time with the same rigor they strive to protect their privacy and other digital rights. There is now a huge popular rebellion against Facebook (a.k.a. the #DeleteFacebook campaign), and the Congressional authorities are all over Facebook like white on rice. The Power of Bitcoin, the Blockchain, and Smart Contracts Just when we all thought the centralization of computing in the hands of big ‘walled gardens’ like Google and Facebook was unstoppable, entrepreneurs have once again felt our pain and developed solutions to counter this presumed inevitability. 
The evolution from centralized to decentralized to distributed computing is a metaphor for the populist ideals and movements breaking out all over the globe. The Bitcoin project demonstrated that a suite of technologies could enable a new kind of trustless, decentralized economic network. Consumers will increasingly take back control of their personal data and hand it out selectively when they want to conduct specific, even anonymous transactions, but otherwise keep it to themselves. We will eventually be able to unimpeachably share our transactions and behavior with the IRS, a prosecutor, the DMV, or anyone who might want to see the purchase and maintenance history of any asset we might want to sell. As Silicon Valley VC and libertarian activist Patri Friedman (grandson of Nobel Laureate Milton ‘Freedom to Choose’ Friedman) has observed, “We believe the agaric potential of crypto goes far beyond the initial use case of hard money, and will steadily bring the world’s illiquid, opaque, paper-based assets online, a thesis we call ‘Markets Eating The World.’” The bottom line is that the decentralization of data storage and ownership promises to dramatically upend the status quo, cutting out intermediaries and centralized platforms and replacing them with peer-to-peer transactions executed via ‘smart contracts.’ Within the next few years, a dramatic portion of the money and resources that currently fund large centralized systems will be distributed out on the blockchain and often maintained by a global ring of independent computer workers (we call them ‘d-workers’ — the ‘d’ is for ‘distributed’) from India and Brazil and other developing nations at a fraction of the cost. And while the income earned by d-workers may seem modest by Western standards, it could be a game-changer for literally millions of those living in developing nations. 
We recognize that by almost every standard, the global blockchain infrastructure is still in ‘alpha.’ But we also believe that over time, literally millions of independent blockchain applications will be spawned and suck much of the power out of current economic centers such as those in the US. Coinbase CTO Balaji Srinivasan calls these regions collectively ‘The Paper Belt’ — the four metropolitan areas where the most important industries and political infrastructures converged during the post-war era: Boston (education), New York City (publishing, finance), Los Angeles (media, Hollywood) and Washington DC (politics, law). This trend bodes well for developing nations, in particular. These nations will finally be able to afford the costs of joining the modern economy, transfer money across borders at a small fraction of the current cost (think Ripple), take out microloans, and even buy insurance. A recent Oxfam report stated that 82 percent of all wealth created in 2017 went to the global top 1 percent. Perhaps this is a bold statement, but I believe that the Blockchain Revolution will even out this concentration of wealth over the next 20 years by building the economic power and average wage levels in developing nations. The Death of the Davos Man (and a French President) For ten years (1996 to 2006), I gleefully participated in the World Economic Forum’s annual meeting in Davos, Switzerland. I hobnobbed with private-jet-flying, helicopter-landing, Dom Pérignon-swigging titans of industry and politics. As its mission states, the Davos Man is “committed to improving the state of the world by engaging business, political, academic, and other leaders of society to shape global, regional, and industry agendas.” Far from owning my own private jet, my role at Davos was as a ‘media leader,’ which meant I helped moderate programs, covered the event editorially, and gratefully didn’t have to pay the $35,000 ticket to get in. 
The highlight of my Davos life was joining forces with Accel Partners cofounder and global power broker Joe Schoendorf and holding our annual ‘Davos meets Silicon Valley’ cocktail reception at the Kirchner Museum in the center of the village. Armed with dozens of cases of California’s finest red wines and the best French champagne, those parties were epic. Over the years, our wine fests matched the likes of Peter Gabriel, Israeli President Shimon Peres (RIP), and Bill Gates, with the best and brightest young entrepreneurs spinning out of Silicon Valley, including Google founders Larry Page and Sergey Brin, Skype’s Niklas Zennström, YouTube founder Chad Hurley, and a 21-year-old Mark Zuckerberg. French President Emmanuel Macron in his element at the 2018 gathering of the World Economic Forum. Upon reflection, and as the Forum’s mission statement above reflects, Davos was the annual calling of the once and future masters of the universe to ‘shape’ the world agenda with the globe as their chessboard. I admit, in addition to drinking lots of Joe’s red wine, I drank the Davos Kool-Aid as well. For 10 years, I felt like one of the Masters of the Universe. But all the while this show has been going on, a populist earthquake has been brewing. Davos regular Larry Fink, the CEO of the $6 trillion-plus asset management firm BlackRock, nailed it before last year’s gathering. “Popular frustration and apprehension about the future simultaneously have reached new heights. We see the paradox of high returns and high anxiety. Low wage growth, dimming retirement prospects, and other financial pressures are squeezing too many across the globe,” he said. “I believe that these trends are a major source of the anxiety and polarization that we see across the world today.” Bingo! It also may be the elephant in the room that many people do not want to talk about. 
Still, open borders and huge influxes of new and often undocumented immigrants into mainly developed countries are also causing societal anxiety and providing further fodder for the populist movements. The dissatisfaction with immigration is not fueled by racism and xenophobia, as the global elite want us to believe, but by a sense of people feeling overwhelmed by the sheer numbers of immigrants flocking into their respective countries. For example, the US harbors close to 25 percent of the world’s immigrant population, totaling somewhere between 45 million and 60 million people. According to a recent Gallup poll, another 147 million potential migrants desire to come to the US. While the average American still believes in the ‘melting pot’ ideal, it has become impossible for such an enormous infusion of new people and cultures to successfully assimilate and secure gainful employment. US citizens are not the only ones with these concerns. A recent Pew Research Center survey of people from 27 countries (including Mexico, South Africa, and Sweden) found that most people in each country surveyed did not want more immigration into their country. This begs the question: If these massive influxes are so unpopular, and people didn’t vote for them, why does this continue? One of the starting 5 (in my opinion) on the cultural thought leader team of our time, Victor Davis Hanson of the Hoover Institution, who also runs an almond farm in central California and employs dozens of legal immigrants, says, “Whether in California, Nevada, Paris or London, the administrative progressive elite are willing to take the short-term hit on popularity for the long-term ability to change the demography. Their goal is to create reasons for more government subsidies and entitlements, and to gain fealty with newcomers who they hope will keep them in power.” Dr. 
Victor Davis Hanson, Martin and Illie Anderson Senior Fellow in Classics & Military History at the Hoover Institution, Stanford University, presents ‘California at the Crossroads’ at the 18th Annual Kern County Economic Summit. Meanwhile, the elite live in gated communities and send their kids to private schools where everyone speaks the native language and is completely insulated from the ramifications and downside of their own ideologies. These folks largely believe that most people are ignorant and need an anointed elite to guide them. They have become the pigs in Orwell’s Animal Farm. They also ignore the existential questions. Why are an incredible 700 million people on the planet trying to move primarily from non-westernized countries to westernized countries? Why are the countries busy spreading their citizens worldwide, such as China, India, Mexico, and Turkey, the least likely to allow immigrants into their own nations? “The elites that oversee this process never seem to consider that immigrants are attracted to countries that have constitutional governments and promote rationalism, tolerance, and free-market economics,” observes Mr. Hanson. “If they just advocated this menu of ideals outside the western world, people could stay in their home countries, which is almost always their preference.” My perhaps hopeful premise is that as these anxieties have grown, blockchain and AI entrepreneurs have been hard at work in the background developing the technologies to soothe our global discontent. We are entering that third wave of the commercialization of the Internet. The goal is to finally deliver on the original promise to distribute economic and political power, not centralize it. If I were dictator of the US, the first public crypto project I would initiate would be putting the global immigration process on an independent blockchain. 
The goal would be to provide security and privacy and leverage the d-workers to help verify the good pilgrims from the bad and bring much-needed order to the immigration process. This is possible today. Unlike the Davos Man, the blockchain revolutionary’s role is not to rule the world but to champion a plan that makes sure that no one rules the world. It is safe to bet that the Davos Man and the institutions they represent won’t go down easy. As Mr. Srinivasan warns, ‘Even if the Paper Belt is vulnerable, it still wields incredible power. These industries literally control the world’s armies, information, knowledge, and economic systems. Like all incumbents, these industries will use their power for self-protection.’ Due to popular unrest in his country, French President Emmanuel Macron has emerged as the poster figure for the Davos Man. As Macron battles the Yellow Vests over energy taxes, his approval rating has plummeted to 23 percent in an Elabe survey. Bernard Sananes, head of the Paris-based pollster, explained the new findings: “Macron doesn’t listen to the people, doesn’t know the people, and doesn’t understand the people.” I predict that Macron’s inevitable downfall will be a shot heard around the world that will send his fellow elites running for cover. Surviving the Crypto Winter and Still Getting What We Want Bitcoin billionaire Tim Draper remains bullish on the blockchain revolution’s potential despite having to navigate the ‘Crypto Winter’ for the last several months. “Bitcoin is a metaphor for the societal change we are going through right now,” says Mr. Draper. “If you want to win, you need to be a part of this change. You need to get in front of it. The only alternative is to be a Luddite, and they died out, didn’t they?” Mr. 
Draper thinks that government-backed currencies — especially those that come from countries with a history of corruption, wild currency volatility, and manipulation — will eventually go away and be replaced by cryptocurrencies. “Japan has more or less declared that Bitcoin is their national currency and has welcomed crypto entrepreneurs from all over the world, and they are better off for it. They obviously got a big boost when China basically outlawed all things crypto.” In the end, following the world views of Messieurs McNamee, Valentine, and Draper, crafty entrepreneurs eventually will create a more accessible, self-empowered, private, secure, and economically distributed online world — because that’s what we want. In the meantime, we can expect to see mainstream media talking heads get more shrill, Wall Street’s money-changers continue to dog Bitcoin and token offerings, and politicians desperately try to reboot socialism, all in their vain attempt to hold onto the power slipping through their claws. But like I said up top, it’s ‘power to the people’ time all over again, and it’s the blockchain entrepreneur who is working hard to give us our power back. ############ Tim Draper on the state of the crypto markets — December 3rd, 2018.
Source: https://tonyperkins.medium.com/why-the-blockchain-revolution-has-no-leader-on-purpose-6cd2bd361f5
Author: Anthony Perkins
Published: 2020-11-09
Tags: Facebook, Bitcoin, Blockchain, Google, Venture Capital
Create Your Own Reduce Function in Swift
The Motivation Reduce is one of the coolest higher-order functions available to us in Swift. What if we could look at how reduce works under the hood, and perhaps mangle the functionality to be… whatever we want it to be? The reduce function The original The classic, canonical use is to use reduce to sum an Array: var arr = [1,2,3,4] arr.reduce(0, {$0 + $1}) // 10 This is, of course, using lots of magic from Swift’s type inference. The longer version (for the same array arr) has the same result: arr.reduce(0, { sum, newVal in return sum + newVal }) // 10 which makes it slightly easier to read for beginners, and slightly more tricky for the more experienced (you can’t win). There are variations on uses for the reduce function. Although reduce works on any array, it doesn’t have to return an Integer (like the example above); it can return other types. What about returning a String to concatenate an array? var names = ["James", "Ahemed", "Kim"] names.reduce("", {$0 + $1}) This version of reduce just concatenates the array of Strings. The original: using reduce to count frequencies It might not seem amazing to perform the above functions with reduce, but reduce is more flexible than that! The documentation gives us something to go on here. reduce(_:_:) has the declaration: func reduce<Result>(_ initialResult: Result, _ nextPartialResult: (Result, Element) throws -> Result) rethrows -> Result Notice the generic Result here: this can be any type. This means it can be a value, an array, or, in the case of this example, a dictionary. We can take an array of fruits: var fruits = ["banana", "cherry", "orange", "apple", "cherry", "orange", "apple", "banana", "cherry", "orange", "fig"] with which we are going to return a dictionary: fruits.reduce([:] as [String: Int], { a, b in var c: [String: Int] = a; c[b, default: 0] += 1; return c }) Although there is some tricky type coercion going on (the as coercion is what gives the [:] literal its type), we can see that we have reduced the array down to a dictionary. 
This results in the following dictionary: ["cherry": 3, "fig": 1, "apple": 2, "banana": 2, "orange": 3] Our own reduce function We can set up a function to, well, reduce. The idea is to reproduce the built-in function. A simple reduce function This function is called (as shown above) with the following: reduce(elements: test, initialResult: "", nextPartialResult: {$0 + $1}) A reduce function in an extension The tests Let’s do this properly. Before writing this function, what do we expect it to do and what results do we expect it to give? That is, we need to write some Unit Tests. We can initially check the existing reduce function against these tests; if this didn’t work I’d be very surprised (don’t worry, it does). These tests are not complete; they are a rather basic suite to check whether the extension performs as the original reduce does (for production code, more tests would be required to ensure that the function really works as intended). So we wish to create a new reduce function (myReduce) that operates the same way, that is, on an array. This sounds like an extension to me. The extension To understand this you will need some experience with iterators in Swift, and remember this is going to be called myReduce to match the tests above. myReduce is a generic function. It takes an initialResult and a nextPartialResult function in order to create a result of the (generic) type Result. The result (of type Result) is first set to the initial result (which is fine, since the function nextPartialResult has not yet been applied to any element of the input Array). We then enter a while let loop over iterator.next() (so the body only runs while iterator.next() is not nil) and use try to run nextPartialResult on the result and the nextElement, assigning the outcome back to result. To test this within the playground, ReduceTests.defaultTestSuite.run() runs the (3) tests — which all pass!
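The code gists embedded in the original post do not survive in this text-only copy. Based on the description above, a minimal sketch of the myReduce extension might look like the following; the body is a reconstruction consistent with the prose, not the author's exact gist:

```swift
extension Sequence {
    // A re-implementation of the standard library's reduce(_:_:),
    // driven by the sequence's own iterator as described above.
    func myReduce<Result>(
        _ initialResult: Result,
        _ nextPartialResult: (Result, Element) throws -> Result
    ) rethrows -> Result {
        // Start from the initial result; nextPartialResult has not yet
        // been applied to any element of the input.
        var result = initialResult
        var iterator = makeIterator()
        // while let only continues as long as next() returns a value.
        while let nextElement = iterator.next() {
            result = try nextPartialResult(result, nextElement)
        }
        return result
    }
}

// Behaves like the built-in reduce:
let total = [1, 2, 3, 4].myReduce(0, { $0 + $1 })        // 10
let joined = ["James", "Ahemed", "Kim"].myReduce("", +)  // "JamesAhemedKim"
```

Extending Sequence rather than Array keeps the sketch closest to the standard library's own declaration, and the rethrows annotation means callers with non-throwing closures (like the two examples) don't need try at the call site.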
Source: https://stevenpcurtis.medium.com/create-your-own-reduce-function-in-swift-e92b519c9659
Author: Steven Curtis
Published: 2020-05-12
Tags: Software Engineering, Mobile App Development, Swift, Programming, Software Development
Runaway Train
Photo by Matthew Bedford on Unsplash Hurricanes, wildfires, storms, floods, and images of streams, rivers, and oceans filled with plastic flood my consciousness. My morning walks take me past sick and dying trees, over a muddy river covered with a strange white foam, no doubt caused by fertilizers and herbicides from agricultural runoff. Most troubling is the absence of honeybees that once flourished and swarmed on the blooms of White Dutch clover that covered our yards in summer, before anything but approved grass became a weed to kill. These things focused and sharpened my awareness of the environmental deterioration and devastation we deny. The effects of climate change due to global warming are evident to those whose eyes are open and who take a moment to study the science. We are, all life on earth, passengers trapped on a runaway train on a track that most say, and the overwhelming evidence supports, is a dead end. Some of us are here by choice, but most find ourselves trapped on this train as it hurtles down a dead-end track, picking up speed. We see the mountain marking the end of the line looming on the distant horizon. It gets closer with each passing moment and each gulp of fossil fuel required to keep the locomotive going. We watch, most of us passively but uncomfortably, as the mountain grows. Others claim we are silly and spreading fear. They tell me my concerns about climate change are merely part of the normal climatic cycle of the planet. Fossil fuel industry spokesmen and advocates assure me the track continues to a bright future where we’ll all benefit and prosper by staying the course and continuing to extract and use fossil fuel sources to meet our voracious appetite for energy. I vividly recall the images reflecting the lack of concern for the environment from my youth growing up in the 1950s. I remember the coal soot that turned the snow black around our school, in my neighborhood, and around our house. 
I remember the putrid smell of diesel fumes and clouds of exhaust from automobiles hanging in the frosty air. Most of all, I still see the raw sewage pouring from huge pipes emptying into the Wabash River that provided drinking water to communities downstream. We didn’t think about pollution. It was merely part of the inevitable cost of our rising standard of living, or so we were led to believe. America’s population in the 1950s was between 150 and 180 million people; today it is almost 330 million. The population of the earth rose from two and a half to three billion between 1950 and 1960; today there are over seven and a half billion. Tripling the population intensified pollution’s impact. The numbers of people and the amounts of pollution they inevitably generate increased exponentially across these decades. At the same time, our awareness and knowledge of the impact of this pollution also grew exponentially, yet our thinking and attitudes about it stayed relatively the same. I can’t help but ask myself ‘Why?’ Those who insist fossil fuels are not responsible for global warming and climate change tell us increasing our extraction and use of these resources is vital to maintaining our standard of living. In their view, their continued use is a moral imperative for improving the health, well-being, and standard of living of those currently left behind. How do they support this claim? The fossil fuel industry, led by the Koch brothers-funded Heartland Institute and the Nongovernmental International Panel on Climate Change (NIPCC), itself created by the Heartland Institute, presents an opposing view of climate change and global warming. Examining these materials reveals an interesting omission: they offer no science in support of their opposition to the scientific consensus presented by the UN Intergovernmental Panel on Climate Change (IPCC). The NIPCC response is focused entirely on what it sees as inconsistencies and contradictions in the IPCC reports. 
The bulk of the argument of those advocating continued use of fossil fuels is found in “The Outlook for Energy: A View to 2040,” published by Exxon Mobil in 2017. This report states, “ever-increasing supplies of energy are needed to sustain economic growth and ensure human betterment and that the bulk of that energy will be supplied by fossil fuels well into the future.” It claims without continued fossil fuel dependence there can be no economic growth. The report sees the need not only for more oil but also greatly expanded use of natural gas and coal into the middle of the twenty-first century. We must do this, the report says, to ensure the world’s poor and disadvantaged won’t stay immersed in poverty. The Exxon report asserts we must increase our use of fossil fuels because sustaining the growth of the new global middle class depends upon it. Alternative forms of energy are dismissed as being too expensive, unreliable, or too difficult to move from place to place. Between 2010 and 2040, the human population will increase from around seven billion to over nine billion, with the demand for cars, SUVs, and other light-duty vehicles growing by over 100%; but while there will be an increase in hybrids, the vast majority of autos will continue guzzling gas. This report goes further in asserting demands for energy will be even greater when we consider that all these new middle-class consumers will want their share of computers, appliances, air-conditioners, flat-screen TVs, and other consumer goods, with the inevitable spinoff need for more trucks, trains, and container ships to move these goods around the planet to meet the new demand. What is most important about this report is what it omits and ignores. Reading the literature, nothing is mentioned of the effects of the inevitable increase in carbon dioxide and other greenhouse gasses in the atmosphere resulting from these activities. 
There is no mention of the probable effects of producing more plastic containers and adding to the plastic already seen polluting the world’s oceans. There is no mention of the impact these things will have on global temperatures. There is no mention of the consequent melting of ice sheets in Greenland and Antarctica and rising sea levels. When the industry does bother to look at renewable sources of energy, they see nothing but problems. They say wind and solar are more costly and aren’t growing fast enough to meet the demand for more power. They admit renewable sources will expand between now and 2040 but will still only account for 4% of energy generation. Defenders also assert renewables are problematic because they provide only intermittent sources of energy — failing at night and on windless days, necessitating bolstering by other fuels to ensure uninterrupted energy output. They ignore the simple solutions to many of these issues. Beyond this report, proponents of fossil fuel consumption such as Matt Ridley, a British aristocrat and journalist, and Alex Epstein, founder and president of the Center for Industrial Progress, claim fossil fuels will save the world and that there is a moral case for their use. They tell us fossil fuels are available and plentiful, easier to find, more efficient, easier to transport, easy to set up, and generate thousands of jobs. What they don’t and won’t say or acknowledge is that fossil fuel use degrades the environment, is causing global warming and climate change with catastrophic implications, requires huge storage and transport support, creates public health hazards and problems, including spills and accidents, faces rising costs over time, and poses a health risk to workers. Finally, despite their claims, fossil fuels are a finite resource; solar and wind are, for all practical purposes, infinite. Addiction comes in limitless varieties; it takes almost any form. 
Our addiction to fossil fuels blinds us to the brutal reality we are choosing to create. We humans are a curious sort. We tend to get stuck in old ways of thinking even when we know they have become obsolete. As many others have observed in the past, our supreme excellence is also our tragic flaw. They are two sides of the same coin, and we deceive ourselves by choosing not to recognize the changed conditions and circumstances that require us to abandon our old habits. It’s not the fittest who survive; it is the organisms that are the most flexible and adaptable in the face of changing conditions and circumstances. The dinosaurs were the fittest, but it didn’t save them from extinction. Dependence on fossil fuels, particularly oil, built the modern world we know and propelled the United States to become the world’s superpower. But oil, coal, natural gas, and other forms of fossil fuels in the 21st century are leading us to a catastrophe of our own making. We are riding the climate change denial express. It is manned and controlled by those who benefit most from our consumption of and dependence on fossil fuels. You only have to listen closely and think about what these proponents are trying to tell us to see it is a form of addiction. Our enslavement to fossil fuels blinds us to the brutal reality we are creating. Photo by Andrew Karn on Unsplash No one is willing to order that the switch be thrown to put our runaway train on the new track taking us in a different direction. That’s how addiction works. It distorts and perverts your senses until you are willing to put everything, even life itself, at risk for the sake of the 30 pieces of silver the addicted claim we need for our next fix so we can continue accelerating on our one-way journey. The nervous switchman is waiting, ready to flip the switch, but the command isn’t being relayed because the “bosses,” the “owners,” the “investors” in the current system can’t let go of their addiction. 
The mountain at the end of the track is real. The track will end. We know the science, and the science says we can only use a fraction more of our carbon-based fuel budget before we go over the cliff, and this cliff makes the 2008 fiscal cliff that took so much media attention look like an anthill on a salt flat. Have you ever watched or been a part of an intervention with an addict? The addiction could be anything: drugs, alcohol, food, sex, porn, shopping, literally anything. If you have, you know how difficult it is to get that person’s attention. You know how hard it is to get them to seek help or to agree to treatment. They have an endless number of excuses, reasons, and defenses they will use to try and justify their claim that there is no problem. The addict claims what he/she is doing is okay and that they are in control. They are convincing in proclaiming to us why there is no need for this intervention, or for them to change anything about their activities or behavior. This is where we are with our fossil fuel addiction. We know the petroleum, natural gas, coal, and other fossil fuel-dependent interests are going to make every excuse, pervert science, and do anything they can to protect their interests and continue making money regardless of the consequences to the rest of us. They tell us giving up our dependence on fossil fuels will mean the end of the good life as we know it, but ignore the end of this “good life,” and maybe of most all life, if we continue traveling on their tracks. We do not have the choice of waiting. The train is rocketing forward, greedily guzzling fuel faster and faster, and the moment is close when we either throw the switch to put humanity on a different path or remain on the track of denial until it crashes into the mountain of climate reality where the line ends, killing most, if not all, of the life on the train.
https://jerry45618.medium.com/runaway-train-bc4854af7db7
['Jerry M Lawson', 'De Omnibus Dubitandum']
2019-07-18 10:10:56.870000+00:00
['Politics', 'Health', 'Climate Change', 'Psychology', 'Culture']
You’re Making History Without Even Trying
I keep telling myself the world is new every day. There are infinite, small opportunities to make history. Whether you realize it or not, you are making history with your life. Just your being here on the planet changes things. You get to design your life today; and it’s impossible to do it like anyone else before you in history. This one’s all yours. And mine. We don’t get to judge. I recently took a job at a spa, finally making use of my massage therapy license. I am surprised I like it so much; both the work and the structure it provides after months of freefalling. One of my clients yesterday was a 16-year-old girl, whose pants, by the way, were made of the exact same material as my high school uniform skirt. (I could barely stop staring.) Her older sister was home from college for Thanksgiving break. To celebrate, her mom treated them all to a girls’ spa day. When the massage was finished, the teenager smiled and said, “I feel like a new woman.” “You are,” I responded. “Every day, if you want to be.” Did my own kid get that memo? Stealing Joy Back The other day I was running the trails at the county park, notorious for its poor signage and cryptic trail map. Of course, I got lost. This added about another mile to my intended hike. Sometimes while running in the woods, I rant to invisible ghosts or my former self. That day, I was talking to Katie’s spirit, my far-flung daughter, inviting her to stop by anytime. We don’t have to wait for the proverbial joyous reunion. She could send a sign or something. How hard could it be to pierce this veil between us? Sheesh. Instantly, there appeared a yellow butterfly, and, from out of nowhere, the earworm of the Beatles tune “She Loves You.” Yeah, yeah, yeah, I know it’s corny, but there it was. It occurred to me then. I was the first woman to ever… race a butterfly on a cloudy day, on that particular twisted route back to my car, wearing a Torchy’s Tacos T-shirt. Never was there ever another moment like that one.
OK, maybe it’s more newsworthy to be the first woman in flight or on the local fire department, or the first scientist to discover a new element or the first person to climb a certain mountain. But you also set yourself apart by praying to a God you don’t believe in, or writing an original essay, creating a recipe using flavors never joined together. Or you can paint an angel on a cardboard box. Any creative pursuit automatically adds you to the list of firsts. I took a picture of her cardboard box art long ago. Glad I did. Image courtesy of the author. It would never matter to me whether Katie left a legacy that the world noticed. I just miss her in my life, like all parents who lose children, and like no other parent before me. She was the keeper of memories, and details I lost but for her. She was my meme-sharer and funny girl. But to say that my life is an unbearably deep hole without her is to discount all this chewy goodness, post-Katie. So I keep at it, because what else is there? Not the stuff of legends, but… We hand history to the great artists, the famous writers, the notorious politicians, the evil emperors, the wealthy benefactors, the selfless prophets, and the brilliant scientists. And then there’s you and me. What is history? Does it count if no one knows about it? Does it count if the only one who reads your book is your sister and best friend? Does it count if your mom posts posthumous pictures of your art on her blog? Who are the creators of history? I’m afraid to admit greatness is seated in people like my daughter, and like me. And absolutely, in you… the very fact of you. Every day we are creating the world, painting on cardboard, snapping skateboards, planting herbs in pots then letting them die of thirst, rallying energy in your team of gamers, losing mittens in the woods. Original thoughts and ideas are popping to life every second of every life from our staggeringly common, yet individual brains. 
We send birthday cards to 100-year-old strangers we read about in the paper. We rescue turtles on the highway. We make soup. Collectively we make a new world every day. One of Ursula Nordstrom’s favorite quotes, and now mine, was by the dancer Martha Graham. Nordstrom kept a ragged printed copy in her purse. “There is a vitality, a life force, an energy, a quickening that is translated through you into action, and because there is only one of you in all of time, this expression is unique. And if you block it, it will never exist through any other medium and it will be lost. The world will not have it. It is not your business to determine how good it is nor how valuable nor how it compares with other expressions. It is your business to keep it yours clearly and directly, to keep the channel open. You do not even have to believe in yourself or your work. You have to keep yourself open and aware to the urges that motivate you. Keep the channel open. … No artist is pleased. [There is] no satisfaction whatever at any time. There is only a queer divine dissatisfaction, a blessed unrest that keeps us marching and makes us more alive than the others.” Whether or not my daughter was recognized for her accomplishments, whether or not she was proud of her life as she lived it, she did indeed make history. The artist as a child. Image courtesy of the author. Now, if you’re reading this, the onus is on you. Ask yourself, “Why am I here? What more is expected of me?” Every life leaves a legacy. You are a miracle. Never before seen and never to be seen again. I believe that now for sure. I also know that no legacy is more valuable than any other. We will all wash up on the shores of history in equal measure. Another exquisite grain of sand tumbled by the ocean of eternity. See you out there.
https://medium.com/portals-pub/youre-making-history-without-even-trying-f4d9630e826f
['Jen Mcgahan']
2020-11-25 15:34:31.245000+00:00
['Life', 'Art', 'Health', 'Creativity', 'Personal Growth']
How to Spot a Genuine Asshole
Jean-Francois Marmion: Let’s start with the basics. What, exactly, is an asshole? Aaron James: An asshole is a man, or more rarely a woman, who accords himself special advantages in his social life and feels immune from reproach. He’s the guy who cuts in line at the post office, granting himself a privilege that’s normally reserved for pregnant women and emergencies. In the moment, he has no justification beyond feeling that he’s rich, handsome, or smarter than everyone else, so his time is more valuable than theirs. If you ask him to stand in line like everyone else, either he won’t listen, or he’ll tell you to get lost. It’s not that he despises other people; rather, it’s that he doesn’t think they deserve his attention. The moment that you don’t understand how extraordinary he is, he decides you’re unworthy of his interest. It’s important to clarify, though, that there is a difference between someone acting like an asshole and a person who is an asshole. That seems like a tough distinction to make. How can you know for sure? Most people behave like an asshole from time to time, particularly if they’re going through a rough patch or having a bad week. But for me, the mark of the bona fide asshole — the true asshole — is that he’s consistent. An asshole will reliably be an asshole in multiple arenas of his life, but not in all of them. For example, he might be an asshole at work and on the road, but not at home, or the other way around. The all-purpose asshole, who’s an asshole whatever the context, is rare. Most assholes will need to turn off their asshole behavior, at least under some circumstances, to get what they want. Being an asshole ultimately comes down to a person’s social behavior. But the internal tripwire is a failure to show interest in others. Assholes think it’s up to everyone else to adjust to them, no matter what the situation. And often, the people in their lives will humor them.
The asshole’s behavior is often fostered by their social dynamic, but the primary cause is a sense of entitlement that’s personal and deeply rooted, and difficult to dislodge. As a general rule, privileged people have a much greater risk of becoming eminent assholes. Financial prosperity, conventional good looks, and intelligence are all qualities that make it easier to admire yourself and to attract the good opinion of others. The broader culture, subculture, and social sphere also play a significant role in creating assholes. The asshole’s patterns of behavior are shaped by circumstances that reward their single-minded pursuit of power and sense of superiority. In a culture where individualism prevails, the ratio of assholes to non-assholes will be higher than in civic-minded societies where assholery is more easily suppressed or frowned upon. The United States, for instance, offers a culturally favorable environment for assholes to thrive. Donald Trump is a supreme asshole, an überasshole, if you like. By that I mean that he’s an asshole who inspires respect and admiration for his mastery of the art of assholery, despite heavy competition from his peers. Assholes generally have to fight for the title of “asshole in chief” or “baron” of assholes, but few can match Trump’s prowess at piling assholery upon assholery (Kim Jong-un, in North Korea, being a notable exception). Those who manage it for a time, like Chris Christie, the former governor of New Jersey, often end up becoming more docile. What’s the best approach for dealing with a true asshole? Can you convince them to reconsider their assholish ways? An asshole might be capable of change, but it’s better to avoid assholes altogether. Of course, sometimes that isn’t possible. Sometimes, for example, an asshole will be kept on in an organization because he brings in money or prestige.
In those kinds of situations, it’s imperative that you join forces as a united front against the asshole, because it’s by dividing people against each other that assholes accomplish their goals. This is much easier in small groups than in a political context. But there’s a lot society can do to reduce the number of assholes, even though it’s hard, because they have a knack for blocking our path. Then there’s the matter of dealing with an asshole within your family, which is both banal and very delicate. He needs to feel he’s smarter than other people and to take them on, one against all, with no apologies, even in the thick of everyday interactions. As a result, the quality of assholes’ relationships is abominable. Sometimes the best course of action is to isolate the asshole or reduce contact with him to preserve your mental health. Is there a silver lining here? At the very least, can they make the rest of us feel better about ourselves for choosing the high road? I’m sometimes asked if it’s possible to feel grateful for assholes, given that they remind us of our comparative virtues. My view is that, even if you learn to get along with an asshole, I don’t think it’s possible to feel gratitude for them, unless they end up recognizing your value as a human being. You can always congratulate yourself on understanding them and handling them better, but they cause too many frustrations and problems for too many people to tip the scales in their favor. I’m also asked whether we’re secretly jealous of assholes. To this, I respond that you can feel impotent, frustrated, and indignant when you’re confronted with someone who repulses you. “How can a person be that way?” you might ask yourself. But emulation doesn’t enter into it. That said, when assholes succeed, it’s possible to feel jealousy: “That’s how you get famous, by acting like an asshole? I could have done that!
But he thought of it first, and he was quicker about it.” If you’re a bit of an asshole yourself, you can appreciate the technique of a connoisseur.
https://forge.medium.com/how-to-spot-a-genuine-asshole-1bf7a5da2cad
['Jean-François Marmion']
2020-10-14 14:18:20.964000+00:00
['Relationships', 'Assholes', 'Psychology', 'Society', 'Book Excerpts']
Gen X Will Not Go Quietly
We have always lived loud, and we’re not going to change that Photo: Cavan Images/Getty Images Gen X refuses to die with dignity. We will die in the spirit of Grace Jones and Poison, thank you, as we express ourselves in any way we see fit, whether that be in a regular suit and tie, or a full body of tattoos. Most of us are not dressing our age, as our parents did. We see those articles — the ones that say we’re supposed to stop wearing strappy sandals and high-tops, combat boots and miniskirts, or skin-tight jeans with holes — and we consider them for a second until we say “fuck that.” We’ve always worn whatever we wanted and we don’t particularly give a crap if you don’t like it. We grew up in the ’80s, the age of flash and color, punk, metal, and hip-hop. We learned to live loud and we aren’t giving it up. Our music is still loud and so are our clothes. So are our opinions, which we’re not afraid to share with you. We might be leading the PTA or playing bass in this cool little band on the side, or maybe we’re just taking our kid fishing, to show them what it’s like to do something outside. We are now at the age where we have seen most of this bullshit before, so you’re not going to fool us easily. We lived through the era of Reagan and the fear of the Cold War. We saw the wall come down and we supported our LGBT friends until eventually, we helped to vote for the passage of marriage equality. We came of age in the era of androgyny (the precursor to gender-neutral), and AIDS, back when the president of the United States understood Russia was not, and could likely never be, an ally. Gen X may be a small generation that’s stretched too thin right now as we look after our kids and our parents, but that just makes us all the more wary and skeptical.
We’re not sure if the country is going to survive Donald Trump, but we saw the country survive Ronald Reagan and Bill Clinton and George W. Bush, so we suspect there will be something afterward, whether the dust settles after one or (God forbid) two terms. We like to listen to Lizzo and Duran Duran and Tupac and The Cure and Madonna and Childish Gambino. And also old country, old rock and roll, old jazz, and occasionally, some classical. We grew up on tacos and pizza and burgers. We have come to love sushi and perogies and bulgogi. But now, one of our kids is having gender identity issues and we’re trying to figure out how to help our partner adequately care for their parents while making sure our own mothers and fathers get to their doctor’s appointments, so we’re often tired. But we still like to go out and have a drink once in a while and maybe even sing a little karaoke. We’re not satisfied to stay in a marriage that isn’t working, so we’ll divorce responsibly, making sure the kids’ concerns take center stage, because that’s probably not what happened when our parents divorced and we want to do better by our own kids. We know how to cook dinner, secure a mortgage, air up a bicycle tire, and skateboard, though we don’t skate much anymore. Instead, we do yoga when we can. And as we get older and wrinklier and our hair whitens and we’re stooped over barely shuffling along, we might have to switch to orthopedic shoes from our Chucks (if we haven't already), but we’ll still have Guns N’ Roses blasting into our hearing aids, dammit. You can just go ahead and play Prince at all our funerals.
https://gen.medium.com/gen-x-will-not-go-quietly-3b0429c63c70
['Amber Fraley']
2020-02-04 20:21:55.136000+00:00
['Humor', 'Music', 'Gen X', 'Culture', 'Society']
Wanderlust
A short primer on your longest nerve The word “vague” has a Latin root, which means “wandering”. If we climb a little higher up the tree, we arrive in 16th-century France, where the word is used to signify emptiness or vacancy. In English, we use this word to mean “uncertain,” but the same word in French means “wave”. Like a wave in the ocean, coaxed from the water by the wind and forever aimlessly rolling over itself. I imagine the first anatomists as cosmonauts, discovering the longest nerve in the body, tracing its tortuous path from the bottom of the brain and down into the gut and naming it in Latin: nervus vagus. The wandering nerve. The intrepid traveller in the uncharted universe of human biology. These are the thoughts that run uncensored through my brain as it sloshes against the inside of my skull, which has just made uncomfortable contact with the linoleum beneath the nurse’s station where four large vials of my blood sit on a plastic tray. I don’t characterize myself as squeamish. My compulsion to understand how the brain works has required me to cut apart a fair number of them over the years, and I’ve been enamoured of all things gross since I was a small child. But like so many of our inexplicable, irrational behaviours, the tendency to hit the floor at the sight of blood is reflexive, an evolutionary short circuit that bypasses your rational mind and heads straight for your autonomic nervous system. We can find similar examples across so many different species that it seems to be a conserved evolutionary strategy to avoid predation; a secret third option when neither fight nor flight are available. We see it in the extreme in the opossum, who can convincingly reproduce all of the physiological hallmarks of death when a predator is near. Mice, deer, and even fish will instantaneously freeze when they detect hints of danger, like shadows looming overhead. Most animals are simply very convincing actors.
They retain awareness while they appear to be dead. Humans, on the other hand, completely lose consciousness — thanks to the vagus nerve, and for reasons that still aren’t entirely understood. One major factor, though, is our posture. Human brains are demanding machines. They require high volumes of blood flow relative to the brains of other animals, not to mention the added burden of bipedalism: a consequence of walking upright is that these humongous, resource-consuming supercomputers in our heads are located far enough above our hearts that it requires a lot of cardiac force to keep things running smoothly. Because of this, even a slight drop in blood pressure will disrupt the flow of glucose and oxygen to the brain enough to shut the lights off temporarily. This is one of the first things that happens when the vagus nerve is activated — heart rate drops, blood pressure drops, and pretty shortly thereafter, so do you. Our bodies are equipped with an impressive variety of sensors and effectors that are constantly working to maintain balance. If they’re doing their job, you won’t even notice them. The vagus nerve orchestrates a lot of this cross-referencing between your internal organs and your brain, by relaying signals through deep-brain structures like the medulla and the hypothalamus. To do so, it requires finely tuned internal “monitors” at all the sites to which it ventures and reports back. In the case of blood pressure, these monitors are receptors, sitting at the major junctions and crossroads of blood flow as it travels to and from the heart. These receptors are activated whenever they’re stretched — which happens if there’s an increase in your blood pressure. And once they’re engaged, they signal through the vagus to the medulla, a structure at the very bottom of the brain stem that interfaces directly with the spinal cord and controls mission-critical processes like respiration and digestion.
From here, the signal is relayed on through the central nervous system to dampen the activity of the sympathetic nervous system — the branch of your autonomic nervous system that’s associated with fight-or-flight. This loss of consciousness, a byproduct of a blood-pressure dip, can be explained physiologically by our greedy brains and upright posture — but the evolutionary logic for this trick is slightly less straightforward. This is how we work now, but how did we get here? One idea is that very early in our history, if we found ourselves entangled in an unwinnable violent struggle, the almost immediate shut-off of all of our vital functions might even the odds, either by fooling our would-be attackers into thinking they’d won (or at least into losing interest), or by limiting the amount of blood allowed to leave our gravely injured bodies. Sometimes when I’m especially mystified by human behaviour, I try to remind myself that hundreds of thousands of years of collective effort have honed and expanded our menu of emotional experience and motivation to such an overwhelming degree that the scientific disciplines we invented to understand them can’t even keep up. But in the beginning, the options were pretty simple: you survive, and you make more people who hopefully will also survive, or you don’t. You’re eaten by something larger and faster than you. You succumb to an injury or a preventable disease. Your friends are tired of your unfortunate nocturnal respiratory problems, so they kill you with sharp rocks. Living as we do in a world of well-fortified shelter from the elements, vaccines, and CPAP machines, we now have before us thousands of subtler and more complicated things to worry about. Our brains, though, are a mess of electrochemical cross-talk and hard-wired instinct. Those same reflex circuits that engaged to protect us from immediate death before we figured out fire and agriculture can now be engaged by any strong emotion.
In medicine this is referred to as “neurogenic syncope”, which, roughly translated, means a sudden loss of consciousness due to anything that can’t be readily medically diagnosed. Which isn’t to say it’s any less ingenious as a protective mechanism. In moments of extreme stress or grief or otherwise incomprehensible trauma, having this built-in emergency shut-off mechanism can help us conserve our physiological and psychological resources when no useful alternatives exist. And as our behavioural repertoire expands, new and surprising roles for the vagus nerve and the circuits it controls are also coming to light. Of late, there have been scores of claims that the calming effects of singing, deep breathing, and yoga all stem from vagus nerve activation. However, it’s difficult to find well-controlled studies in animal models to substantiate these theories. This is a basic limitation of using animal models to study uniquely human behaviours — although it’s entertaining to imagine dressing dozens of mice in tiny yoga pants and then interviewing them individually about their stress levels. Many studies do, however, reliably point to a direct relationship between the vagus nerve and the immune system. The branch of the vagus that transmits information to the brain has receptors for chemicals produced in response to infection, so it can sense immune responses quickly and report back to your brain. In the opposite direction, from brain to body, vagus activation can actually suppress the production of these chemicals. That reduces local inflammation, promoting speedy recovery from illness. At the same time, the vagus is telling other parts of your body to settle in for rough weather, adjusting your appetite, the quality of your sleep, and your mood to essentially power down all your vital systems while they’re being repaired. 
Inflammatory markers are also highly correlated with depression: they’re found in higher levels in patients with major depressive disorder who are otherwise physically healthy, and they’re thought to interfere with your brain’s ability to use serotonin efficiently. So it’s been suggested that vagal activation might be an effective treatment for depression, because of its ability to limit the production of these inflammatory factors. It’s interesting to look critically at homeostasis. It feels almost counter-intuitive; in science, we take for granted that what we’re interested in is change, whether that means advancing toward some new technology or drug or simply understanding the fundamental processes of development, disease, and decay. Homeostasis is the study of how things stay the same. And that requires precise, delicate, and continuous effort. The vagus nerve is sometimes referred to as “the great wandering protector,” travelling to the farthest reaches of your body and passing messages back to your brain, letting you know when to eat, when to breathe, when it’s time to panic and run away, and when it’s safe to relax again. It’s constantly active, constantly sensing change and adjusting the way you behave, the hormones you produce, and how quickly your heart beats in response to your ever-changing internal and external states. On a larger scale, though, maintaining balance is still the primary goal. Evolutionary biologists have summed up and expanded this concept in the Red Queen hypothesis, suggesting that adaptive change and diversity in biology are a strategic manoeuvre against the even more rapid evolution of viruses. We change our DNA just enough so that the last virus that decimated our population can’t quite figure out how to do it again a second time — until it does, and then it’s up to us to adapt or die. In this way, the vagus nerve is also like the Red Queen of Through the Looking-Glass, constantly working to keep us in a stalemate with the elements.
In physiology, as well as evolution, it takes a whole lot of running just to stay in one place.
https://medium.com/snipette/wanderlust-a5729891f4a
['Lindsay Gray']
2019-12-27 07:01:02.503000+00:00
['Vagus Nerve', 'Health', 'Neuroscience', 'Physiology', 'Evolution']
The Leaders of AI
CB Insights recently published its annual report on the Top 100 Artificial Intelligence Startups redefining industries. The full report can be downloaded here and contains a hand-picked selection out of nearly 5K startups, based on several factors including patent activity, business relations, investor profile, news sentiment analysis, market potential, competitive landscape, team strength, and tech novelty. This is the short list of selected companies: Top 100 AI Companies for 2020 According to CB Insights Let’s dive deeper and learn what this data can tell us about the macro status of the AI sector. We are less interested here in the specific outcomes (whether this or that company was selected) than in the larger-scale trends that the data points to. Top AI Countries If we break down where these companies are located, the hegemony is pretty clear: the United States is home to seven times as many Top AI startups as its closest competitor, with Canada, the UK, China, Israel and Germany as distant followers. Of course, not all these companies were born in the USA. Undeniably, many of these startups relocated their headquarters there once they had enough early traction and funding to justify doing so. Many successful investors across Israel, the UK and Northern Europe are capitalizing on that strategy. The fund I am involved with at the moment, Conexo Ventures, leverages precisely that strategy for the Southern European market. But beyond national pride, that point should be moot for entrepreneurs: the reality is that once a technology ecosystem is established with such undeniable leadership, its gravitational force pulling in money, talent and experience is strong enough to give a huge advantage to the local players and prevent any other clusters from solidifying. Wise founders should understand this and get ready to re-domicile should the opportunity arise.
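The per-country breakdowns in this article boil down to simple aggregations over the list of selected companies. As a rough sketch of the kind of tally behind these charts, here is a pure-Python version over a hypothetical sample; the company names and funding figures below are illustrative only, and the real numbers live in the CB Insights report and the supporting notebook:

```python
from collections import Counter

# Hypothetical sample of the Top 100 list; the real data comes from
# the CB Insights report and the supporting notebook.
top100 = [
    # (company, country, disclosed funding in $M) -- illustrative values
    ("A", "United States", 900),
    ("B", "United States", 450),
    ("C", "United States", 300),
    ("D", "United States", 250),
    ("E", "United States", 200),
    ("F", "Canada", 40),
    ("G", "Canada", 30),
    ("H", "United Kingdom", 150),
]

# Raw count of Top AI startups per country (the "hegemony" view)
count_rank = Counter(country for _, country, _ in top100)

# Aggregate disclosed funding per country: a country can rank high on
# counts yet lower on dollars, as the article notes for Canada
dollars = Counter()
for _, country, funding in top100:
    dollars[country] += funding

by_count = [c for c, _ in count_rank.most_common()]
by_dollars = [c for c, _ in dollars.most_common()]
```

Run against the real list, these same two rankings make the point below visible: a country can sit near the top by startup count while dropping several places when ranked by aggregate funding.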
This is the stark geopolitical situation, now plotted on a world map. Top AI Funding If we pay attention not just to the raw count of startups but to the dollars fueling these companies (that is, aggregate AI funding per country), the picture does not change dramatically but reveals an interesting subtlety. Canada, while 2nd in number of startups, drops to 6th position when ranked in terms of funding, right behind its competitors the UK, China, Israel and even Germany, before the long tail. Pretty cool. In terms of efficiency, which is a very important factor in ultimately deciding the profitability of any investment, Canada deserves a shout-out. Well done, guys! AI in Spain After the top six contenders there is a longer tail of countries where only one startup made it to the list. Spain is up there thanks to Sherpa, which builds AI technology and products such as Digital Assistants and Recommenders, as well as more generic AI technology, such as Federated Learning. Sherpa originated in Bilbao, home of its founder Xabi Uribe-Etxebarria, which is particularly pleasant for me since I was born in neighboring Portugalete, close to its renowned hanging bridge. Now we have both swapped bridges for the golden one in Silicon Valley, and Xabi operates from Palo Alto. He is nevertheless clear that the fight is not over, that the stakes are just larger, and that he needs to persevere in the strategy that led Sherpa to its success: “The key is in specialization. Our competitors have different sections inside the same company; they build everything from search engines to autonomous cars to desktop computers. Really we are competing on a very specialized section, but one that has great impact inside these companies.
We cannot compete with the entire company, but we can against those specific sections, like AI, and perform better than they do.” Top AI Investors Let’s switch gears and have a look at who the leading AI investors worldwide are. For that purpose we define the “leading investors” as those with the most companies on the list, irrespective of the amount invested. Most likely these are going to be all USA funds… but which ones? And the winner is… Google Ventures, with 8 investments. Following closely on their heels with 7 investments we find Peter Thiel’s own Founders Fund. Then we have a four-way tie for 3rd place, with 6 investments each, among Khosla Ventures, Data Collective, Sequoia and Plug and Play, the top Silicon Valley accelerator with branches around the world (among them Plug and Play Spain, located in Valencia). These very days Plug and Play is celebrating its Spring Summit 2020, so I recommend that you stay tuned to see what is brewing in this unicorn powerhouse. Top AI Industries Finally, let’s close by digging into the industry distribution. This is always a hairy topic, as there are so many ways to classify companies, including the usual confusion between Economic Sectors (which are our real interest) and Technologies (methodologies or techniques used) on the one hand, and Business Models (like selling to businesses or to consumers) on the other. I will spare you the clean-up details, but I recommend that you check the supporting notebook. Unsurprisingly, the most popular industry by far is Software (the “mother of all industries” for AI). But it is somewhat surprising to see Manufacturing and Transportation ahead of other sectors where AI is much more talked about, such as Finance, Insurance and Real Estate.
Indeed, Manufacturing includes Robotics and Industrial Automation, while Transportation includes Autonomous Vehicles plus Logistics and Navigation, which are still capturing the lion’s share of Top AI Startup dollars. There is an interesting progression here, from very organized environments (a software program, an assembly line, a route), to reasonably organized ones (Real Estate, Insurance, Finance), to the least organized or, inversely, those in which context variability requires human handling, such as Environment, Education, Agriculture and Human Resources. It is likely that as methodologies continue advancing and algorithms become able to handle higher amounts of chaos, maturing closer to a “general AI” rather than a “context-specific AI”, we will see increased success in these areas that are now at the tail of the distribution. Image by Gerd Altmann from Pixabay Supporting Documentation If you want to explore the supporting documentation and even play with the data directly yourself, check the following links:
https://medium.com/algonaut/the-leaders-of-ai-8eb1391e5ca4
['Isaac De La Peña']
2020-05-06 20:53:17.932000+00:00
['Data Science', 'AI', 'Artificial Intelligence', 'Startups', 'Investments']