Does .NET 5 Deliver on Its Promises?
In a year full of unexpected disruptions, you could be excused if you missed Microsoft’s massive milestone. But here we are — as of November 10th, .NET 5 is an official release, replacing both .NET Core and the .NET Framework. As we explained last year, .NET 5 isn’t just a bundle of new features. It’s the conclusion of a major effort to port .NET to a cross-platform, open source architecture. Essentially, it’s a mission to rebuild .NET without disrupting the developers who currently rely on it for mature, deployed applications. The change isn’t without risk. In fact, it’s a bit like replacing the wheels on a racecar while it’s speeding down the track.

The effort began six years ago, when Microsoft surprised most of the world with the cross-platform .NET Core. Depending on your perspective, .NET Core was either an emerging .NET alternative or a strange shadow-clone version of the framework with its own conventions and headaches. Then, .NET Core swallowed the entire framework. Or at least it got its jaws around the head — up until last year, it was still too soon to tell if it would manage to digest its entire meal. (Everyone who’s worked on an over-ambitious software project recognizes the point when you’ve gone so far on a major change that you can’t back out, but you’ve just realized there’s no way to complete the work on time and with all its promises intact.)

So now that .NET 5 is an official release, what’s the verdict? Did it fulfill its goal? And do its broken promises — and there were a couple — prevent it from succeeding?

The cross-platform promise

The central ambition of .NET 5 is to be a universal version of .NET — one that works for all modern types of applications, and replaces both the .NET Framework and .NET Core. And on this count, .NET 5 delivers. .NET 5 supports every project type it set out to support, with no real asterisks. You can build the full suite of ASP.NET applications (Razor pages, Blazor apps, and Web API services).
It’s a similar story on the desktop side, with support for Windows applications that encompasses Windows Forms and WPF. (Obviously, all these applications rely on the Windows operating system. Despite the cross-platform powers of .NET, you can’t run or develop them on different operating systems.) The limits of the .NET promise haven’t changed either. For example, there are some legacy technologies that aren’t in .NET 5, including:

- ASP.NET Web Forms
- WCF (Windows Communication Foundation)
- WF (Windows Workflow Foundation)

Some developers are understandably angry about these changes, but there are no surprises here — none of these technologies were ever in the .NET 5 roadmap. If you still need to support applications that use them, you’ll probably continue using the last version of the .NET Framework (4.8). If you’re more ambitious, there are community projects trying to patch the WCF and WF gaps.

Blazor and C# in the browser

There’s nothing quite as exciting as brand new tech, and for the last year there’s been nothing newer or more ambitious in the Microsoft developer ecosystem than Blazor WebAssembly, an optimized runtime that lets you run C# code in a web browser. Officially, Blazor had its first release in .NET Core 3.1, but .NET 5 gives it a chance to reach a wider audience, including plenty of developers who are still wondering whether the platform is stable enough for them to start exploring. The answer is yes — with a few caveats. Blazor has definitely evolved past the point where it started, as a proof-of-concept experiment. But it’s still bigger and heavier than pure JavaScript frameworks like React and Vue. And the application model, which closely follows ASP.NET Razor, is a convenience for some and an irritation to others, especially those who don’t already have a background in ASP.NET. Incidentally, there were a number of major Blazor enhancements that developers were hoping would make the .NET 5 cutoff. Most didn’t. We did get lazy loading and CSS isolation.
But if you were waiting for one of these features, prepare to be disappointed:

- Ahead-of-time (AOT) compilation, which should make Blazor applications much faster, although it may require larger downloads.
- True multithreading, which depends on still-uneven browser support.
- Hot reloading, which triggers an automatic recompile when changes are made, and seamlessly shifts to the new version without restarting the application.

All of these features are still in play, and many are likely to show up in .NET 6. Whether Blazor is the best way to take advantage of WebAssembly in a browser — and how bright its future will be — are still up for debate.

See also: The best examples to get started learning Blazor. A deeper look at how Blazor works from our pre-release review, written exactly one year ago.

Harmonizing desktop development

Microsoft is famous for reinventing the wheel — except, instead of a wheel, think “desktop API.” Somehow, in a world that prizes a single codebase and run-everywhere languages like JavaScript, Microsoft has ended up simultaneously supporting three different models for Windows desktop development:

- Windows Forms
- WPF (Windows Presentation Foundation)
- UWP (the ironically named Universal Windows Platform)

Although there’s still no sign of a single technology that can replace them all, Microsoft is trying to break down some of the walls that separate them. They launched Project Reunion, an initiative that will let Windows Forms and WPF applications use the FluentUI bits from UWP. Even more exciting, there’s a possibility that they’ll extend support beyond Windows 10, and all the way back to Windows 8.1. The Project Reunion features were initially slated for .NET 5, but they missed the cut. They’re currently part of the WinUI 3 library, which is floating in preview limbo.

See also: A deeper look at what’s coming in Project Reunion, eventually.

Fusing mobile and desktop

.NET 5 is a release that brings everything under one big developer tent.
That includes the Xamarin technology used for native mobile applications. But here’s the catch. Xamarin didn’t just use a scaled-down .NET runtime, it also used a different UI model — one based on XAML, influenced by WPF, but still thoroughly its own. In other words, Xamarin is one more awkward island. It doesn’t mesh with Blazor on the web side, or with any of the Windows toolkits on the desktop side. Microsoft had a solution for that, too, called .NET MAUI (for Multi-platform App UI). It’s an evolution of Xamarin that allows you to target mobile Android and iOS platforms, and desktop Windows applications (WPF or UWP), with everything magically bundled into a single project. It might even integrate with the world of standard web applications through Blazor. For a while, Microsoft was promising that this ambitious change would happen in time for .NET 5, but it eventually slipped to .NET 6. (They specifically blamed the coronavirus for the delay, if you’re looking for another reason to hate 2020.) The bottom line is that if you’re hoping for an easier way to create native applications on a variety of platforms, you’re stuck waiting. Or, you can consider a third-party tool, like the excellent Uno Platform.

See also: A deeper look at .NET MAUI.

Single-file applications

Under the heading of “nice little things that are actually much more difficult than they seem,” Microsoft has been trying to deliver a true single-file deployment solution for quite some time. In .NET 5, they didn’t quite succeed. The goal of single-file deployment is to package an application and all its dependencies in a single executable file. When you launch the file, the runtime unpacks the resources and loads them dynamically. Simple, right? Not so fast. It turns out that there are operating system and security considerations that complicate the picture. The solution Microsoft settled on generates a true single-file package that works flawlessly on Linux.
But on Windows and macOS computers, you still need to include a few separate runtime files and distribute them alongside your “single-file” executable. Microsoft explains the reasons for this painful compromise here, and plans to revisit the issue again in .NET 6, with no guarantee that the situation will improve.

C# 9 goes functional

As always, .NET 5 also includes updated versions of its core languages, C# 9 and F# 5. The changes in C# aren’t as dramatic as they’ve been in some previous versions (remember the introduction of generics and LINQ?). But they’re still significant. The most notable changes show C# creeping closer to functional programming, with a new feature for immutable data objects and a more powerful expression syntax.

See also: The functional-programming changes in C# 9.

VB slides into irrelevance

For years, we’ve watched C# go from strength to strength. And now .NET 5 makes it explicit — there is only one core, do-everything .NET language, and that’s C#. In second place? You must mean the nicely crafted but very niche F#, a language explicitly designed for functional programming. Its most important contribution just might be the way it keeps nudging C# to add more functional features. As for VB, once the world’s most popular hobbyist language and a side-by-side equal to C#, it’s now little more than a legacy. You can use VB with some older project types, most notably Windows Forms and WPF. (This is an improvement over the non-existent support offered in the most recent version of .NET Core.) But ASP.NET? Not if you want project support and Visual Studio designers. In fact, there’s more ASP.NET support for F# than there is for VB, and C# remains the clear favorite across the board. Here’s a handy table that summarizes the project support you get out of the box in .NET 5:

This change isn’t all bad.
Having extra languages splits the developer ecosystem, and there’s no point encouraging developers to code in VB if the documentation, examples, and .NET communities are all talking in C#. But it’s still a sad ending for one of the world’s most influential languages.

A .NET every year?

Perhaps the greatest success of .NET 5 is that it launched precisely on time. A year ago, its release date was set for November 2020, and the binaries dropped ten days into the month. This is significant because November is about to become a lot more important for .NET developers. Microsoft has made a long-term pledge of releasing a new .NET version in November, every year. Here’s the current release plan:

In Microsoft terminology, .NET 5 is a current release, which means it gets a limited support lifetime that will end a few months after .NET 6 debuts next year. This is unlike Microsoft’s LTS releases, which have a guaranteed support window of 3 years. The last version of .NET Core (3.1) is an LTS release, and the next version of .NET (6) will be one, too. So if you’re working in a government organization or a large company that needs a stronger support policy, now may be the time to plan with .NET 5, but it’s not the time to deploy.

In the end, the most exciting part of .NET 5 isn’t new tech like Blazor, or its cross-platform support, or even its open-source status. It’s the fact that Microsoft has successfully pulled off a critical reboot. They’ve replaced the aging .NET Framework, revitalized the .NET family, and all but guaranteed that their programming tools will thrive for another decade.
https://medium.com/young-coder/does-net-5-deliver-8f3f89193d21
['Matthew Macdonald']
2020-11-17 12:50:05.597000+00:00
['Programming', 'Aspnetcore', 'Dotnet', 'Csharp', 'Microsoft']
How to Become More Curious
Learning is a lot easier when it’s interesting. And it’s interesting, to a large extent, because you’re curious about the subject. Yes, the carrot of career opportunity and stick of exam failures can motivate. But if you really want to learn something, nothing beats curiosity. Yet it’s boredom, not curiosity, that dominates student life. Research shows that students report feeling bored much of the time in class. This makes it harder to pay attention and more painful to learn. How can you boost your curiosity for a new subject?

The Science of Curiosity

Curiosity remains an under-studied phenomenon. Early research focused on now mostly discredited drive-reduction accounts. Curiosity, like hunger, was envisioned as an aversive state that we were driven to reduce. But, if this were true, why would anyone read a murder mystery novel? In 1994, George Loewenstein offered a more modern take in his information-gap theory. This theory argued that curiosity was driven by the gap between what you know and what you’d like to know. While this definition may seem almost tautologically true, there were a few key predictions:

- Curiosity is susceptible to framing effects. Like a figure-ground illusion, if the situation emphasizes a single missing piece, you’re much more curious than if you think you haven’t assembled most of the puzzle.
- Insight-based problems evoke more curiosity than accumulative ones. If you need a single idea to make the entire idea snap into relief, you’ll be more curious than if the answer is only to be found by acquiring a mountain of facts.
- You need to believe you can solve the puzzle.
Social psychologist Albert Bandura’s influential self-efficacy account of motivation argued that to be motivated (or curious) we need to believe we can be successful. If you think a lot of investigation won’t result in an insightful payoff, low curiosity is likely to result. There isn’t a magic formula for curiosity. But there are a few strategies we can apply to make things more interesting.

You Need to Know More to Ask Better Questions

An implication of Loewenstein’s theory was that more knowledge should lead to more curiosity. The person who knows 47 of 50 states is more likely to be curious about which ones she’s missing than the person who only knows three. Research confirms this by noting that knowledge about a topic predicted curiosity for new knowledge. One reason for this is simply that you need to know something before you can ask good questions. Since good (unanswered) questions are the raw material for curiosity, it’s difficult to be curious about something when you can’t ask any questions. Researchers Naomi Miyake and Donald Norman summarize the importance of a knowledge base to curiosity nicely in the title of their paper, “To Ask a Question, One Must Know Enough to Know What Is Not Known”:

“At a research seminar on computer techniques, we noted that beginners at programming (to whom the seminar was addressed) asked few questions and generated few comments. More expert programmers, however, had many questions and, eventually, dominated the discussion.”

This means learning itself creates a positive feedback loop. The more you know about a topic, the more likely you are to have unanswered questions that drive curiosity. Read more books and the books get more interesting.

Start Asking Questions

Curiosity is susceptible to framing effects. Which means you’ll be far more curious when you have a concrete, unanswered question that seems like it shouldn’t be too hard to solve.
The problem is that knowledge is often presented in a way that actively stifles this question-generating approach. Rather than creating a mystery that new knowledge is needed to unravel, most subjects are presented as already solved: “Go ahead and memorize this. Don’t worry, we already proved it’s the correct answer.” To be more curious, you have to reframe what you’re learning in terms of the key mysteries it was developed to decode. What were the burning questions that kept people up at night as they tried to solve the puzzle?

One way to start is simply to ask questions about more of the things you’re asked to take as a given. Why does DNA need to be translated into RNA before it can make things? Why is there a minus symbol in this equation? Why do profits maximize when marginal revenue equals marginal cost? The attitude that leads to more question-asking, and thus more curiosity, is one which recognizes that the world is deeply strange. Only with the benefit of hindsight do the answers we’ve discovered seem obvious. To be more curious, you need to recapture the spirit of those who puzzled over them when they were still unsolved.

Know Where to Get the Answers

If the response to a question is simply, “that’s just the way things are,” or worse, “shut up and memorize,” the outcome is frustration, not curiosity. Thus, the art of asking questions needs to be paired with actually finding the answers. Luckily, this is easier than ever. Online forums, like Quora or Reddit’s Ask Science, offer ways you can ask questions and get expert replies. For many questions, teachers, peers and people around you can often supply the answers you’ve missed. Figuring out the answer for yourself is also satisfying. Some of my greatest joys in math have been getting that breakthrough insight that makes sense of a confusing problem. It can take a little bit of time and playing around, but suddenly having the reason why it must be that way snap into view can be immensely gratifying.
Learning is Dialog, Not Consumption

The attitude that creates curiosity is to see learning as principally driven by asking questions and coming up with answers, not consuming information. While we don’t always have a choice in how knowledge gets presented to us, if you see that there’s always a deeper layer of questions and answers, mysteries and insights, then even seemingly dull topics become a puzzle waiting to be solved.
https://medium.com/swlh/how-to-become-more-curious-67d58c842e8c
['Scott H. Young']
2020-12-02 22:23:30.710000+00:00
['Self', 'Ultralearning', 'Productivity', 'Reading', 'Learning']
Data is Making Hits & Changing the Music Biz; Dawn Ostroff’s Plan to Turn Spotify Into the Ultimate Podcast Hub
How Data Is Making Hits and Changing the Music Industry — www.complex.com
Data is transforming the way the music industry operates. Complex spoke with the analysts on the frontlines, who explained how it all works.

Music Tech Investment Areas You Need to Know — www.billboard.com
The wave of upcoming music tech presents a wealth of opportunity.

Dawn Ostroff’s plan to turn Spotify into the ultimate podcast hub — www.latimes.com
Ostroff believes podcasts can attract new listeners and increase the amount of time people spend on the platform. Audio stories can be accessed on multiple devices while consumers are multitasking.

Social Media, the Modern Day Radio of Music — www.entrepreneur.com
Sharing to earn social currency is the most direct path to building a hit record in today’s music discovery ecosystem. Is your single or album release ready?

Nick Holmstén Out as Spotify Global Head of Music — www.billboard.com
A company spokesperson has confirmed that Nick Holmstén, the streaming service’s global head of music since October 2018, will transition to an advisory role going forward.

‘They Legitimized Buying Views’: How YouTube Ads Impact Latin Music — www.rollingstone.com
The massive streaming service offers a “safe and sanctioned” way for labels to pay for views. Are labels abusing it?

What do music/tech startups REALLY think about working with major labels? — medium.com
In Music Ally’s latest analysis report, we take a look at the three major labels’ strategies around music/tech startups and investment.

UMG Central Europe’s Frank Briegmann Touts Massive Streaming Numbers, Physical Gains at ‘Universal Inside’ — www.billboard.com
Universal Music Group executives, artists and retail partners gathered in Berlin on Wednesday for the label’s annual “Universal Inside” event.

Spotify’s New Campaign Is All About the Joys of Listening in the Car — musebycl.io
In fact, you might never want to leave it.
Betaworks’ next startup camp is focused on audio — techcrunch.com
Startup studio Betaworks is putting out a call for audio-focused startups.

Parcast Launches ‘Horoscope Today’ Podcast Series — www.hollywoodreporter.com
Spotify-owned podcast studio Parcast is launching the ‘Horoscope Today’ series with 12 daily shows for each of the different signs of the zodiac.

Two Words For Radio And Podcasters: Derivative Content — jacobsmedia.com
iHeartPodcast head Conal Byrne preaches “derivative content” — the way great podcasts — and radio stations — can extend their brands.

Libre Wireless Powers Canton’s Lineup of Smart Wireless Music Streaming Multiroom Speakers, Amps, Sound Bars — www.librewireless.com
Libre Wireless announced a broad relationship with Canton across an extensive range of industry leading consumer audio products.

Sign up for the daily newsletter! A quick read of the best Music Streaming News → https://www.getrevue.co/profile/platformstream
✉️ Send news tips, comments to Jeff @ [email protected]
🎵 Want to sponsor an edition of Platform & Stream? Get more info here about sponsored posts & playlists.
https://medium.com/platform-stream/data-is-making-hits-changing-the-music-biz-dawn-ostroffs-plan-to-turn-spotify-into-the-87cdce02e850
['Platform']
2019-09-10 18:19:26.471000+00:00
['Data', 'Music', 'Streaming Music']
Intelligent, realtime and scalable video processing in Azure
1. Introduction

In this tutorial, an end-to-end project is created to do intelligent, realtime and scalable video processing in Azure: a capability that can detect graffiti and identify wagon numbers using videos of trains. Properties of the project are as follows:

- Intelligent algorithms to detect graffiti and identify wagon numbers
- Realtime and reliable way of processing videos from edge to cloud
- Scalable for exponential growth of the number of videos
- Functional project that can be optimized to any video processing capability

The architecture of the project can be depicted as follows:

1. Architecture overview

In this blog this architecture is realized as follows:

2a. Cognitive Services to detect graffiti on trains (Custom Vision)
2b. Cognitive Services to identify wagon numbers (Computer Vision OCR)
3. Azure Functions for parallel processing of videos
4. Power BI for visualization (optional)
5. IoT Edge architecture for auto-tiering data (optional)
6. Conclusion

In this blog, all video processing is done in Azure. Refer to this follow-up tutorial in which the graffiti detection is done on the camera (edge) itself. In the next chapter, Azure Cognitive Services will be deployed.

2. Azure Cognitive Services

Azure Cognitive Services are a set of APIs that can be infused in your apps. They contain intelligent algorithms for speech recognition, object recognition in pictures and language translation. The models are mostly pretrained and can be integrated “off the shelf” in your project. Most models can also be deployed as a container on the edge. In this project, two APIs will be used:

- Custom Vision, which will be used to detect graffiti on trains. This model needs pictures of trains with/without graffiti to learn. This step can be seen as “adding the last custom layer in the neural network of an image recognition model that was already trained in Azure Cognitive Services.”
- Computer Vision OCR, which will be used to identify wagon numbers on trains.
This model does not require training and can be taken off the shelf. In the remainder of this chapter the following steps will be executed:

2a. Train and deploy Custom Vision API to detect graffiti
2b. Deploy OCR Computer Vision API

And the following part of the architecture is realized:

2. Cognitive Services to detect graffiti and identify wagon numbers

2a. Train and deploy Custom Vision API to detect graffiti

Go to the Custom Vision website and sign in with your Azure AD credentials. Once you are logged in, create a Custom Vision project with properties “classification” and “multiclass (single tag per image)”, see also below.

2a1. Create Custom Vision API project

Then download the images in the folder CognitiveServices/CustomVisionImages in the following git project: As a first step, add the graffiti pictures with tag graffiti to your project. Secondly, add the no_graffiti pictures with tag graffiti and then NEGATIVE to your project. Then train the model using the fast track, see also below.

2a2. Train Custom Vision API project

Once you have trained the model, you can test it by clicking on “Quick test” and then selecting an image from the test folder of the git project that was downloaded earlier.

2b. Deploy OCR Computer Vision API

Go to the resource group that was created in step 2a to deploy your OCR Computer Vision API. Click on the add button and type “Computer Vision” in the search box. Select F0 as pricing tier. After you have deployed your Computer Vision API, the resource group will look as follows.

2b1. Resource group after Custom Vision API and Computer Vision for OCR are deployed

In the next chapter, the APIs will be used to detect graffiti and wagon numbers from videos.

3.
Azure Functions for parallel video processing

Once a new video is uploaded (synchronized) to Azure Blob Storage, it shall be immediately processed as follows:

- Azure Blob Storage has a trigger that executes a simple Azure Function that sends a message to an Azure Queue
- The Azure Queue has a trigger that executes an advanced Azure Function that 1) retrieves the video from the blob storage account, 2) takes a frame of the video every second using OpenCV and 3) detects graffiti on the frame, identifies the wagon number and writes the results to a csv file

The Azure Queue step is necessary to be able to process videos in parallel. In case the blob trigger directly triggers the advanced Azure Function, videos are only processed serially. The parallel video processing architecture is depicted below.

3.1. Parallel video processing

In the remainder of this chapter the following steps will be executed:

3a. Install preliminaries for Azure Function with docker
3b. Create Azure Storage account with blob containers and queue
3c1. (Optional) create docker image for Azure Function Blob trigger
3c2. Deploy Azure Function Blob Trigger
3d1. (Optional) create docker image for Azure Function Queue trigger
3d2. Deploy Azure Function Queue Trigger
3e. Run test with video

And the following part of the architecture is realized:

3.2. Steps in blog plotted on architecture. Parallel video processing in bold as next step

The details of the parallel video processing capability can be found in picture 3.1 “Parallel video processing” earlier.

3a. Install preliminaries for Azure Function with docker

In order to create frames from videos, an Azure Function with OpenCV is needed. For that purpose, an Azure Function with Python using a docker image with OpenCV dependencies preinstalled is used. To do this, the following preliminaries need to be installed:

- Install Visual Studio Code
- Install Azure Core Tools version 2.x
- Install the Azure CLI. This blog requires the Azure CLI version 2.0 or later.
Run az --version to find the version you have.

- (Optional, in case you want to create your own image) Install Docker
- (Highly recommended) Before you run the commands in this blog, execute the commands in this tutorial first

3b. Create Azure Storage account with blob containers and queue

An Azure Storage account is needed to upload the videos to and to run the Azure Queue service on which the Azure Function will trigger. Open Visual Studio Code, open a new terminal session and execute the following commands:

az login
az group create -n blog-rtvideoproc-rg -l westeurope
az storage account create -n <stor name> -g blog-rtvideoproc-rg --sku Standard_LRS
az storage container create -n videoblob --account-name <stor name>
az storage container create -n pics --account-name <stor name>
az storage container create -n logging --account-name <stor name>
az storage blob upload -f Storage/ImageTaggingLogging.csv -c logging -n ImageTaggingLogging.csv --account-name <stor name> --type append
az storage queue create -n videoqueue --account-name <stor name>

Make sure that a globally unique name is taken for <stor name> as storage account name.

3c1. (Optional) create docker image for Azure Function Blob trigger

In this step, a simple Azure Function is created that is triggered when a new video is added to the storage account. The name of the video is then extracted and added to the storage queue that was created in step 3b. Open Visual Studio Code, create a new terminal session and execute the following commands (select python as runtime when prompted):

func init afpdblob_rtv --docker
cd afpdblob_rtv
func new --name BlobTrigger --template "Azure Blob Storage trigger"

Subsequently, open Visual Studio Code, select “File”, select “Open Folder” and then the directory afpdblob_rtv that was created in the previous command, see also below:

3c1.
Azure Function Blob trigger

In this project, replace the content of the following files:

BlobTrigger/__init__.py
BlobTrigger/function.json
Dockerfile
requirements.txt

with the content of the github project https://github.com/rebremer/realtime_video_processing/tree/master/AzureFunction/afpdblob_rtv/. The next step is to build the docker image and publish it to a public Docker Hub. Alternatively, a private Azure Container Registry (ACR) can also be used, but then make sure credentials are set. Execute the following commands to publish to Docker Hub:

docker login
docker build --tag <<your dockerid>>/afpdblob_rtv .
docker push <<your dockerid>>/afpdblob_rtv:latest

3c2. Deploy Azure Function Blob Trigger

In this step, the docker image is deployed as an Azure Function. In case you skipped part 3c1 to create your own docker image, you can replace <your dockerid> with bremerov, that is, bremerov/afpdblob_rtv:latest. Execute the following commands:

az appservice plan create --name blog-rtvideoproc-plan --resource-group blog-rtvideoproc-rg --sku B1 --is-linux
az functionapp create --resource-group blog-rtvideoproc-rg --os-type Linux --plan blog-rtvideoproc-plan --deployment-container-image-name <your dockerid>/afpdblob_rtv:latest --name blog-rtvideoproc-funblob --storage-account <stor name>
az functionapp config appsettings set --name blog-rtvideoproc-funblob --resource-group blog-rtvideoproc-rg --settings remoteStorageInputContainer="videoblob" `
AzureQueueName="videoqueue" `
remoteStorageAccountName="<stor name>" `
remoteStorageAccountKey="<stor key>"
az functionapp restart --name blog-rtvideoproc-funblob --resource-group blog-rtvideoproc-rg

When the function is deployed correctly, it appears as follows in the portal:

3c2.1 Azure Function Blob trigger deployed correctly

When you click on Blob Trigger, you can see the code that is part of the docker image. As a final step, add Application Insights (see screenshot) and follow the wizard.
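Before testing, it helps to picture what the two functions in this pipeline actually do: the blob trigger only extracts the uploaded video's name and drops a small message on the queue, while the queue trigger later samples roughly one frame per second from the video. A minimal sketch in plain Python (the function names and the single-field message schema are illustrative assumptions, not the exact code from the linked github project):

```python
import json


def make_queue_message(blob_name: str) -> str:
    # The blob trigger's only job: wrap the uploaded video's name in a
    # JSON message for the queue. The "filename" field is an assumed
    # schema, not necessarily the repo's exact one.
    return json.dumps({"filename": blob_name})


def frame_indices(total_frames: int, fps: float, pictures_per_second: int = 1) -> list:
    # Which frame numbers the queue-triggered function would grab when
    # sampling `pictures_per_second` frames per second of video.
    step = max(1, round(fps / pictures_per_second))
    return list(range(0, total_frames, step))
```

For example, a 4-second clip at 25 fps (100 frames) yields frame indices [0, 25, 50, 75], i.e. one frame per second, matching the numberOfPicturesPerSecond=1 app setting used later for the queue trigger.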
Application Insights enables you to see logging in the Monitor tab. As a test, find the video Video1_NoGraffiti_wagonnumber.MP4 in the git project and upload it to the blob storage container videoblob using the wizard, see below:

3c2.2 Upload blob

After the video is uploaded, the Azure Function is triggered by the blob trigger and a json file is added to the Azure queue videoqueue, see below:

3c2.3 Json file with video name added to queue

3d1. (Optional) create image for Azure Function Queue trigger

In this step, an advanced Azure Function is created that is triggered when a message is sent to the Azure queue that was deployed in step 3c2. Open Visual Studio Code, create a new terminal session and execute the following commands (select python as runtime when prompted):

func init afpdqueue_rtv --docker
cd afpdqueue_rtv
func new --name QueueTrigger --template "Azure Queue Storage trigger"

Subsequently, open Visual Studio Code, select “File”, select “Open Folder” and then the directory afpdqueue_rtv that was created in the previous command, see also below:

3d1.1 Azure Function Queue trigger

In this project, replace the content of the following files:

QueueTrigger/__init__.py
QueueTrigger/function.json
Dockerfile
requirements.txt

with the content of the github project https://github.com/rebremer/realtime_video_processing/tree/master/AzureFunction/afpdqueue_rtv/. The next step is to build the docker image and publish it to a public Docker Hub. Alternatively, a private Azure Container Registry (ACR) can also be used, but then make sure credentials are set. Execute the following commands to publish to Docker Hub:

docker login
docker build --tag <<your dockerid>>/afpdqueue_rtv .
docker push <<your dockerid>>/afpdqueue_rtv:latest

3d2. Deploy Azure Function Queue Trigger

In this step, the docker image is deployed as an Azure Function. In case you skipped part 3d1 to create your own docker image, you can replace <your dockerid> with bremerov, that is, bremerov/afpdqueue_rtv:latest.
Execute the following commands:

az functionapp create --resource-group blog-rtvideoproc-rg --os-type Linux --plan blog-rtvideoproc-plan --deployment-container-image-name <your dockerid>/afpdqueue_rtv:latest --name blog-rtvideoproc-funqueue --storage-account <stor name>

az functionapp config appsettings set --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg --settings ` remoteStorageAccountName="<stor name>" ` remoteStorageAccountKey="<stor key>" ` remoteStorageConnectionString="<stor full connection string>" ` remoteStorageInputContainer="videoblob" ` AzureQueueName="videoqueue" ` remoteStorageOutputContainer="pics" ` region="westeurope" ` cognitiveServiceKey="<key of Computer vision>" ` numberOfPicturesPerSecond=1 ` loggingcsv="ImageTaggingLogging.csv" ` powerBIConnectionString=""

az functionapp restart --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg

When the function is deployed correctly, it appears as follows in the portal.

3d2.1 Azure Function Queue trigger deployed correctly

Again, select to add Application Insights (see top screenshot); you can select the same Application Insights resource that was created for the blob trigger. Application Insights can be used to see the logging of the QueueTrigger in the Monitor tab. In case the Azure Function Queue Trigger ran successfully, the message in the Azure queue is processed and the log of pictures can be found in the pics container, see below:

3d2.2 Videos logging in frames

The logging can also be found in the file logging/ImageTaggingLogging.csv. In the next part the output is visualized in Power BI.

4. Power BI for visualization (optional)

Power BI aims to provide interactive visualizations and business intelligence capabilities with an interface simple enough for end users to create their own reports and dashboards. In this blog, it is used to create a streaming dashboard that creates alerts when graffiti is detected, accompanied by the wagon number.
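The alerts reach the dashboard because the Azure Function Queue Trigger pushes rows over HTTPS to a Power BI streaming dataset, which is set up in the steps that follow. A hedged sketch of such a push using only the standard library; the field names mirror the streaming dataset defined in step 4b, but the repository's publishPowerBI() implementation may differ, and the values below are made up for the example:

```python
import json
from urllib import request

def push_rows(push_url, rows):
    """POST a list of rows to a Power BI streaming dataset push URL
    (the value that ends up in the powerBIConnectionString setting)."""
    req = request.Request(
        push_url,
        data=json.dumps(rows).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

# One illustrative row, matching the dataset fields defined in step 4b
row = {
    "location": "Amsterdam",
    "track": "1",
    "time": "2019-07-15T06:07:07Z",
    "trainNumber": "84-74-943",
    "probGraffiti": 0.92,
    "caption": "a train with graffiti",
    "sasPictureTrainNumber": "<sas url of train number picture>",
    "sasPictureGraffiti": "<sas url of graffiti picture>",
}
```

Calling push_rows(push_url, [row]) makes the row appear in the live dashboard within seconds, which is what enables the real-time alerting described above.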
In the remainder of this chapter the following steps will be executed:

4a. Install preliminaries for Power BI
4b. Create Streaming data set
4c. Create dashboard from tile
4d. Add Power BI link to Azure Function

And the following part of the architecture is realized:

4. Steps in blog plotted on Architecture. Visualize output in bold as next step

Notice that it is not necessary to visualize the output in order to do the final step of this blog (IoT Hub).

4a. Install preliminaries for Power BI

In this blog, all datasets and dashboards will be created in Power BI directly, and it is therefore not necessary to install Power BI Desktop. Go to the following link to create an account:

4b. Create Streaming data set

Once you are logged in, go to your workspace, select Create and then Streaming dataset. This streaming dataset is pushed from your Azure Function Queue Trigger.

4b1. Create streaming dataset

Select API {} in the wizard and then add the following fields (the fields can also be found in __init__.py of the Azure Function Queue trigger, in the method publishPowerBI()):

location (Text)
track (Text)
time (DateTime)
trainNumber (Text)
probGraffiti (Number)
caption (Text)
sasPictureTrainNumber (Text)
sasPictureGraffiti (Text)

4c. Create dashboard from tile

In the next step, a live dashboard is created based on the streaming dataset; it is automatically refreshed once new data comes in. First, create a report and a tabular visual, and simply add all fields to it. Subsequently, select pin visual to create a live dashboard of the visual, see also below.

4c1. Create streaming dataset

This way, multiple visuals can be created in a report and published to the same dashboard. See below for an example dashboard.

4c2. Example dashboard

4d. Add Power BI link to Azure Function

Finally, the Power BI push URL needs to be added to your Azure Function Queue trigger such that data can be published. Click on the … of your streaming dataset, select API info and copy the URL, see below.

4d1.
API info

Subsequently, add the Power BI push URL to your Azure Function Queue Trigger and restart the function, see below.

az functionapp config appsettings set --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg --settings ` powerBIConnectionString="<Power BI push URL>"

az functionapp restart --name blog-rtvideoproc-funqueue --resource-group blog-rtvideoproc-rg

Remove the video Video1_NoGraffiti_wagonnumber.MP4 and upload it again to the videoblob container of your blob storage account. This will push data to your Power BI dashboard.

5. IoT edge for auto-tiering data (optional)

Azure Blob Storage on IoT Edge is a light-weight, Azure-consistent module which provides local block blob storage. With the tiering functionality, data is automatically uploaded from your local blob storage to Azure. This is especially useful in scenarios where 1) the device (e.g. a camera) has limited storage capacity, 2) there are lots of devices and data to be processed, and 3) internet connectivity is intermittent. In this blog, a camera is simulated on an Ubuntu VM that uses Blob on Edge.

In the remainder of this chapter the following steps will be executed:

5a. Create IoT Hub and Ubuntu VM as Edge device
5b. Add module Blob Storage to Edge device
5c. Simulating camera using Edge device

And the following part of the architecture is realized:

5. Steps in blog plotted on Architecture. IoT Hub Edge in bold as next step

5a. Install preliminaries for Azure Blob Storage on IoT Edge

In order to use Azure Blob Storage on IoT Edge, the following commands need to be run (for more detailed information, see here).
az extension add --name azure-cli-iot-ext

az vm create --resource-group blog-rtvideoproc-rg --name blog-rtvideoproc-edge --image microsoft_iot_edge:iot_edge_vm_ubuntu:ubuntu_1604_edgeruntimeonly:latest --admin-username azureuser --generate-ssh-keys --size Standard_DS1_v2

az iot hub create --resource-group blog-rtvideoproc-rg --name blog-rtvideoproc-iothub --sku F1

az iot hub device-identity create --hub-name blog-rtvideoproc-iothub --device-id blog-rtvideoproc-edge --edge-enabled

Run the following command to retrieve the key:

az iot hub device-identity show-connection-string --device-id blog-rtvideoproc-edge --hub-name blog-rtvideoproc-iothub

And add this key to your VM using the following command:

az vm run-command invoke -g blog-rtvideoproc-rg -n blog-rtvideoproc-edge --command-id RunShellScript --script "/etc/iotedge/configedge.sh '<device_connection_string from previous step>'"

When your IoT Hub and edge device are created correctly, you should see the following in the portal.

5b. Add module Blob Storage to Edge device

In this step the Blob storage module is installed on the edge device. Select your edge device and follow the steps in the tutorial using the Azure Portal. In this, use the following Container Create Options:

{
  "Env": [
    "LOCAL_STORAGE_ACCOUNT_NAME=localvideostor",
    "LOCAL_STORAGE_ACCOUNT_KEY=xpCr7otbKOOPw4KBLxtQXdG5P7gpDrNHGcrdC/w4ByjMfN4WJvvIU2xICgY7Tm/rsZhms4Uy4FWOMTeCYyGmIA=="
  ],
  "HostConfig": {
    "Binds": [
      "/srv/containerdata:/blobroot"
    ],
    "PortBindings": {
      "11002/tcp": [{"HostPort": "11002"}]
    }
  }
}

and the following "set module twin's desired properties":

{
  "properties.desired": {
    "deviceToCloudUploadProperties": {
      "uploadOn": true,
      "uploadOrder": "OldestFirst",
      "cloudStorageConnectionString": "<your stor conn string>",
      "storageContainersForUpload": {
        "localvideoblob": {
          "target": "videoblob"
        }
      },
      "deleteAfterUpload": false
    }
  }
}

If everything is deployed successfully, the following should be in the portal:

5b.
Blob on Edge successfully deployed

You can also run the following commands from the CLI to see if everything is installed correctly:

ssh azureuser@<<public IP of your Ubuntu VM>>
sudo systemctl status iotedge
journalctl -u iotedge
cd /srv/containerdata
ls -la

If everything is deployed successfully, we can run a camera simulator that uploads a file to your local blob storage in the next part.

5c. Simulating camera using Edge device

In the final part of this blog, we will use a camera simulator that puts a file on the Edge device. As a first step, you need to open inbound port 11002 of your Ubuntu VM. Find the Network Security Group (NSG) of your VM and add port 11002, see also below:

5c1. Add port 11002 to NSG

Run the code from the github project in CameraSimulator/CameraSimulater.py. In this project, replace the IP address of your Ubuntu VM and the location of the video file you want to upload. This simulator uploads a video and triggers everything that was done in this tutorial, that is, it 1) syncs the video to the storage account since auto-tiering is enabled, 2) triggers the blob trigger and queue trigger functions that process the video, 3) invokes cognitive services to detect graffiti and identify the wagon number and 4) pushes the results to the Power BI dashboard, see also below.

5c2. End result project

6. Conclusion

In this blog, an end-to-end project was created in order to do intelligent, realtime and scalable video processing in Azure. A capability was created that can detect graffiti and identify wagon numbers using videos of trains. The following Azure services were used:

Cognitive services were used as intelligent algorithms to detect graffiti on trains (custom vision API) and OCR to identify wagon numbers (computer vision API)

Azure Functions with Python and docker were used to process videos in realtime in a scalable manner

Azure Blob storage and edge computing were used to process video reliably from Edge to cloud.
Power BI was used to visualize the output using streaming data in dashboards

In this blog all video processing is done in Azure. In a follow-up to this blog, the graffiti detection model will be deployed on the camera (edge), which can save data transfer and can be cost-beneficial. See this tutorial for how this is done in a different scenario. Finally, see also the architecture of the project depicted below:
https://towardsdatascience.com/intelligent-realtime-and-scalable-video-processing-in-azure-201f87104f03
['René Bremer']
2019-07-15 06:07:07.039000+00:00
['Programming', 'Data Science', 'Azure', 'IoT', 'Artificial Intelligence']
After the crisis: let’s fix procurement.
How do you solve a problem like procurement?

So, if we were to redesign procurement for the 21st century, to try to imagine a system that is faster, cheaper, more effective, more competitive, more open, gets better public value and provides better protection against corruption, what might that look like? Well, I don't know. But if you've read this far, you're probably interested enough in this problem that you might have some ideas. These are mine:

Digital platforms

The first and most obvious move is to use the web to solve the bandwidth problem. It's what the web does best. Government can learn from the effectiveness of platforms like Amazon, eBay, Airbnb, Alibaba and the App Store; all made possible by standardised template agreements, simple protocols and well-designed digital interfaces that aim to get the transaction costs as close as possible to zero. The Digital Marketplace and GCloud framework are a first step towards this. The objective should be to create truly open markets – that is, markets where any new entrant can join and quickly prove themselves capable of delivering. As far as possible, these could shift from filling in long application documents, towards a more 'ratings'-like approach, where a supplier can easily get a small first order and prove themselves. The real proof is always in the pudding, and even the smallest portion tells you more than whole books of recipes. Right now, a busy, stressed NHS care trust executive should be able to get online, and order 200 face shields each from 10 companies, in less than 10 minutes. However, what makes web platforms effective is not just the frictionless digital infrastructure that lets buyers connect with sellers and vice versa. Their success lies in the power of the platform owner to shape the market: to set standards and enforce rules of engagement. Uber do it. What if government did too? 2.
Rules-based procurement

This ability to aggregate the public sector's combined purchasing power to set the rules of the market is the single most powerful tool that government is not using. Let's return to the example I mentioned earlier, of IT contracts. A year or so ago, in collaboration with Connected Places Catapult, Tech UK and with contributions from a range of experts and suppliers, we drafted a checklist of 15 simple but robust basic checks for any local government IT contract. It included things like a customer's right to extract their own data at any time, and a ban on any contract longer than 2 years. We realised that if government were to make all public sector IT contracts conditional on meeting those 15 rules, government could prevent most, if not all, of the kinds of market failure we see across public sector IT every day, where government ends up paying millions for rubbish, outdated software. In other words, there is no such thing as 'The Market'. There are many possible markets, and government has more power than it thinks it has to shape markets where it is almost impossible for them to go too far wrong. I would argue that not only does government have an opportunity to use its platform power more effectively, it in fact has a moral obligation to do so. Of course, there were some (not as many as you might think, I should add) murmurs from the old suppliers who said 'if you make the rules of the game too stringent, we won't play'. But that's the whole point of competition. If the old companies aren't willing to play, they leave the market open to younger, smaller, hungrier, more innovative companies who will. And believe me, they will. I've written about this before under the concept of 'Democracy as a Platform'. (Additional note: In other domains, this could go further.
For example, in cases where a contract will inevitably create a temporary monopoly, or where there is a particular urgency to move fast in a crisis, bids could be limited to non-profit companies (no profits to shareholders) with wage ceilings.)

3. Open, modular contracting

Another approach we've been exploring, particularly in relation to procurement of buildings, is to shift away from the practice of bundling everything together (R&D, finance, delivery, risk etc) into a massive contractual black box, and instead doing the exact opposite: breaking everything you want to buy into small, separate, predictable, transparent modules, each of which is documented for all to see, and can be procured from any (or several) of a range of suppliers at any time. Let me give an example from our work on WikiHouse. In recent years, if a council wanted to build social housing on a site, they might go to a single contractor to do the whole thing. Of course, there are a limited number of construction firms capable of taking on a task and risk of this scale, so the bids come in high. All the design IP, costings, task data etc is black-boxed, kept by each contractor, whose costs are largely based on guesswork. So when the next project comes along, nothing gets much better, cheaper or more predictable. Councils are stuck in groundhog day, re-tendering to the same group of construction firms, and starting every project from scratch. Digital manufacturing allows us to change this, because it allows construction to be broken into clear, predictable tasks that almost anyone can do. So what if, instead of going out to tender, the council were to start by paying an R&D team to develop a customisable house type design, with an IP agreement that allows them to use that design as many times as they please (this might be an open source or a commercial licence).
Every component, every task is predictable and separately documented, all recorded in spreadsheets and assembly manuals that everyone can see. Prospective suppliers (who might be small shop-fitting companies down the road) then provide rules of thumb for the costing of each task (e.g. '£200 per day' or '£21 per sheet of plywood'). These rules of thumb are their 'bid' that can be used to cost a task. If the customer offers them the job, they don't get to change their bid, they only get to say 'yes' or 'no'. If they do accept, they are obliged to measure as they go and feed back into the design and documentation, so the predictability of cost and timescales gets better and better over time, for everyone to see. This also means that instead of being stuck with the same old large suppliers, the council can grow an open network of SMEs capable of delivering the same product or service, and switch between them at any time. Continuous competition, continuous collaboration and continuous agile innovation. This approach has the additional benefit that, by taking on additional risk (and then using feedback to reduce it), the customer also regains control over design quality. The contractors' incentive to drive down quality has been politely removed from the equation. This approach may not work for everything, but I suspect it could be applied to a huge variety of procurement tasks. It could certainly be used in the case of the COVID-19 PPE. (Update: After I posted this, Richard Pope shared with me this post on pioneering work done by 18F on Modular Contracting. It's very much worth a read.)

4. Flat pricing

The final idea — which certainly won't apply to every situation, but might apply to some — involves flipping the game of bidding itself upside-down. The traditional contracting mindset is that you invite bids on price from different suppliers in order to 'let the market find the price point'. You then pick one (often the second-to-lowest bidder).
The trouble with this, as we have already explored, is that it creates a strong incentive for suppliers to bid low to win the job, then immediately engage in a race-to-the-bottom on quality or resilience. In the end, the money saved by getting suppliers to bid low often turns out to be a false saving. But in many situations it may increasingly become possible to use shared data to predict roughly what the price point should be, or what the customer can viably afford for it to be. So what if, instead of getting suppliers to bid on price, we were to fix the price across all potential suppliers, but then invite them to outbid each other on quality or performance? In effect, to trigger a race to the top on quality and other social or economic outcomes. It's a way of extending an open invitation to all possible suppliers, including new entrants, and saying 'here's the money that's on offer, what could you do with it?' Of course, if no suppliers step forward, then the price would have to be raised until several companies offer their services.
https://alastairparvin.medium.com/after-the-crisis-lets-fix-procurement-428c598fb558
['Alastair Parvin']
2020-05-12 09:53:34.728000+00:00
['Government', 'Politics', 'Covid-19', 'Design', 'Systems Thinking']
Think you have a procrastination problem?
Think you have a procrastination problem? Behold six famous writers who procrastinated worse than you Photo courtesy of Michael Vrba on Unsplash When it comes to procrastination, I am a member of an elite class. In fact, I’m writing this piece to avoid working on a larger, more heartfelt story detailing my own struggles with procrastination. If you’re a writer, you likely struggle with procrastination, too. Like me, you probably look for ways to avoid doing the work in front of you, which is how you stumbled upon this piece. So rather than checking your Medium stats for the thousandth time, or organizing your desktop, or doing whatever it is you do to avoid writing, take a moment to behold six writers who probably procrastinated even worse than you. Douglas Adams Douglas Adams’ ability to avoid writing was considered legendary. “I love deadlines,” he once said. “I love the whooshing sound they make as they go by.” Despite completing nine books before his death, Adams is said to have hated the writing process. He would spend entire days in bed or in the bath to avoid writing. Rather than addressing his problems with procrastination head-on, Adams would order his publishers and editors to incarcerate him and scowl at him until he met his writing obligations. When struggling to finish So Long, and Thanks for All the Fish, Adams’ editor reportedly locked him in a hotel room to ensure that Adams had nothing to distract him from finishing his work. During those three weeks, the editor had food and drink delivered to Adams to prevent him from leaving the room. According to his friend Steve Meretzky, Adams “raised procrastination to an art form. Hitchhikers Guide would never have gotten done if I hadn’t gone over to England and virtually camped out on his doorstep.” Adams reportedly wanted his gravestone to read, “He finally met his deadline.” Samuel Taylor Coleridge Eighteenth century poet Samuel Taylor Coleridge was among the most infamous procrastinators of all time. 
Throughout his writing career, publishers would publicly promote imminent pieces from Coleridge that ultimately failed to appear. Scholars studying the literary works of Coleridge find that he left behind a trail of fragments and incomplete projects. While often brilliant, most were doomed to obscurity. Coleridge is best known for writing The Rime of the Ancient Mariner and Kubla Khan. But Coleridge never actually finished Kubla Khan. He contended that it was based on an opium-inspired dream that was interrupted because a “Person from Porlock” came along. Coleridge himself described his procrastination as “a deep and wide disease in my moral Nature . . . Love of Liberty, Pleasure of Spontaneity, these all express, not explain, the fact.” As Molly Lefebure described him in her book, A Bondage of Opium, “his existence became a never-ending squalor of procrastination, excuses, lies, debts, degradation, failure.”

Margaret Atwood

The author of The Handmaid's Tale describes herself as a “world-class procrastinator.” Atwood says her daily routine consists of puttering around and stressing throughout the morning until mounting anxiety finally drives her to begin writing around three in the afternoon. “I procrastinated for about three years about starting The Handmaid's Tale,” Atwood once said. “I tried to write a more normal novel instead because I thought it was just too batty.” Despite her struggles with procrastination, Atwood still manages to get her work done. During her five-decade career, she has written 14 novels, 9 short story collections, 16 volumes of poetry, 8 children's books and 10 full-length non-fiction works.

Truman Capote

“I am a completely horizontal author,” Capote said in an interview with The Paris Review. “I can't think unless I'm lying down, either in bed or stretched on a couch and with a cigarette and coffee handy. I've got to be puffing and sipping.
As the afternoon wears on, I shift from coffee to mint tea to sherry to martinis.” Capote, who wrote In Cold Blood and Breakfast at Tiffany’s, said he avoided writing with a typewriter. “Not in the beginning. I write my first version in longhand (pencil). Then I do a complete revision, also in longhand. Essentially I think of myself as a stylist, and stylists can become notoriously obsessed with the placing of a comma, the weight of a semicolon. Obsessions of this sort, and the time I take over them, irritate me beyond endurance.” Victor Hugo The French poet and novelist, whose works included the epic novels Les Misérables and The Hunchback of Notre Dame, struggled mightily with procrastination. In acknowledgment of his weakness, Hugo employed a unique tactic to keep himself on task: he had his servant strip him naked in his study, take away his clothes, and then leave him alone until a predetermined time. Confining himself to the study without clothing was Hugo’s valiant effort to avoid the temptation to go outside. With nothing to do and nowhere to go, Hugo found the wherewithal to complete his work. Herman Melville Melville began writing Moby Dick in February 1850 and finished 18 months later— a full year later than he had planned. The American Romantic author reportedly had his wife chain him to his desk while he was struggling to finish the epic novel. It was worth the inconvenience. The book is revered as one of the greatest American novels. William Faulkner said he wished he had written the book himself, and D. H. Lawrence called it “one of the strangest and most wonderful books in the world” and “the greatest book of the sea ever written.” The book’s opening line, “Call me Ishmael,” is considered among the best ever written.
https://medium.com/better-advice/think-you-have-a-procrastination-problem-ac3654e6d89a
['Tom Johnson']
2020-11-04 12:21:23.568000+00:00
['Writing Tips', 'Self Improvement', 'Writing', 'Advice', 'Procrastination']
Solution Architects Guide to Event-Driven Integration
Photo by Crystal Kwok on Unsplash

As a solution architect, you need to keep an eye on technology trends and decide when is the right time for your organisation to get involved, and I would like to draw your attention to event-driven integration. This is an approach to integration that developers and architects are starting to recognise; the recent State of API Technology¹ report concluded that developers believe event-driven integration is the most important category in the integration strategy of their business.

Cloud Elements The State of Integration Report 2020

Event-driven integration is an evolution away from traditional hub-and-spoke integration patterns like the ESB, which is becoming an anti-pattern² for agile cloud-native integration. Agile cloud-native applications require faster change than can typically be delivered by the centralised, monolithic integration teams with their ESBs from traditional vendors. Cloud-native applications and distributed data silos also face a real-time data integration challenge, where a change in one data system needs to be simultaneously reflected in multiple other systems. This is generally hard and expensive to achieve with synchronous REST-based API integration patterns: development teams have to maintain that state (REpresentational State Transfer) in their system APIs³, and because these systems are based on request and response, they have to poll for change or set up specific processes to look for it. I have also observed that COVID-19 is driving digital transformations⁴ and fuelling demand for real-time application integration and data movement, giving systems the ability to react to a state change in real-time.
An event-driven integration pattern removes the need for clients to ask repeatedly if a given system has new data available (changed state). Instead, it is built on a Publish-Subscribe model⁵, where the client subscribes to events and the publisher produces the event (data) without caring how many subscribers will consume it. Event-driven integration architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events. The event represents something that happened within a system, a user action or even an action caused by existing request/response interfaces. I have included a made-up example below of a COVID-19 test result, which includes the who, what, when and where of the event. For event-driven integration, there is no need to ask a given system for more information or to enrich the data any further, known as Event-Carried State Transfer⁶. The associated data will then be delivered to other systems that subscribe to the event, and multiple outcomes are possible in real-time.

Example Event based on a COVID-19 Test result

I will continue with the COVID-19 testing result example. The system that generates the testing event publishes it to an event broker; consumers subscribe to a topic, and then different outcomes are possible. The event can be consumed by a Contact Tracing System to initiate contact tracing; by a data warehouse for important data analysis; the person who had the test can be notified of their test result; and local health units can be alerted that there is an outbreak in their area, all in real-time. This is extremely powerful and shows what is possible with event-driven integration, and these consuming applications can be small agile components that can be developed quickly to adapt and react to change, which is critical in COVID-19 contact tracing for instance.
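The original post shows the example event as an image; an event of that kind might look like the JSON below. The field names are an illustration, not the exact schema from the figure:

```python
import json

# An event that carries its full state (Event-Carried State Transfer):
# subscribers such as a contact tracing system, a data warehouse or a
# notification service can act on it without any further lookups.
test_result_event = {
    "eventType": "CovidTestResult",       # what happened
    "eventTime": "2020-10-01T09:30:00Z",  # when
    "testLocation": "Sydney",             # where
    "personId": "ABC-123456",             # who
    "result": "negative",                 # the carried state
}
serialized = json.dumps(test_result_event)
```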
COVID-19 Event Broker example architecture

As with all cloud-native distributed systems, sprawl and lack of visibility can become an issue as publishers and subscribers proliferate; this was a problem API integration had at first as well, and it is why API Management came into existence. It is being addressed in the event-based architecture world with AsyncAPI⁷ and CloudEvents⁸, which are open-source initiatives to create industry standards for defining events and asynchronous APIs. It is harder than RESTful API Management, as that has a single protocol, HTTP, compared to the many options in the event-based world. This is a good initiative, backed by some large players in this market, and I feel that open source communities need this to be successful. If you want to get started with event-based integration there are many options, and it can be confusing knowing which one is right for you. I will try to give some guidance, but I do not have a vendor agenda and will give my impartial advice based on my experience using the different solutions. If your business is a digital native and cloud-based on one cloud provider, then it is easy, as the providers all have their own event brokers that work on their cloud and integrate well with their various PaaS offerings, making it easy to create event-driven applications and even event-driven Serverless functions. I do not want to rank or rate the cloud event brokers but will simply list Azure Event Grid, AWS EventBridge and Google Cloud Pub/Sub; they all have their strengths and weaknesses but are a great way to get started with event-based integration. However, if you have a multi-cloud strategy or have on-premise data centres, then you will want an event broker that spans all of these, and you should have a look at Confluent Kafka and Solace PubSub+, which are robust event brokers that can handle large volumes of data but are actually very different animals under the hood.
The differences come from their origins: Kafka⁹ was created by LinkedIn and is a log-based broker designed for handling large volumes of LinkedIn posts with eventual consistency. The Solace PubSub+ broker¹⁰, on the other hand, has its origins in investment banking, where it had to handle large volumes of financial transactions with guaranteed delivery; it can handle impressive throughput and can buffer data, but doesn't store it all like Kafka does. There is another option, and that is to go it alone with open source and no support, doing it yourself with a team of skilled engineers. Then you have a few options like Apache Kafka, NATS, Apache Pulsar and Solace PubSub+, and if you really want some choices then have a look at the CNCF streaming and messaging¹¹ category, where there are more than a few options to get you started. In summary, I believe that this approach is another useful tool in the solution architect's tool belt, so that you don't always have to use the hammer (the traditional ESB hub-and-spoke approach). Event-driven integration should coexist with API-driven integration and can enable faster and more agile integration with the distributed data silos present within the business. In traditional system integrations, we have to handle the complexities and limitations of the systems of record, and that is not always a quick process. Whereas if we can identify business events like Purchase Orders, Test Results, Bookings, Login Events, Customer Interactions, etc., then we can start to look at changing the business processes associated with them and make it easier for others to do business with us, no longer limited by the complexity of our systems of record.
https://medium.com/weareservian/solution-architects-guide-to-event-driven-integration-d9118bd75784
['Martin Arndt']
2020-10-15 23:24:17.277000+00:00
['Cloud Native Application', 'Event Driven Architecture', 'Solution Architect', 'Microservices', 'Integration']
6 Steps to Create an Effective Landing Page!
Section 1: Planning

Step #1: Research

Before you can lay down code for your landing page, you need to think carefully about how you want your landing page to look and feel. It is best to also research your users' behaviors and pain points for the type of website you are creating.

Step #2: Wireframe

Equipped with knowledge, it's time to transition into our next step: wireframing. This step is easily the hardest for a lot of people, because this is the first time that you are "putting pen to paper" to establish how you want your site to be laid out. A wireframe is a visual template that represents the framework of your site. For this wireframe, I took a slightly untraditional route and used Google Drawings instead of an established wireframing app or website. I use this method at times because it's quickly accessible for me and anyone I need to share my wireframe with, and it has everything I need to quickly spin up a low-fidelity wireframe. This wireframe is just for guidance purposes, to form a template for how we want to lay out our page.

Step #3: Picking Colors

My favorite part: colorrrrssss! The best site that I have used to generate color palettes is coolors.co. You should pick a color palette that matches the theme of your site. How do you want your user to feel when they come to your landing page? For this tutorial, I'm using this color palette: This palette contains earthy tones, which produce a calming effect.

Section 2: Coding & Content

Step #4: Building the Structure

Now, here is your bread and butter…coding. First, we will start with the HTML for the navigation bar, main content and the footer. You can think of HTML as the foundational structure for a house: it lays out the skeleton for our code. Below, you will see the HTML for the navigation bar, main content, and the footer. I made a navigation bar and footer per the wireframe, and for the main content I utilized CSS grid to organize and quickly position the content.
One of the eye-catching elements that was added is the video that plays in the background. On line 47 is where I have the video tag; inside the tag you can see where I am pulling in the video and providing additional information on how I want the video to run. I chose a video from the site Pexels, which features high-quality videos that you can utilize for your site! Remember: the HTML is just for structural purposes; the CSS is going to determine the styling for your site.

Step #5: Styling the Structure

In CSS, we will style our HTML code to create the feel for our landing page. Below, you will see the CSS for the navigation bar, main content, and the footer. Now, I will go over some of the design decisions I made and the reasons behind them:

Hero Video: For a blog site, I really wanted an impactful hero. Once I saw the video on Pexels it appealed to me as triumphant, earthy, and calming. It matched the aesthetics of what I wanted the landing page to portray.

Text Color & Font: Due to the busy background, I needed an easily readable font and a bright color to contrast with the video. I decided to use a font in the sans-serif family because they are simple, clean, and easy to read. Also, all colors on the page are the same bright tan because I wanted to ensure that the text was constantly viewable throughout the loop of the video.
Symmetrical Alignment: Blog sites are most often very structured. I wanted to keep that concept by centering all the elements on the page. On line 43 in the CSS code, you can see where I also added additional gap spacing to the grid cells to take up a bit more white space on the page. This still leaves a good amount of room to view the video in the background.

Call-To-Action (CTA) Button: All effective landing pages will have a CTA button, because the purpose of the page is to get the user to initiate the service you are providing.

Step #6: Adding Content

You want to add content to your site that is appealing and relevant to your user. For the purposes of making an effective landing page, you should showcase your best work and/or capture the essence of what you are offering on your site in a concise way. I staged our demo as a blog site, so I made sure to have a section where we could showcase an awesome blog and use a CTA button to ask the user to sign up to view more content on our site.
https://medium.com/the-innovation/6-steps-to-create-an-effective-landing-page-9222f5464073
['Adrianna Isom-Owen']
2020-07-24 17:24:35.140000+00:00
['Design Thinking', 'CSS', 'Landing Pages', 'HTML', 'Design']
Google’s AR Design Guidelines suffice while Apple’s fall short
The 3D industry hit a huge turning point last summer when Google and Apple introduced mobile augmented reality (AR) platforms. In a matter of weeks, the center of gravity for 3D UX design shifted to mobile. For the first time, anyone familiar with mobile app development could build immersive 3D experiences, with the potential to reach half a billion Android and iOS users (instead of just a small pool of VR headset owners). These platforms, ARCore from Google and ARKit from Apple, democratized 3D design for the masses. But enthusiasm quickly gave way to more practical concerns: What tools and documentation were available to facilitate the embrace of 3D? How could mobile app designers and developers, accustomed to mature tools that formed (mostly) integrated workflows, begin to design for 3D? Not only did they need to learn the important differences between designing for 2D and 3D, but they also had to wade through the hodgepodge of tools and byzantine workflows borrowed from the gaming, video/film entertainment, architecture and engineering disciplines. To answer designers' first concern, both Apple and Google quickly released augmented reality design guidelines. Apple's guidelines are the more modest of the two in terms of both content and ambition; these appear under the Augmented Reality entry in their Human Interface Guidelines. Google's Augmented Reality Design Guidelines (GARDG) caught our attention for their greater breadth of topics, which seem to draw inspiration from Google's real-life experience designing and building apps like AR Stickers and Just a Line. But both guidelines ultimately fall short of the complexity and ambition expressed by many designers. Apple and Google limit their focus to simple, single-scene applications and make no allowance for complex mechanics, or really anything beyond simple object placement and sticker-like functionality.
This doesn't meet the needs of anyone building apps that include interactivity like:

- Object selection
- Conditional behaviors
- Branching scene flows or storyboards driven off of user behavior
- Movement between scenes using teleportation
- Portals
- Physical gestures

Both sets of guidelines lack any mention of multi-scene use cases, which automatically excludes many modes of interactivity or conditional behavior that lead to transitions, complex or more interesting changes of state, personalization, and ultimately a deeper, more immersive experience. Similarly, there is no discussion of animations (a common topic in our interviews with designers), either triggered or timed, or of the notion of a shared or collaborative environment. The latter is one we frequently encounter: designers want to allow remote collaborators and clients to see their AR prototypes in the environment for which they were intended, and to provide feedback, all in real time. Even in the relatively tame realm of static object creation and placement, designers are already searching for the best way to design for complex behaviors, such as selecting objects that might be hidden by other objects. That said, when it comes to object placement (the most basic interaction in AR), the Google guidelines make assumptions about optimal object placement range (within the reach of the user) that we see no reason to codify at this time. What about throwing objects as a method of placement, or pointing, grabbing and interacting with objects at a distance? However, compared to Apple's guidelines, which cover object placement only in relation to ARKit surface plane detection, Google's treatment looks exhaustive. Not only are there sections covering "tap to place," "drag to place," and "free placement" methods, Google also includes a section on creating a sense of realism that briefly touches on the use of physics in object placement.
Despite what it's missing, Google's Augmented Reality Design Guidelines (GARDG) is a good starting point, with more practical advice and a more pragmatic layout. Since Google launched it in the summer, there have been a lot of new additions, like an excellent new section on UI components. Meanwhile, a section entitled Designing the Experience covers practical concerns like onboarding. With its ongoing evolution, we can reasonably expect the guidelines to grow to reflect many of the areas of interest we've heard expressed. In the meantime, designers can fill in the gaps left by Google and Apple. We have the opportunity to shape best practices for 3D UX ourselves, and a few industry leaders have already started doing so. For instance, Bushra Mahmood, a designer currently with Unity, took the initiative to develop her own extensive design guidelines (well worth a look, here), while teams, like one at Marino Software, are bootstrapping entire new processes with a mishmash of existing tools. This kind of grassroots work is crucial, because Apple's and Google's guidelines will never keep up with millions of creative people experimenting and innovating. By the time headsets are widely available, the distinction between 2D and 3D UX design will have been permanently erased. By offering little in the way of resources, Apple and Google are holding the door wide open for anyone who wants to lead the way. Maybe that was the plan all along?
https://medium.com/figma-design/torch-placeholder-296c70710b79
['Paul Reynolds']
2018-10-22 18:49:57.318000+00:00
['Technology', 'Tech', 'UI', 'Augmented Reality', 'Design']
Bleary and Blind then Blam
5/6/2020 — Carlisle, Massachusetts You’re in a room for one. You’re in a house for a few. You’re in a cabin in the mountains where your only company is trees as slender and silent as women in fashion illustrations. You’re in a city in a box stacked and arranged like Jenga with countless other city boxes. Your view is an empty sidewalk in the suburbs that now seems more prison than escape from the city box you came from. You watch a field fringed with a green that’s so slow moving it seems frightened, as if the emerging vegetation will decide, we’re done. Let’s shelter in the damp and dark. Vacancy has come to define you more than your personality, because what use is personality when you’re alone? You’re with people related or legally bound to you whom you’ve now seen so often and so exclusively they have become like water on the stone of your heart. Their tics and utterances and demands are the drip drip drip that the vacancy has become on those of us who are truly alone. What had felt like a meaningless pleasant waterfall of time to yourself or time with those you love most is now an erosion. It never ends. You can’t shut it off, it will bore through you until you’re hollow.
https://medium.com/thacher-report/bleary-and-blind-then-blam-855671257f55
['Zachary Thacher']
2020-05-06 21:11:25.515000+00:00
['Quarantine', 'Memoir', 'Coronavirus', 'Relationships', 'Ghosts']
craven
cowards love it when you carry their weight grateful eyes are a tactic tacit disregard lines the look the traveler throws her keepsakes off the caravan until she looks back push the ponderous stone off your heart and rummage through your wrecked belongings there must be something left to salvage.
https://medium.com/meri-shayari/craven-9752d7c30806
['Rebeca Ansar']
2020-12-24 20:54:55.701000+00:00
['Life Lessons', 'Storytelling', 'Poet', 'Poem', 'Poetry']
Kubernetes Deployments
Prerequisites

I recommend you have a basic knowledge of Kubernetes Pods before reading this blog. You can check this blog for details about Kubernetes Pods.

What Is A Deployment

Normally, when working with Kubernetes, rather than directly managing a group of replicated Pods, you would like to leverage higher-level Kubernetes objects and workloads to manage those Pods for you. The Deployment is one of the most common workloads in Kubernetes, providing flexible life-cycle management for a group of replicated Pods. A Deployment is a Kubernetes object that provides declarative updates, such as scaling up/down, rolling updates, and rolling back, for a group of identical Pods. In other words, a Deployment ensures that a group of identical Pods achieves the desired state.

ReplicaSets vs. Deployments

Rather than directly managing Pods, a Deployment utilizes ReplicaSets to perform declarative updates for a group of replicated Pods. The following picture demonstrates the relationship between ReplicaSets and Deployments. A ReplicaSet ensures that a specific number of pod replicas are running at a given time, based on the replica number defined in its manifest. Although it provides an easy way to replicate Pods, it lacks the ability to do rolling updates on them. Deployments are built on top of ReplicaSets. A Deployment essentially is a set of ReplicaSets. It rolls out a new ReplicaSet with the desired number of Pods and smoothly terminates Pods in the old ReplicaSet when a rolling update occurs. In other words, a Deployment performs a rolling update by replacing the current ReplicaSet with a new one. You can check this doc for more details about rolling updates or rolling back Deployments.

A Deployment Example

The following is an example of a Deployment configuration for creating an Nginx server with three replicated Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-demo
spec:
  selector:
    matchLabels:
      app: nginx
      env: demo
  replicas: 3
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: nginx
        env: demo
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - us-central1-a
                - us-central1-b
                - us-central1-c
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx
              topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx:1.15.3
        ports:
        - containerPort: 80

Metadata

The field metadata contains metadata of this Deployment, which includes the name of the Deployment and the Namespace it belongs to. You can also put labels and annotations in the field metadata.

Deployment Spec and Pod Template

The field spec defines the specification of this Deployment, and the field spec.template defines a template for creating the Pods this Deployment manages.

Pod Selector

The field spec.selector is used by the Deployment to find which pods to manage. In this example, the Deployment uses app: nginx && env: demo, defined in the field spec.selector.matchLabels, to find the pods that have the labels {app: nginx, env: demo} (defined in the field spec.template.metadata.labels). The field spec.selector.matchLabels defines a map of key-value pairs, and match requirements are ANDed. Instead of using the field spec.selector.matchLabels, you can use the field spec.selector.matchExpressions to define more sophisticated match rules. You can check this doc for more details about the usage of the field spec.selector.matchExpressions. As you can see, a Deployment relies on pod labels and its pod selector to find its pods. Therefore, it is recommended to put some unique pod labels on a Deployment.
Otherwise, Deployment A may end up managing the pods that belong to Deployment B.

Replicas

The field spec.replicas specifies the desired number of Pods for the Deployment. Kubernetes guarantees that there are always spec.replicas Pods up and running. It is highly recommended to run at least two replicas for any Deployment in production. This is because having at least two replicas from the beginning helps you keep your Deployments stateless, as problems can be detected easily when you try to introduce "stateful stuff" into a Deployment with at least two replicas. For example, you will quickly realize the problem when you try to add a cron job to a two-replica Deployment to process some data on a daily basis: the data will be processed twice a day, as all replicas will execute the cron job every day, which may cause some unexpected behavior. In addition, a singleton pod may cause downtime in some cases. For example, a single-replica Deployment will not be available for a moment when its single Pod is triggered to restart for whatever reason.

Rolling Update Strategies

The field spec.strategy defines the strategy for replacing old pods with new ones when a rolling update occurs. The field spec.strategy.type can be Recreate or RollingUpdate. The default value is RollingUpdate. In general, it is not recommended to use Recreate in production, for availability reasons: Recreate introduces downtime when a rolling update occurs, because all the existing pods have to be terminated before new ones are created. You can use maxUnavailable and maxSurge to control the update process when you set spec.strategy.type == RollingUpdate. The field maxUnavailable sets the maximum number of Pods that can be unavailable during an update, while the field maxSurge specifies the maximum number of Pods that can be created over the desired number of Pods.
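The interaction between replicas, maxSurge and maxUnavailable can be sketched numerically. The helper below is hypothetical (it is not part of any Kubernetes client library); it assumes the documented resolution rules, where a percentage maxSurge is rounded up and a percentage maxUnavailable is rounded down:

```python
import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Resolve maxSurge/maxUnavailable into absolute pod-count bounds.

    Hypothetical illustration of the rules Kubernetes applies:
    percentage values are computed against `replicas`, with maxSurge
    rounded up and maxUnavailable rounded down.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            fraction = int(value[:-1]) / 100 * replicas
            return math.ceil(fraction) if round_up else math.floor(fraction)
        return int(value)  # absolute number, used as-is

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return {
        "max_pods": replicas + surge,             # pods allowed over the desired count
        "min_available": replicas - unavailable,  # pods guaranteed up during the update
    }

# With 3 replicas and both fields at their 25% defaults:
print(rolling_update_bounds(3))  # {'max_pods': 4, 'min_available': 3}
# With maxUnavailable pinned to 0, the conservative setting used in the manifest above:
print(rolling_update_bounds(3, max_unavailable=0))
```

Note how, with only 3 replicas, the 25% defaults already behave like "one extra pod, none unavailable", because of the rounding directions.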
The default value is 25% for both fields. They cannot both be set to 0 at the same time, as that would stop the Deployment from performing the rolling update. You can set the field maxUnavailable to 0, as this is the most effective way to prevent your old pods from being terminated while there are problems spinning up new pods.

Pod Affinity

The field affinity inside the field spec.template.spec allows you to specify on which zones/nodes you want to run your Deployment's Pods. As shown in the following picture, the ideal scenario for running a Deployment is running multiple replicas on different nodes in different zones, avoiding running multiple replicas on the same node. You can check this doc for more details about how to assign your Pods to proper nodes.

What Is Next

I recommend you read this blog if you are curious about how to utilize Kubernetes StatefulSets to run stateful applications in Kubernetes. I recommend you read this blog if you are curious about how to utilize Kubernetes Services to load balance traffic to your applications in Kubernetes.
https://azhuox.medium.com/kubernetes-deployments-d04263c67b24
['Aaron Zhuo']
2020-11-30 16:59:10.920000+00:00
['Entry Level', 'Kubernetes']
The Depth I: Stereo Calibration and Rectification
Hello everyone! Today we will talk about what a stereo camera is and how we use it for computer vision. Using the code I wrote for you, I will explain how we calibrate stereo cameras and calculate a disparity map. I won’t go into mathematical details; you can read some OpenCV documents for that. Let’s start! This is the magic we’ll cover today! The reason we can perceive depth is our beautifully aligned eyes. If you noticed, when we look at close objects with one eye, we see a difference between the two perspectives. But when you look at something far away, like mountains or buildings kilometers away, you won’t see a difference. These differences are automatically processed in our brain, and we perceive depth! Animals whose eyes sit far to the right and far to the left can’t perceive depth this way because they don’t have a common perspective; instead they have a wide-angle perspective. Some of them, like ducks, shake their heads or run fast to perceive depth; it’s called structure from motion. We won’t cover this concept; for now, let’s focus on a system like our eyes. Simplified stereo vision: you see how an object P is observed from two cameras. The object’s position is different in the two images. If the two cameras are vertically aligned (mounted side by side at the same height), an observed object will have the same vertical coordinate in both images (it appears in the same row), so we only need to focus on x coordinates to calculate the depth, since close objects will have a higher difference on the x-axis. But to achieve that, we need to calibrate the cameras to fix lens distortions. After the calibration, we need to rectify the system. Rectification is basically calibration between the two cameras. If we calibrate and rectify our stereo cameras well, an observed point P(x,y) can be found in the same row in both images: P1(x1, y) in the first camera and P2(x2, y) in the second. From there, it comes down to the difference between the pixel x-coordinates (the disparity) and the depth calculation.
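That pixel difference, the disparity, maps to depth through the classic pinhole relation Z = f·B/d, where f is the focal length in pixels and B is the baseline between the two cameras. A minimal sketch, with made-up focal length and baseline values (your own come out of the calibration step):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a point from its disparity in a rectified stereo pair.

    disparity_px: x1 - x2, the pixel shift of the point between images.
    focal_length_px: focal length in pixels (from single-camera calibration).
    baseline_m: distance between the two camera centres, in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_length_px * baseline_m / disparity_px

# Invented example numbers: f = 700 px, baseline = 6 cm.
# A close object shifts more between the two images, so higher disparity:
print(depth_from_disparity(70, 700, 0.06))  # about 0.6 m: close
print(depth_from_disparity(7, 700, 0.06))   # about 6 m: ten times farther
```

The inverse relationship is why disparity maps lose precision quickly with distance: a one-pixel disparity error matters far more for distant points than for near ones.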
Respectively, the upper left and right images are the rectified left/right camera images; the lower left is their combination, to show the difference; the lower right is the depth map. First, mount the stereo cameras to a solid object (a ruler, wood, hard plastic, etc.) so that the calibration and rectification parameters keep working properly. If you have an Intel RealSense or ZED camera, for example, you can skip all of this, because the RealSense has auto-calibration and the ZED is already factory-calibrated. The next step is calibrating both cameras separately. You can follow my calibration guide for that; it’s highly recommended for the next steps. Stereo cameras require single-camera calibration first, since rectification needs those parameters. Use a chessboard image for the calibration, and use at least 20 images for a good calculation.
https://medium.com/python-in-plain-english/the-depth-i-stereo-calibration-and-rectification-24da7b0fb1e0
['Ali Yasin Eser']
2020-12-28 08:06:34.418000+00:00
['Stereo Camera', 'Opencv Python', 'Opencv', 'Python', 'Depth Map']
Being Human
A Quote & A Question image by author Dear One, I am consistently amazed both at how hard it is to be human, and how many different ways there are to go about this human experience. Today, please ask yourself, what is something about being human I am at peace with, and what is something I am working to improve or change? Much love to you, Kate
https://medium.com/age-of-awareness/being-human-fe5a3d2d03e5
['Katherine Grace']
2020-06-26 00:55:59.126000+00:00
['Self Improvement', 'Meditation', 'Consciousness', 'Self-awareness', 'Self Love']
Geospatial adventures. Step 1: Shapely.
Image generated using KeplerGL

Geospatial adventures. Step 1: Shapely.

A quick look at the basics of working with geometrical objects in Python using the Shapely library. This is the first in a series of posts summarising some of the key outtakes from working with geospatial data with a PropTech twist over the last couple of years. These are going to have a bit of everything: geospatial datasets, geometric shapes, raster files, maps, visualisations. Starting from the very basics and building up towards more interesting and challenging things in the later posts... Let’s kick things off by introducing Shapely. Without a doubt one of my favourite libraries in Python: very central and absolutely essential to any geometry/geography related work you will end up doing. The library allows you to work with three main types of geometric objects: Point, LineString and Polygon, plus geometry collections if you want to combine them. There’s a bunch of others (linear rings, multi-points, multi-polygons, etc.), but for now these will do; the methodologies are very much transferable.

Installation

Pretty standard installation, using pip. If, like me, you are using Jupyter, you can just run

!pip install shapely

Note that GeoPandas uses Shapely under the hood, so if you have installed GeoPandas, you probably have a recent version of Shapely already. You can perform some of the Shapely operations after importing GeoPandas without a separate import; however, if you want to work directly with Point and Polygon objects, you would still need to load them in first. Enough about GeoPandas, however; we are going to look at it in more detail in the next post. Once installed, import it into your notebook and load the main geometry types:

import shapely
from shapely.geometry import Point, Polygon, LineString, GeometryCollection
import numpy as np

I am also importing numpy because I like it so much.
Seriously though, I very often find myself jumping back and forth between shapely objects and equivalent numpy arrays of coordinates, as numpy allows you to do some of the operations in explicit vectorised form a lot quicker, so it’s a good idea to look at the connection between the two right from the start.

Point objects

As the name suggests, this is just a point on a two-dimensional plane, characterised by a pair of coordinates. One of the super convenient features of Shapely is that it allows you to view all the geometric objects without having to resort to any graphical package. Note that regardless of the coordinate system positioning of the object, it always centres on the object for you when you want to view it.

pt = Point(10, 10)
pt1 = Point(100, 101)

You can also display the string representation of the object by just wrapping str() around it, or convert it to a numpy array of its coordinates. As I mentioned, I find the latter particularly useful, as I often find myself working with large arrays of geometric objects (for example, over 6 million building polygons from OSM), and if I want to do calculations in vectorised format, numpy is absolutely irreplaceable. There is also a method to load this string representation back into geometric format, which will come in really handy when you have to load data stored in a non-geometric format, for example from a csv file. If you want to quickly look at several objects and see how they scale vs. each other, all you have to do is turn them into a geometry collection. A few other handy methods: distance and coordinate collections.

In[8]: pt.distance(pt1)
Out[8]: 127.98828071350908

In[9]: pt.x, pt.y, pt.xy
Out[9]: (10.0, 10.0, (array('d', [10.0]), array('d', [10.0])))

One final thing before moving on to lines. You can attach a .name attribute to any shapely object.
This can be useful, for example, when you are transforming each of your polygons from a large collection stored in a GeoPandas or Pandas DataFrame into an array of smaller polygons, like a grid, and want an easy way of relating them back to the original polygons.

In[10]: pt.name = 'My Point'
        pt.name
Out[10]: 'My Point'

LineStrings

LineStrings are initiated in a very similar way, only this time we have a list of tuples rather than a single one. They can cross themselves and pass through the same points multiple times; however, the latter is not recommended, as it adversely impacts performance and you are better off splitting them into individual components. Note that the order of points is important, as it determines the order in which you pass through them (the same applies to polygons, as you’ll see below).

ln = LineString([(0, 1), (20, 100), (100, 3), (120, 102), (200, 5)])

As with points, you can convert a LineString object to an array of point coordinates. Order is preserved here, so this can be used to quickly get the coordinates of the first and last point, which is handy for constructing tree objects representing road networks, for example.
In[13]: np.array(ln)
Out[13]: array([[  0.,   1.],
       [ 20., 100.],
       [100.,   3.],
       [120., 102.],
       [200.,   5.]])

Or if you want the same list of tuples representation used to create your LineString in the first place:

In[14]: list(ln.coords)
Out[14]: [(0.0, 1.0), (20.0, 100.0), (100.0, 3.0), (120.0, 102.0), (200.0, 5.0)]

You can also split out only X coordinates or only Y coordinates (of course, you can do that using numpy as well):

In[15]: list(ln.xy[0]), list(ln.xy[-1])
Out[15]: ([0.0, 20.0, 100.0, 120.0, 200.0], [1.0, 100.0, 3.0, 102.0, 5.0])

A quick look at the graphical representation together with the points we created earlier. Calculating a point-to-line distance, a point's projection on the line (distance from the start along the line) and the length of the line is as trivial as:

In[17]: pt.distance(ln)
Out[17]: 8.01980198019802

In[18]: ln.project(pt), ln.length
Out[18]: (10.801980198019802, 453.46769176178475)

In[19]: list(ln.interpolate(ln.project(Point(1, 1))).coords)
Out[19]: [(0.039211841976276834, 1.1940986177825703)]

Note that if the projection of the point onto the line happens to be outside of the defined area, the distance will be calculated to the nearest end of the line. If you are after the actual projection, you would need to do a bit of extra geometry. The easiest thing would be to extend the first and last segments of the line and still use the same projection method. Line intersections are pretty straightforward too, even when you end up with multiple intersections.
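That "bit of extra geometry" for points that project beyond the ends of the line can be sketched in plain Python. The helper below is hypothetical (it is not a shapely API); it projects a point onto the infinite line through a segment's two endpoints, which is what extending the first or last segment amounts to:

```python
def project_onto_infinite_line(p, a, b):
    """Project point p onto the infinite line through a and b.

    Returns (foot, t): the projected point and the parameter along a->b.
    t < 0 or t > 1 means the foot of the perpendicular falls on the
    extension of the segment, the case shapely's ln.project() clamps
    to the nearest end instead.
    """
    (px, py), (ax, ay), (bx, by) = p, a, b
    abx, aby = bx - ax, by - ay
    # Standard dot-product projection onto the direction vector a->b.
    t = ((px - ax) * abx + (py - ay) * aby) / (abx ** 2 + aby ** 2)
    return (ax + t * abx, ay + t * aby), t

# A point past the end of a horizontal segment from (0, 0) to (10, 0):
foot, t = project_onto_infinite_line((15, 4), (0, 0), (10, 0))
print(foot, t)  # (15.0, 0.0) 1.5 -- the foot lies beyond the segment's end
```

Apply it to the first two (or last two) coordinates of your LineString when `ln.project()` reports a distance of 0 or the full line length, which is the sign that the true projection fell outside the defined area.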
The result is a MultiPoint object, which you can iterate through just as if it were a regular list, with Point objects as the iterables:

In[21]: str(ln.intersection(LineString([(0, 0), (200, 100)])))
Out[21]: 'MULTIPOINT (72.55474452554745 36.27737226277372, 110.561797752809 55.28089887640449, 144.5255474452555 72.26277372262774)'

In[22]: [np.array(a) for a in ln.intersection(LineString([(0, 0), (200, 100)]))]
Out[22]: [array([72.55474453, 36.27737226]),
 array([110.56179775, 55.28089888]),
 array([144.52554745, 72.26277372])]

Polygons

Not surprisingly, creating one is very similar to creating a LineString and, as with LineStrings, the order in which points are listed matters. Polygons can have holes inside, and the way these are defined follows a simple rule: Polygon([list of polygon coordinates], [list of holes]), where each hole is itself a polygon. Note that polygons representing holes have to be either fully inside your original polygon or can touch it in no more than one place. Note also that polygons cannot be directly converted to a collection of points like we did with LineStrings and Points. Instead, we have to deal with their exterior and interior outlines (which are LinearRings, i.e. LineStrings that loop back on themselves). For interior boundaries we get back an iterator:

In[25]: np.array(poly.exterior)
Out[25]: array([[0., 0.],
       [0., 1.],
       [1., 1.],
       [1., 0.],
       [0., 0.]])

In[26]: [np.array(a) for a in poly.interiors]
Out[26]:
https://towardsdatascience.com/geospatial-adventures-step-1-shapely-e911e4f86361
['Dmitry Selemir']
2020-06-15 16:49:42.090000+00:00
['Geospatial', 'Shapely', 'Data Science', 'Python', 'Geometry']
A Character Has 4 Pivotal Moments To Change In A Movie by Peter Russell
FilmCourage.com · Sep 8, 2018 Watch the video interview on Youtube here Film Courage: Can you give me an example of just showing, whether it’s someone waking up in the morning and they have back pain… Peter Russell, screenwriter and script doctor: Oh yeah…BREAKING BAD, here is this horrible guy, he got rid of his brother-in-law (we didn’t mind seeing him die), but he ends up ruining his family entirely, Jesse, etc. When we first see Walter White, actually we first see him in his underwear in the desert, when we first see him at home waking up, he’s waking up in this awful little bedroom with a terrible nightstand, he’s got this crappy little exercise machine which he gets up on which he’s broken after 10 seconds (one of those step things) and then the camera pans to the wall and we see on the wall that in 1983 Walter White was in line for a Nobel Prize in Physics (Chemistry). Then we go back to him and see he’s dead, something has hurt him. What is it? Well we find out much later what it is, that he was betrayed by his partner, by his love. And then he goes and he slumps down and he is sitting at his breakfast table and it’s his 50th birthday and he gets soy bacon for his birthday. And his wife goes “Eat it! You’ll like it!” And then his son comes out and says “Well, the water heater is not working.” “Well, you’ve got to get up early and be the first in the shower.” “Why can’t we buy a new water heater?” Right? Well, because Walter is a loser, right? Everything about Walter shows us that he’s been horribly wounded by something. What we’re rooting for is for Walter to get better and so when Walter becomes a badass, I don’t know about you but I’m on his side. I know he’s hurting innocent people, I don’t care…”Walter, get ’em, get ‘em! Get that guy!” So that is a terrible part of the human soul baby. 
But the device by which we make a character sympathetic is to show their wounds, because as human beings we’re not going to be interested in good-looking, perfect people who are making a lot of money and are great in everything they do. Who gives a crap? We want to see people that we can identify with because that’s not us. We’ve got problems, right? I’ve got problems, right? I want to see my problems and somebody else with problems, dealing with problems, okay? And you can say okay, a show like RIVERDALE doesn’t do that, but they do. And there’s fantasy shows where that’s not the case. But most of the time you do want to see a wound. That’s what likability really means. “Oh, they’re like me? They’re screwed up. They don’t have it all together. Wish I did. Maybe they’ll get it all together, right?” Film Courage: And then that means “I’ll get it together.” Peter: Uh-huh. How did they get it together? But mostly it’s just like “Yeah, they’re like me. They’re not perfect, they are like me.” Film Courage: Well, you’ve been using an acronym for a little bit. And what’s funny is driving over I was looking through the notes and I think that David [Branin] had said off camera that BMOC stands for Big Man On Campus in the basketball world which I was not privy to, I did not know that. For writing what is your take on this acronym? Peter: Beginning, middle, obstacle and climax. That’s what that stands for. I found this out years ago analyzing movies: there was an E = mc2 moment for me. There were four crescendos in a movie where the hero is asked to change and asked to learn the theme of the movie and asked to learn how to heal. All the things I talk about, the big things I talk about, healing, learning the theme, stopping bleeding, all that. There’s four times in a movie inevitably that that happens at a crescendo. It’s 30 pages in, 60 pages in, 90 pages in and about 108 pages in. Those I call the beginning, middle and climax. The BMOC, right? 
Now that’s a structure that is in every great movie practically that you’ve ever seen. It’s not in a [Jean-Luc] Godard movie, okay. If you’re writing a French Wave movie, I’m sorry I won’t be able to help you. That’s just a French guy peeing in an alley for two hours and that’s great! I love those movies. But in a Hollywood film that structure is invariably in the story and if it’s not there’s usually something missing. It will be superseded some day but that’s what’s operating now. That BMOC operates in everything, every movie DUNKIRK, in DEADPOOL, everything. But now in DEADPOOL (let’s just take an example) which is a great movie again I’m big on wounds, right? What’s our wound in DEADPOOL? The guy…well he’s wounded because he’s ugly. He becomes extremely disfigured by a chemical bath right. And because of his wound…
https://medium.com/film-courage/a-character-has-4-pivotal-moments-to-change-in-a-movie-by-peter-russell-2d026f133662
[]
2018-09-08 23:52:35.936000+00:00
['Writing Tips', 'Writing', 'Screenwriting', 'Writer', 'Writing Life']
How did we build a Data Warehouse in six months?
Abstract

At Everoad, we leverage data to revolutionise the truck industry. For that purpose, and to keep pushing our growth forward, we set ourselves the following ambitious data challenges for 2018:

Collect the data from every single source in our possession to have a 360° overview of the company (CRM, emails, calls, …)

Provide anyone in the company with the right information, at different levels of complexity and aggregation

Have one single way to get the data: one and only source of truth

Foster data analysts’ autonomy so they can extract their own data and stay focused on what they do best: providing insights and helping the whole company perform

Build a healthy, forward-looking structure that future data scientists will be able to use

As an introduction, if you have limited knowledge of our company: we are a marketplace that enables shippers and carriers to match in a smarter way to ship goods by truck. Our product is both a platform and a homemade back office supporting the operations department, which monitors and handles the whole process, from chartering to follow-up and billing. With such an overview of the company, you can easily guess the basic data we believed our users would need:

Basic BI on platform activity to better understand supply/demand flows

Monitoring of the operational complexity, to have it more automated and streamlined

Product-related data analysis to drive product growth

All those needs represented a pretty huge workload. How did we tackle it? The answer is in this article. We will first talk about the setup we had and why we wanted to migrate, then quickly describe the human resources we had to do so. We will then present the whole infrastructure we use. Finally, we will explain the global setup in place and discuss how we want to improve it and what the next steps and possible evolutions could be. 
The legacy of our data: why we needed to migrate

In May 2017, the company already knew that data would be key, and therefore decided to start using it in a much more scalable and forward-looking way. At that point we used Redash to extract the data from the BI noSQL database (a daily replication of the production MongoDB database) and import it into Gsheets via CSV files. Then, from these Gsheets (linked to each other to create an easy-to-go data pipeline), we plugged in Google Data Studio to provide some basic dashboards. This was a really hand-crafted setup, but until then it did the job. What happened? Gsheets is quite nice and this was a first step. However, once you have a “decent” amount of data, Gsheets becomes too slow. You can try to filter and aggregate your data heavily during the ETL process before importing it, but then you just lose a lot of information, and even so your Google Data Studio dashboards will just become slower and slower, trust us. Obviously, in the end, Gsheets wasn’t built for that. So we decided to migrate away from it. We already had Redash, querying a dedicated BI noSQL database, and we already had Google Data Studio. We wanted to capitalise on these in order not to lose too much time.

ETL legacy pipeline — May 2018

Human resources

Two people were tasked with building the whole data warehouse: a data analyst who handled the product design, Alexandre Laloo, and a data engineer, myself, responsible for all the technical work required: building the BI cluster, building the ETL pipeline and all the aspects we will describe in the following sections. These two resources were 100% dedicated to the ETL (Extract Transform Load) pipeline and to building the company’s data warehouse. In order to do so we needed a strategy. Note that as our team was the first consumer of the data warehouse, we had a good idea of what to build. 
This point definitely helped us to design and build something consistent and exhaustive. Our overall goal was to do it as quickly as possible. A firm aim was to migrate all relevant data we already had inside Gsheets to BigQuery.

Global Infrastructure

Regarding the infrastructure, we did not want to take any risks, so we chose top open source products to help us:

Airflow for the whole ETL infrastructure, using Composer inside GCP

BigQuery for the data lake infrastructure

Gitlab for the versioning part and the CI/CD

We have one git repository with the Python ETL code, where each modification (merged into master after code review) is copied to the bucket used by Airflow (the DAG folder). And we have a second repository with all the SQL queries, where each modification updates Airflow variables to modify the ETL; the latter uses the Airflow CLI to update the appropriate variable. Both repositories are stored on the company’s private Gitlab instance and use gitlab-ci to run the post-merge script on master. We could elaborate much more on this part, so do not hesitate to reach out if you have any questions in this regard. Basically, Airflow runs Python scripts every hour to collect the data from our different sources, and BigQuery is where we store the results. In the end, most of the “code” is SQL, transforming the data from one table to another. The Python code is only there to make some API calls and trigger data transformations. This required high literacy in SQL to transform mongo events into activity-based, easily understandable tables, but it is perfectly doable even for a non-tech profile.

Data warehouse conception

Although we wanted to act fast, we spent a lot of time designing the best system to suit our needs. Here is where we ended up. First, we use a BRIDGE level. BRIDGES are tables which are the result of the data collection (the E from ETL) part. 
The data in these tables is messy and not that well organised, and each table results from a single data source. Second, we have a CLOUD level. CLOUDS are tables where the data is cleaned and where the complex aggregations and processing are performed. For instance, we remove useless information and we aggregate the “event” data to another level (if you have event-level information, we transform it into offer-level information or company-level information, etc.). CLOUDS are just a cleaner level of data. These tables are open to our business analysts. Yet more data levels are still needed for the analysis. Third, we have LAKES. LAKES are tables aggregating the data from different CLOUDS to provide all the information from a specific angle. For instance, there is a lake dedicated to the operations team, a lake for the finance team, a lake for the incident part, etc. This data is open to every user in the company and is close to the concept of a data lake we can observe pretty much everywhere today. Then, we use VIZ objects. VIZ tables hold filtered data from one or several LAKES and are dedicated to feeding the data visualisation layer. To make it simple, each dashboard inside Google Data Studio is plugged into a VIZ table. In the end, we arrived at the following design:

Global schema infrastructure — December 2018

What to do next?

We worked and created a lot, and we could have said “our job is done, let’s move on to another project”. But obviously it is the exact opposite: maintaining a data warehouse is a continuous job, and we have probably only done 5% of the whole journey. Like any product, it is by definition never finished and needs to be continuously improved. So here is the list of aspects to be improved — if you have not yet started your data warehouse project within your company — to help you build it with even more efficiency. Remove useless components. As we wanted to migrate the data as quickly as possible, we did not focus on removing Redash. 
For now, this is kind of a pain for us as we need to maintain it, etc. So the next step is to use Python scripts to directly collect the data from Salesforce and the dedicated MongoDB database to populate the BRIDGES. Improve data consistency. For now, we collect the data and push it directly into BRIDGES. The goal is to clean the data before it is loaded, meaning, for instance, being able to detect strings in a better way. Even though we put a lot of effort into reducing the data consistency problem, as we still rely a lot on downloading/uploading CSV files, the data is not fully consistent. Sometimes, for instance, fields change from one type to another, which is not something you like to see (European zip codes are a great example of that, phone numbers as well). Luckily we can work around this with SQL casts, but it is not the best way to do it. Improve the performance of the data lake. Implementing a strategy quickly often comes with a lack of performance. As we are growing day after day, week after week, we keep collecting ever more data. So we need to be aware of the performance issues we might face in an uncertain but near future. We have to add some partitioning inside BigQuery (clustering fields), improve some queries, add custom indexes inside MongoDB, etc. This is very important if you know you are about to scale up the process, and you must be ready for it. Do “real time” data. For now, we have a dedicated BI database which is a dump/restore of the production one, done once every hour. While manageable at the moment, when we scale up we will need to do some “diff import” instead of a dump/restore. The best solution would be to be directly linked to production (I can hear from my desk all the engineers say “What the hell? BI directly connected to production?? Are you crazy, dude! You can impact production performance!!”. Wait.). 
Production at Everoad uses an event sourcing approach, which means we have a message broker dispatching messages to microservices. The goal would be to create a new microservice, listening to all the queues and messages and pushing all of them into the data warehouse. We would then use this kind of “second event store” to start our transformation pipeline, meaning we would have almost “real time” data. Improve data access. For now, people have access to BQ and templated queries to query the data warehouse. The next goal would be to provide a means to create data visualisations easily, without our intervention. A tool like Superset (again from Airbnb) would be a nice instrument to have, for instance. Improve the data visualisation part. When you want to set up some dashboards, Google Data Studio is an asset: pretty simple, straightforward and easy to use. As we grow, again, depending on the evolution of this tool (new features, etc.), we might switch to another one, for instance Tableau or other software if appropriate (easier filtering system, ability to download the dataset you are seeing, etc.). For now, this would be a huge leap, as every single dashboard is on Google Data Studio.

Conclusion

We had two people dedicated to the project, and we gained a lot of experience. We already knew the company quite well (which does help to understand the operational metrics you have to build, etc.), we had a clear objective, and we spent some time designing something that would be cost efficient. Using open source technologies will save you ages. Basically, the Python code is not that hard, Airflow has a lot of connectors making it easy to start with, and BigQuery is an incumbent of the market that you can honestly use with your eyes closed. Note that we had already done an important job which is not mentioned in this article: collecting the needs. We already knew what our users wanted. 
We knew which data to collect, which metrics to build, which dashboards were relevant to create, etc. This is a really important starting point: you have to know what your clients want and need NOW, and also what they are going to ask for tomorrow. This is really key, as it is the main difference between “doing an extract” or “doing an ad hoc analysis” and “building a data warehouse”. By building a data warehouse you want to answer present needs but also anticipate future ones, meaning that you build something which can be easily changed, maintained, etc. Let’s conclude with the figures part:
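As a recap of the pipeline described above, here is a toy end-to-end run through the four levels (BRIDGE, CLOUD, LAKE, VIZ), including the SQL CAST workaround for inconsistent zip-code types, with SQLite standing in for BigQuery. All table and column names here are illustrative, not our actual conventions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# BRIDGE: raw, messy data straight from one source (note the untyped
# zip column: values arrive sometimes as integers, sometimes as text).
conn.execute("CREATE TABLE bridge_companies (name TEXT, zip, revenue)")
conn.executemany(
    "INSERT INTO bridge_companies VALUES (?, ?, ?)",
    [("A", 75001, 100), ("B", "69002", 250), ("C", "75008", 50)],
)

# CLOUD: cleaned data; the CAST trick normalises the mixed zip types.
conn.execute("""
    CREATE TABLE cloud_companies AS
    SELECT name, CAST(zip AS TEXT) AS zip, revenue
    FROM bridge_companies
""")

# LAKE: aggregated to the angle one team needs (revenue per zip code).
conn.execute("""
    CREATE TABLE lake_finance AS
    SELECT zip, SUM(revenue) AS total_revenue
    FROM cloud_companies GROUP BY zip
""")

# VIZ: a filtered slice ready to plug into a dashboard (Paris only).
rows = conn.execute(
    "SELECT zip, total_revenue FROM lake_finance "
    "WHERE zip LIKE '75%' ORDER BY zip"
).fetchall()
print(rows)  # every zip now comes back as a string
```

In the real pipeline, each of these promotion steps would roughly correspond to a SQL query stored in the dedicated repository and executed hourly by Airflow.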
https://medium.com/everoad/building-a-data-warehouse-in-six-months-what-did-we-learn-e058e42446f1
['Jérémy Wimsingues']
2019-03-01 08:16:31.626000+00:00
['Airflow', 'Redash', 'Big Data', 'Bigquery', 'Data Warehouse']
Strengthen Your Gym’s Social Media Strategy This Fall
Strengthen Your Gym’s Social Media Strategy This Fall Follow these social media content strategies to bring in new customers this season By: Samantha Koontz, Content Editor As green leaves turn golden, school bells chime to announce a new semester, and your favorite team tightens their laces for kick-off, we can only assume one thing: Summer is turning into fall. For small business owners, this means the time is right to grab the attention of new customers and strengthen your relationships with your loyal fans. As summer winds down and people are adjusting their fitness goals this fall, your gym has the opportunity to bring in some fresh faces. Traditionally, gym memberships slow down after the first quarter of the year, with only 8.3 percent of members joining in each of the autumn months compared to 12 percent in January. However, with the right strategy, you can show potential customers that a visit to your gym or studio is something they will fall for this season. With these three content tips at the ready, you can deepen relationships with your online community to make your customer base stronger than ever: 1. Show off your specials If your gym has a special or discount running, now’s the time to shout it from the rooftops. Fifty-seven percent of consumers will make a first-time purchase if they can take advantage of a special, so whether you’re offering student discounts on new memberships, a BOGO special on Zumba classes, or a free two-week trial for personal training, let your audience know! They’ll be more likely to stop by and try out what you have to offer. You might consider taking a tip from this gym, which put the important details of their special in a graphic: 2. Run a contest on Instagram Contests are an incredibly effective and fun way to drum up excitement about your business. They’re so effective, in fact, that 94.2 percent of social media users admit that online contests influence their awareness of new businesses. 
By incentivizing users to tag a friend or share your post, you’re spreading the word about your gym quickly and getting the chance to reward a few lucky participants. So, next time you’re planning out your content, throw a contest into the mix! A little healthy competition is good for your pages. This gym decided to do a giveaway for their contest — the winner and their BFF received a bundle of their performance wear. Here’s how they advertised the contest on Instagram: 3. Highlight your team members Sometimes, trying out a new gym can be intimidating, especially if you’re taking the first steps by yourself. Featuring your team members on your social pages is a subtle way to extend a virtual handshake and show potential customers some of the friendly faces they might run into at your gym. Below, you’ll see how one gym used an image of their teammates on Twitter to show off the welcoming, family-like atmosphere they’re proud of.
https://medium.com/main-street-hub/strengthen-your-gyms-social-media-strategy-this-fall-7159c6e23520
['Main Street Hub']
2018-08-28 14:01:12.274000+00:00
['Social Media', 'Social Media Marketing', 'Wellness', 'Small Business Marketing', 'Gym']
[Video] Islam and the Environment
The Holy Quran reminds people of the creation and says that Allah created everything that is on this earth so that man “may not exceed the measure”. But humans ignore this instruction and misuse the bounties of Allah. Although the question Allah asks, “which of my bounties will you deny?”, is rhetorical, the answer is that by misusing Allah’s bounties man is denying them.
https://medium.com/virtual-mosque/video-islam-and-the-environment-e3125f5358a5
['Virtual Mosque']
2016-04-09 17:56:10.886000+00:00
['Chapter 55', 'Environment', 'Islam']
When A Computer Goes Bust
It throws a kink in things

Photo by Kari Shea on Unsplash

This was not supposed to happen. The computer was not that old and should have lasted for a few more years. The battery had gone out and been replaced. It was anticipated that the computer still had some good life in it. It was just before Christmas that our laptop computer went out. Something happened to the battery, which was less than six months old. The computer guys said it was a defective battery. It basically burned up the computer so that it would not work and was dangerous to turn on again. Time for a new computer. It became our Christmas present, which we were forced to purchase right before Christmas. It would take a few days, they said, to get the computer after they took the information off the old computer and installed it on the new one. No writing on Medium for a few days. It meant a break in the chain, and nothing could be submitted for several days. Not that it mattered a whole lot. It is not as if Medium has been lucrative or has paid off for the time spent. Without the computer, it was time off from the daily grind to enjoy the Christmas holidays without some of the pressure. The new computer was ready a day earlier than expected. We picked it up the day before Christmas instead of the day after. Although it was basically the same computer as the old one, getting used to it was a bit cumbersome. The keyboard touch was different, so it took some getting used to. Hitting the right keys did not always happen readily. Now it is the day after Christmas. It is time to get back on track. Keep writing and keep trying to find some success.
https://medium.com/illumination/when-a-computer-goes-bust-89306ffcc97c
['Floyd Mori']
2020-12-26 18:45:48.229000+00:00
['Success', 'Writing', 'Computers', 'Medium', 'Christmas']
Tonight’s comic used to feel bad about being weird.
https://rstevens.medium.com/tonights-comic-used-to-feel-bad-about-being-weird-4e511796bccf
[]
2019-05-06 03:00:45.026000+00:00
['Weird', 'Psychology', 'Comics', 'Depression', 'Friendship']
Android Espresso for Beginners
Usage

Let’s say we have an application with an activity whose layout contains a view with the id tv_hello. Our goal here is to find out whether the view with the id tv_hello is displayed on the screen. Have a look:

First, it’ll search for a view with the id tv_hello, and once the view is found, the check function verifies its visibility through isDisplayed(). We can also use multiple constraints to narrow down the view search, as shown below:

In this test, while searching for a view, it will consider two constraints: withId (with a specific id) and withText (with a specific text in the view).

Actions

Now it’s time to perform actions on the matched views. To execute actions, we invoke the perform function on onView with the desired action, as shown below:

We can also pass multiple actions as parameters to the perform function, as shown below:

In this test, first it’ll type “Hello” on the view and then perform the click action. Only if both actions are executed without any error will the test succeed.

Lists

We’ve covered basic testing with a standard layout. But what if we have more complicated views, like a list? To write test cases on lists, we have another function, onData, similar to onView. Let’s say we have a list backed by a string ArrayList, and we need to find the item with the text Kotlin and perform a click on that particular item. Have a look:
https://medium.com/better-programming/android-espresso-for-beginners-57628a15f8b4
['Siva Ganesh Kantamani']
2020-06-01 16:01:17.058000+00:00
['Programming', 'Mobile', 'Tdd', 'Android', 'Software Engineering']
The artist is absent
The Artist is Absent Wunderkammer Exhibit #4 Collage: Evgeniy Shvets/Stocksy Conceptual art of the 1960s has spawned a few exhibition approaches that we should take a closer look at. Between the two lockdowns this year, I had the opportunity to visit the new premises of the Haubrok Foundation on Strausberger Platz in Berlin Friedrichshain. Barbara and Axel Haubrok, who come from North Rhine-Westphalia, have been collecting contemporary art since 1988 — with a special focus on conceptual art. People argue about whether this is art at all — or rather a gesture — and what exactly is meant by it. “All conceptual art is just pointing at things,” said the painter Al Held. The author and curator Tony Godfrey sums it up as follows: “Conceptual art is not about shapes and materials, but about ideas and meanings.” He identifies four manifestations: Ready-mades à la Marcel Duchamp, an intervention that brings images, texts and objects into a surprising context, the documentation, in which the actual work is only visible through notes, maps or photos, or words that only represent the concept in terms of language and typography. “During the exhibition”, curated by Axel Haubrok and his son Konstantin, falls into the third category and interests me for three reasons. First: concepts are the core of my work. Second: How do you present concepts from past exhibitions in such a way that they can be understood by someone who has not seen them? Third: Admittedly, I also wanted to see the rooms on Strausberger Platz. A former apartment, two rooms plus kitchen (which is also used as an exhibition room). It is located in a monumental building complex from the 1950s on Karl-Marx-Allee, which was shaped by the architect Hermann Henselmann. This striking place, closely linked to the history of the GDR, is also a setting in Jonathan Franzen’s “Purity”. Favorite novel, favorite writer. I can get to Strausberger Platz from my home in Berlin in seven minutes by bike. 
From a side entrance, the elevator takes me to the fourth floor. It smells musty, as is often the case in such old buildings, even if they are modernized; the light is ocher. Somehow I expect men in raincoats with leather briefcases to come around the corner and ask me what I’m doing here. Of course they don’t. The door to the apartment is open, voices can be heard from inside. I have registered for a time slot. A friendly lady greets me, equips me with documentation tightly printed over several pages, and lets me in. I turn right first. The floor-to-ceiling windows of the small salon with old herringbone parquet are open on this mild September day and offer a clear view of green treetops. There are high glass tables in the room, on which mostly documents and white printed cards are placed. In one corner there is a stereo system playing a sound recording. I had imagined the place would be more spacious; all in all the area should be maybe 60 square meters. Nevertheless, I feel lost for a moment, not knowing how to orient myself. I’m lucky, Axel Haubrok is there and provides the narrative as he leads me through the exhibition. He knows the stories behind every card and every document. For example, with regard to the title work by Robert Barry, “closed gallery”, he tells me that in the late 1960s, Barry invited people to exhibitions that were not open to the public. Haubrok raves about the still unrealized “Concept for a Book as an Exhibition Location” (1978) by Barbara Schmidt-Heins. Hans Ulrich Obrist, however, did realize his exhibition idea. He asked artists such as Ed Ruscha, Maurizio Cattelan, Gilbert & George, Isa Genzken, and Gerhard Richter to design postcards for him, and in 1993 he exhibited them in a hotel room in Paris where he was living at the time — without asking for permission. “Hôtel Carlton Palace. Chambre 763” was the title of the exhibition and its documentation. Great idea. 
The reproduced cards in the slipcase can be seen on the premises of the Haubrok Foundation. As I listen to Haubrok, I think: There is a lot in this place that we can learn for exhibitions that are permanently threatened with being postponed, canceled and relocated to virtual space. There is no vernissage, no finissage; artists are condemned to stay away from their own opening event. If the audience is allowed into galleries, they have to wear masks, disinfect their hands and of course not touch anything. In many museums there were no guided tours as long as they were open; you had to find your way around on your own. The experience is limited or does not take place at all, according to Barry’s motto: “During the exhibition the gallery is closed”. I very much hope not for much longer. Take-away for innovators: “Not only what we can see and take with us is valuable, but also the ephemeral that we experience.” These are the findings that I have taken with me for future exhibitions: 1. Physical experience remains pivotal. Yes, you can visualize ideas and even philosophical questions. The more complex the subject, the more important it is that an exhibition appeals to all our senses. It helps me to understand when I can walk through a room, look at things and return to any exhibit and relate the dimensions. A lot can be viewed online, but without the spatial dimension it is only half the fun. And — I find it interesting to see how others react to the exhibits, to hear what they are talking about with each other. 2. People make the difference. Exhibitions that raise questions, refer to connections and want to point out critical aspects are usually not self-explanatory, even with the best didactics and technical aids, such as audio guides and touchpads provided by museums. As a visitor, I take more information with me when there is someone who explains things to me and whom I can also ask questions. 
A round of applause to the knowledgeable, committed and often entertaining guides! I have learned a lot from them in museums, foundations, galleries, and also at company exhibitions in recent years. They are systemically important to me. 3. Documentation is essential. Without a catalog or some other form of documentation, such as collected invitation cards, letters and photos, many conceptual exhibitions would be lost to posterity. Some artists may prefer the ephemeral experience. But from the point of view of the public and our target groups, a catalog or a website can be an asset and should be understood as part of the concept. For many artists, the work and the publication belong together anyway. As it should be. Publications can be digital too of course. Think crossmedia. 4. Dada no longer works. In current times we are constantly experiencing deviations from conventions. Some even seem pointless to us. Under these circumstances it’s no real fun to be confronted with lights suddenly being turned off while you are visiting or exhibitions that remain closed to the public. We can create the surprising and the unexpected by changing the exhibition, not allowing it to be static. Visitors may be invited to participate. 5. Co-curation is the magic method. This is what the “strict curator” Otto Kobalek did in 1995. It has been documented in a video by Franz West and Hans Weigand, which can also be seen in “during the exhibition” at Haubrok Foundation. At the opening of the “Kollektion West” exhibition, not a single picture was hanging on the wall. Together with the visitors, Kobalek decided on the spot which work should be hung where. This by the way reminds me of the remarkable co-curation project that the NRW-Forum in Düsseldorf started in the middle of this year under the guidance of its artistic director, Alain Bieber: It is called nextmuseum.io. Interested curators could make suggestions via the social media channel Telegram and artists could upload their works. 
The related exhibition will be on view at the NRW-Forum in February 2021. More about that later. 6. The format will be the ever-tempting question. Where and how can concepts be made visible? To the extent that cities change and spaces become free, the answers to these questions can always turn out differently. But why not open apparently exotic locations for exhibitions? A hair salon to present the art of hair? A men’s room to address violence against women? Or simply send the concept — the philosophical question — nicely displayed and packed in a shoebox to 1,000 recipients, who in turn provide an impulse and then forward the box to the next thousand — well, what? Visitors no longer fits. People don’t come to the exhibitions, but the exhibition comes to their home. A kind of metamorphic chain staging. Exciting times are ahead of us in which we can — and have to — rethink and try out many things. Perhaps in a few years there will even be a completely new world of terms because “the artist”, “the exhibition” and “the visitor” will no longer fit in the traditional sense. The boundaries are blurring.
https://medium.com/the-innovation/the-artist-is-absent-d7af5bb3fe55
['Sepideh Honarbacht']
2020-12-22 22:32:19.983000+00:00
['Innovation', 'Concept', 'Conceptual Art', 'Creativity', 'Art']
The Ignored Obvious of UX Design Interviews (and interviews in general)
#1 Do Your Research Being a UX Designer, I use research to understand the problem at hand, know my audience, and gather enough context to make an informed decision about further steps. Research is an integral part of my design process, and I apply the same to interviews. In most cases, especially UX interviews, the recruiter will share details like the name and role of the interviewer, the team you will be interviewing with, links to products that team works on, and a link to company information. This is a good starting point for your research. Let's go into detail about what you can research. Explore the company This is the foremost aspect you should research, because you are first an employee of the company and then a part of a respective team within the company. Each company has its own set of products, its business domain, and a culture and values that have developed over time. It is valuable for interviewees to read about these aspects and have a deep understanding of the company. It shows the interviewer and the recruiter that you are interested and that you care about the potential company you wish to work for. Strategies for researching the company: Check the resources shared by the recruiter before the interview. These might include links to videos, articles and the website. Check recent news articles published online by and about the company. Get a sense of what the buzz around the company is right now. Connect with existing employees, via LinkedIn or Twitter, and learn from them what they admire about the company. Connect with previous interns and ask about their experiences. Know your interviewer This too is a key step, since the more you know your interviewer, the better you know what to expect. It is not very different from the UX design process, which starts with empathizing with your users so that you can design better for them. I employ the same approach to interviews. 
Sansa insists that she can make a better strategy against Ramsay because she knows him very well! Copyright belongs to Game of Thrones In most cases, the recruiter will tell you in advance who your interviewer will be. If not, you can ask the recruiter for details. Once you have the information, look them up on LinkedIn, Twitter, Dribbble, Behance, and Medium. Check the past companies they worked for, what they studied, and what their career path was. Identify their passion and personality as a designer. To help with this, you can ask yourself the following questions: Do they value visual design more than functional design, or vice versa, or both? Do they focus a lot on process? Are they more interested in the different tools and techniques used in the UX design process? What is their role and responsibility in the team? Is there a recent podcast or interview where the interviewer was featured? If yes, go ahead and listen to it. This will help you anticipate their goals for the interview and their expectations of you. Since this is all based on secondary research, the predictions will rest on a lot of assumptions and on your understanding of the materials. There will be missing dots, but that is okay! You can learn more about the interviewer during the interview and connect the dots on the go. Still, this will give you a head start in preparing for the interview. For example, for my UX design intern interview with Salesforce, my first interviewer had a visual design background and was working as a Senior UX Designer. So my focus while explaining the projects was on the product design process and UI design. I tried to avoid development jargon and explained everything related to development in a very simple manner. My next interview round was with the Director of Software Engineering and the Manager of the CX Tools team. 
Here, I knew I had to focus a lot on big-picture thinking, project scheduling and management, product strategy based on research, and finally on conveying how effective the solution was for users. I would say this strategy worked out pretty well for me and helped me prepare thoroughly for specific aspects of the respective interviews. How will this help you? It will help you tweak your elevator speech and background information for the interviewer. You will know which key points to hit from your journey into design. It will help you identify the right project from your portfolio to discuss during the interview. It will help you tweak the project explanation based on the interviewer's interests. Research the product Once you know the interviewer, you should look into the product the interviewer works on. This might be a potential project that you will work on, and it is good to have a sense of what you are getting into. Knowing the product will also give you an opportunity to ask questions and learn more about the product during the interview. I highly recommend reading as much as possible about the product. Where can you look for product information? These days, most products have some form of open-source GitHub repo. Visit the repo and go through its documentation. If there is a Twitter account for the product, skim through the feed to catch the recent discussions. Check out any articles about the product and its documentation. This should give you a good overview of the product. If you want to go above and beyond, try out the product and bring feedback for the team member who is interviewing you. This will show that you are not afraid to take the initiative to learn the product and share your perspective. But be very sure to mention the assumptions you have made in forming your suggestions. Each team has to abide by its own principles, has encountered its own challenges, and is tied to constraints while building the product. These are hard to know without talking to them. 
Thus, be considerate and respectful when putting your feedback in front of them. Following is an example of a sketch I made to present my idea for redesigning the Salesforce Status website. As I mentioned before, I detailed all the assumptions and considerations behind the proposal, and it was received positively by the interviewer.
https://uxdesign.cc/the-ignored-obvious-of-ux-design-interviews-and-interviews-in-general-f5f385b39168
['Nishant Panchal']
2019-01-05 05:45:13.750000+00:00
['UX Design', 'UX', 'Jobs', 'Interview', 'Design']
How to Build a Web API
How to Build a Web API Guided Example to Set Up an API using Python and Flask to Make Data Accessible to Users Note: The code for this post can be found here Photo by geralt on Pixabay API stands for Application Programming Interface. It is a software intermediary that allows systems to communicate with each other. Most businesses online have likely built APIs for customers and/or for internal use. For example, when a user enters a URL into their browser, e.g. www.medium.com, they are making a request to Medium's server. Medium will then give back a response to be interpreted and displayed in the user's browser. Modern client-to-server communications are mostly handled by APIs. The type and content of the response depend on a set of dedicated URLs (endpoints) and the request type. In this guided walkthrough, we'll use Python and Flask to build a Web API. Project Plan for Web API Web APIs democratize data and incentivize scientific research. Below is a small sample of what one might expect from a Web API that provides brewery data. As downstream users of APIs, once we know the format of the response, which is commonly JSON, we can consume this data for other applications in real time. This type of information should be found in the accompanying documentation. By the same token, designing an API from the perspective of a user is important to ensure it is useful to others. A good API will have well-designed URLs that make it easy for users to intuitively find resources.

[
  {
    "state": "CA",
    "city": "Napa",
    "name": "Napa Palisades Beer Company"
  }
]

When to Choose a Web API A Web API isn't the only approach to providing access to data. 
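Because the response format is known in advance, a downstream consumer can deserialize it with Python's standard json module. A minimal sketch, using the sample brewery payload shown above rather than a real network call:

```python
import json

# The sample brewery payload from above, as it might arrive from the API.
payload = '[{"state": "CA", "city": "Napa", "name": "Napa Palisades Beer Company"}]'

# Deserialize the JSON text into a list of Python dictionaries.
breweries = json.loads(payload)

# Downstream processing, e.g. filtering by state.
names = [b['name'] for b in breweries if b['state'] == 'CA']
```

In a real application the payload would come from an HTTP request to the API's endpoint, but the deserialization step is the same.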
However, it is a better alternative to a data dump when:
- The data set is large, making downloads resource-intensive
- The data changes or is updated frequently
- The data needs to be accessed in real time, perhaps for further processing
- The data doesn't need to be accessed in its entirety at once
RESTful API REST (REpresentational State Transfer) is a term frequently used to describe APIs. It is a framework that describes some best practices for implementing APIs. APIs that satisfy the REST principles are recognized as RESTful APIs. More information on REST can be found here. What the Flask Flask is a web framework for Python, which means it provides functionality for building web applications, including managing HTTP requests. We will need this feature to establish communication with our application. For this demonstration, we're using Flask in a very limited capacity, although it can be used to fully develop a robust website. Instructions for the installation and more can be found here. In addition, we will need an extension to handle the database connection to complete the API. Creating a Basic Flask Web Application We'll begin by using Flask to create a simple web application. After learning the basics of a Flask app and making sure the software is configured correctly, we'll turn it into a functioning API. Below is the layout of the API project folder. Importantly, it will contain an app.py code file and a breweries.sqlite file as the data source. Folder Structure

-====app.py====-
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Welcome'

if __name__ == '__main__':
    app.run()

In this step, we create an instance of the Flask class and pass the name of the application (the variable __name__) as the argument to the constructor. A Flask object is a WSGI (Web Server Gateway Interface) application, which means that the web server passes all the requests it receives to this instance for further processing. 
Flask maps HTTP requests to Python functions. This is achieved through the .route() decorator in the code. A route binds a URL to a function, which can be programmed to respond to a request. The designated URL, '/' (technically, when no additional path is provided), is mapped to our simple user-defined function, index(). This function returns a simple welcome text to be displayed in the browser. The request is then completed. The condition __name__ == '__main__' ensures that the .run() method starts the server when the script is executed as the main program. Run the application using Python and visit localhost:5000. Port 5000 on the local machine, or http://127.0.0.1:5000/, is the default port on which Flask serves the application.

$ python app.py
========================================
Output
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

To stop the server, hit CTRL+C in the terminal. Any URL that isn't mapped to a view function will be met with a 404 Not Found error. Sample Data Building onto what we've created so far, we can implement a small API with sample data that is hardcoded within our web application. We designate the route '/sample/' to call the user-defined sample() function, which returns the sample data in the form of a list of Python dictionaries with key-value pairs. When the URL in a route ends with a trailing slash (/), Flask will redirect any request without the trailing slash to the URL with the trailing slash. This means a request to /breweries will be redirected to /breweries/. The query results have to be serialized through the jsonify() method and turned into a response object. Creating a Database (Optional) To optimize the data transfer process, we'll load the data into a database and allow the application to query it as needed. We use the Python SQL toolkit SQLAlchemy to accomplish much of what we need. It also has an extension for Flask. For completeness, this demonstration will begin with data in the form of a Pandas DataFrame. 
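A tiny stand-in for that starting DataFrame might look as follows; the columns mirror the brewery table used throughout the article, and the row values are made up for illustration:

```python
import pandas as pd

# A minimal stand-in for the breweries data the article loads from CSV;
# column names follow the article's schema, the row is illustrative.
df = pd.DataFrame(
    {'name': ['Napa Palisades Beer Company'],
     'city': ['Napa'],
     'state': ['CA']})
```

In the article's workflow this DataFrame would instead come from pd.read_csv('breweries.csv').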
Continue to the next steps if the data is already in a database.

-====csv_to_sqlite.py====-
import pandas as pd
from sqlalchemy import create_engine

conn = create_engine('sqlite:///breweries.sqlite').connect()

df = pd.read_csv('breweries.csv', index_col='index')
df.to_sql('breweries', conn)

# use PRAGMA table_info to get column names
# conn.execute('PRAGMA table_info(breweries);').fetchall()

sql_rm_query = 'ALTER TABLE breweries RENAME TO breweries_old;'
sql_crt_query = 'CREATE TABLE breweries (\
    id bigint primary key,\
    name text,\
    website text,\
    address text,\
    city text,\
    state text\
);'
sql_ins_query = 'INSERT INTO breweries SELECT * FROM breweries_old;'
sql_drp_query = 'DROP TABLE breweries_old'

conn.execute(sql_rm_query)
conn.execute(sql_crt_query)
conn.execute(sql_ins_query)
conn.execute(sql_drp_query)

We're also choosing SQLite for our application for its ease of use. Normally, loading a DataFrame into a table can be done quickly with to_sql(). However, it doesn't allow a column to be defined as the primary key, which would be useful for our database. Since SQLite doesn't allow significant changes to existing tables, we use a series of SQL commands to create a new table with a primary key and insert into it the data from the old table. Connecting the Web Application to the Database With the database set up, making the connection is easy with SQLAlchemy. More information on SQLAlchemy can be found here. Having established the connection, the web application can perform queries against the database. For what we designed the API to do, we want to perform a SELECT query when a designated URL is accessed. 
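The same rename/create/insert/drop workaround can be sketched with the standard library's sqlite3 module alone, using an in-memory database and a made-up row:

```python
import sqlite3

conn = sqlite3.connect(':memory:')

# The table as to_sql() would have left it: no primary key.
conn.execute('CREATE TABLE breweries (id bigint, name text, city text, state text)')
conn.execute("INSERT INTO breweries VALUES (1, 'Napa Palisades Beer Company', 'Napa', 'CA')")

# The workaround: rename the old table, recreate it with a primary key,
# copy the rows over, then drop the old table.
conn.execute('ALTER TABLE breweries RENAME TO breweries_old')
conn.execute('CREATE TABLE breweries (id bigint PRIMARY KEY, name text, city text, state text)')
conn.execute('INSERT INTO breweries SELECT * FROM breweries_old')
conn.execute('DROP TABLE breweries_old')

rows = conn.execute('SELECT id, name FROM breweries').fetchall()
```

The column list here is shortened for illustration; the article's actual table also carries website and address columns.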
-====app.py====-
from flask import jsonify
from flask_cors import CORS
from sqlalchemy import create_engine

CORS(app)
engine = create_engine('sqlite:///assets/data/breweries.sqlite')

@app.route('/breweries/')
def fetch():
    results = engine.execute('select state, city, name from breweries')
    return jsonify([{'state': result[0], 'city': result[1], 'name': result[2]} for result in results])

We use the Flask-CORS extension to make cross-origin resource sharing possible, since we are hosting the database and API in a separate domain. Without going into great detail, CORS enables controlled access to resources located outside of a given domain. The create_engine() function produces an Engine object based on the database URL and is usually the starting point for any SQLAlchemy application. The database URL usually includes the username, password, hostname and database name, as well as optional keyword arguments for additional configuration. The typical form of a database URL is: dialect+driver://username:password@host:port/database We designate the route '/breweries/' to call the user-defined fetch() function, which executes a SQL query against the database and returns the results. P.S. For simplicity's sake, we did not use the Object-Relational Mapping (ORM) feature from SQLAlchemy, which it is famous for. The idea is to allow queries, simple or complex, to be written using the object-oriented paradigm (Python). About Routes and API Design As it stands, the API returns all results to the user at one URL. This approach is inefficient and not a thoughtful design. We can improve our API by allowing users to narrow or filter the data they "request". One way to solve this problem is to designate more URLs for the different potential use cases. We can create as many routes as needed for our application. 
This will be necessary when the application expands its functionality. Web applications use different HTTP methods when accessing URLs. For the purposes of this API, we will only use the default GET requests. That is, .route() defaults to .route(methods=['GET']).

@app.route('/breweries/california')
def fetch_california():
    # return filtered results
    ...

@app.route('/breweries/texas')
def fetch_texas():
    # return filtered results
    ...

...etc.

While this might solve some problems from the user's perspective, the code is cumbersome to write. Adding Query Parameters The routes that we have created so far were static. An alternative approach to our problem is to allow dynamic URLs by including variable(s) as part of the .route() decorator. This RESTful approach makes the API much more maintainable and usable. Variables can be added to a URL by marking sections with <variable_name>. The mapped function will receive the <variable_name> as an argument. The specifics passed through the URL are known as query parameters. 
-====app.py====-
@app.route('/breweries/')
@app.route('/breweries/<query_string>')
def fetch(query_string=None):
    query_param = 'where '
    params = 0
    if query_string:
        for each_param in query_string.split('&'):
            key, value = each_param.split('=')
            if params > 0:
                query_param += ' and '
            if key.lower() == 'state':
                query_param += f'state="{value.upper()}"'
                params = params + 1
            if key.lower() == 'name':
                query_param += f'name like "%{value.capitalize()}%"'
                params = params + 1
            if key.lower() == 'city':
                query_param += f'city="{value.capitalize()}"'
                params = params + 1
        results = engine.execute(f'select state, city, name from breweries {query_param}')
    else:
        results = engine.execute('select state, city, name from breweries')
    return jsonify([{'state': result[0], 'city': result[1], 'name': result[2]} for result in results])

The <query_string> placeholder will capture anything that comes after the '/breweries/' URL. For example, if the user visits '/breweries/state=tx&name=brewery', then 'state=tx&name=brewery' will be passed to the function as the query_string argument. We can also designate multiple URLs to the same function. In this case, the fetch() function is triggered regardless of whether a query string is provided. The variable has to be included as a parameter in the function definition. 
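The same filtering rules can be distilled into a standalone helper, which also makes the parsing easy to test in isolation. This is a hypothetical refactoring for illustration, not the article's code:

```python
def build_where_clause(query_string):
    """Translate a query string such as 'state=tx&name=brewery' into a
    SQL WHERE clause, following the filtering rules described above."""
    if not query_string:
        return ''
    clauses = []
    for each_param in query_string.split('&'):
        key, value = each_param.split('=')
        key = key.lower()
        if key == 'state':
            clauses.append(f'state="{value.upper()}"')
        elif key == 'name':
            clauses.append(f'name like "%{value.capitalize()}%"')
        elif key == 'city':
            clauses.append(f'city="{value.capitalize()}"')
    return ('where ' + ' and '.join(clauses)) if clauses else ''
```

Note that, like the route above, this interpolates user input directly into SQL; a production API should use parameterized queries instead to avoid SQL injection.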
As part of the API design, we let users know that the web application expects query strings in a specific format: /breweries/state=tx&name=brewery Much of the rest of the code is a series of string manipulations that parse the input variable and build the SQL query statement. If no query_string is passed, the web application returns all entries from our database, as before. API in Action Using the API, users can create analytical tools that quickly summarize the data. An advantage of going through an API is that downstream consumers of the data automatically stay up to date with the database. As it turns out, California (not surprisingly) leads the country in the number of breweries. Brewery Count by State (Plotly.js) Summary In this article, we used Flask to create an API. Once you're satisfied with your web application, I hope you'll consider deploying it online. I can't wait to see what you create.
https://medium.com/python-in-plain-english/how-to-build-a-web-api-fa0f3bd73a71
['Kevin C Lee']
2020-10-21 21:45:57.518000+00:00
['Programming', 'Python Programming', 'Data Science', 'Python', 'Flask']
4 Signs You May Have an Anxious Attachment Style
4 Signs You May Have an Anxious Attachment Style How the anxious attachment style shows up in relationships I have an anxious attachment style. By no means do I wish this to define me, but my exploration of what this means for me and my relationships has been powerful and nothing short of transformational. The more I learned about what having this attachment adaptation looks like and how it shows up in relationships, the more empathy and patience I can have for myself. No, I am not “defective” or “difficult” — I simply adapted as a child to keep myself safe. As Diane Poole Heller says in her book The Power of Attachment, “ambivalently attached folks really want a relationship so their attachment system is full on.” The “full-on” really resonates with me and if you, too, have an anxious (also referred to as ambivalent) attachment style, it may resonate with you as well. Here are a few ways (summarized from Power of Attachment) that the anxious adaptation may show up in a relationship: It is stressful when people leave A partner going away on a business trip. Your lover is out celebrating with a group of their friends where alcohol and attractive strangers are sure to make an appearance. Maybe they even just had a busy day at the office and left your text from the morning unanswered. These situations can be enough to cause a great deal of distress for those of us with an anxious attachment style. I swear when I was dating someone, especially in the early days, I would start watching my phone for a text from my new love interest as soon as we separated. It was hard to say goodbye and trust that the next “hello” would happen. Fear that in my absence, their eyes would wander and they would leave me for someone else. Or perhaps, that they would simply realize I am not worthy of their love and I would never hear from them again. Sometimes, this does happen, unfortunately only confirming our fear of abandonment. 
For anxiously attached folks, however, goodbye is always stressful, no matter how open, committed, and communicative our partner may be. You are other-focused Ambivalently attached people may also find themselves more focused on others than themselves. They may be so consumed with scanning for threats in their relationship, that they can lose connection to themselves. This may show up as hypersensitivity to their partner’s actions, words, or lack of actions or words. We may look to others to help us soothe but unfortunately, this can lead to a loss of control because we become dependent on others to help us feel at home within ourselves. Connecting to oneself and learning to self-soothe can be life-changing for the ambivalently attached. I know for me this shows up as reading into my partner’s tone of voice, body language, and how they interact with me. I sometimes can pick up on whether or not something is wrong with my partner before they even realize something is bothering them. In a way, I see it as a bit of a gift but at the same time, it takes a lot of my energy. With all of this energy going into reading how other people are feeling, I often forget to check in with myself. When I am upset, I often want my partner to soothe me. Although there is nothing wrong with this, there are times he is not available or doesn’t have the capacity to do this. In those moments, I need to be able to soothe myself. This is something that took me many years to learn, and admittedly, I am still learning. Emotional regulation ain’t easy. You have an abundance of right-brain activity I found this to be fascinating: whereas avoidant people have an overactivity of left brain activity, those with an anxious attachment style have more activity in their right brain (Diane Poole Heller in The Power of Attachment). 
To simplify — the left brain is “thinky” or analytical and methodical whereas the right brain is “feely” and responsible for reading faces, emotional tone, and social cues (Stan Tatkin in Wired for Love). This can show up as hypersensitivity and jealousy with our partners. Although these reactions are cries for connection, the nature of these tendencies can end up pushing people away. This just further confirms our belief that people are bound to leave us. This also means we are highly sensitive and attuned to our partners’ feelings. I am open-hearted and incredibly loving and supportive and chances are if you are anxiously attached, these qualities reside in you as well. I think that it is important to acknowledge that although attachment adaptations can be challenging at times, they can also show up in beautiful and endearing ways in our relationships. You have a strong need for reassurance Those with an anxious attachment style need a lot of reassurance. I know that this can be annoying or tiresome for those in a relationship with us. If there is no reason for worry — why do we need so much reassurance? For me, reassurance feels a bit like a hug. I can feel it in my body. It is warming and reminds me that I am safe. With an attachment system that is always on and highly sensitive, reassurance helps me to remember that everything is okay and that I can turn it off for a little while. I find it shows up as clinging to my safety in a relationship. Not having control feels unsettling so an unreturned text message can leave me spiralling and I can very quickly become reactive. Cue twenty texts in a row and an alarming number of missed calls. A simple message or phone call in reply that is filled with warmth and reassurance can often be enough to put an anxiously-attached person at ease.
https://medium.com/wholistique/4-signs-you-may-have-an-anxious-attachment-style-e0511389a280
['Casey A.']
2020-09-28 15:41:39.895000+00:00
['Relationships', 'Attachment', 'Self Development', 'Anxious Attachment Style', 'Psychology']
#MakeoverMonday2019 — Week 2. Press Freedom dark’s horizon
#MakeoverMonday2019 — Week 2 Press Freedom dark’s horizon Original report Original interactive viz What I like about this viz: All What I don’t like:
https://medium.com/phat-vu/makeovermonday2019-week-2-8cd5dbab2649
['Pat Vu']
2019-01-10 09:03:30.459000+00:00
['Makeovermonday2019', 'Tableau', 'Visualization']
25 Best Free MacBook Mockups to Create Perfect Web/Portfolio Designs
25 best free clay/flat/white/dark MacBook mockups and templates in PSD and Sketch formats are introduced for you to create perfect web/portfolio designs. A good screen mockup in PSD or Sketch format helps designers and marketers make professional and attractive websites, portfolios and ad designs with simple clicks. However, searching for the perfect free laptop screen mockup can be time-consuming. If you are looking for free computer screen mockups, especially for MacBook laptops, to create a stunning web/app/portfolio advertising design, then look no further. Mockplus has rounded up 25 of the best free MacBook mockups and templates in PSD and Sketch formats, including the common clay/white/flat/dark styles, for you. Feel free to use these to improve your web/app/portfolio/advertising designs: Contents Table: 25 best free MacBook mockups & templates free download 5 best sites to download MacBook mockups & templates 3 must-have MacBook mockup generators 25 Best MacBook Mockups & Templates Free Download [PSD+Sketch] Take a look at 25 of the most professional and beautiful MacBook mockups and templates in different angles and scenarios: 1. Modern iPhone X and Macbook Mockup PSD Designer: Anthony Boyd Graphics Format: PSD Size: 276 MB Dimensions: 5000 x 3750 px About: This MacBook mockup example is a realistic mockup which is perfect for designers wishing to showcase their Mac OS website designs. It is created in Cinema 4D, which makes it the perfect choice for users seeking a very beautiful and fashionable screen mockup to showcase their websites or portfolios. Free download 2. Free Flat Macbook Mockup PSD Format: PSD Size: 6 MB About: This is a clean and free flat MacBook mockup template that allows designers to stylishly present their designs. Everything is editable and well detailed, allowing users to build their website/portfolio to their own liking. Free download 3. 
iMac/iPhone/iPad/MacBook Clay Mockups [PSD+Sketch] Designer: Ramotion Format: PSD + Sketch About: A set of iMac, iPhone, iPad and MacBook clay mockups in black and white is packed in this mockup sample. It is carefully crafted and fully editable with smart reflections. It offers users both PSD and sketch formats for a better UX. Free download 4. Free Macbook Pro Mockup PSD Designer: PSD Graphics Format: PSD About: This is a high quality, realistic Macbook pro office environment mockup design. With this mockup, designers can freely replace images on the screen and edit any layer based on their web/portfolio design needs. It is perfect for showcasing portfolio or website design projects. Free download Related article: 8 best web design portfolio examples for learning 5. Fancy Free Macbook Pro Mockup PSD Designer: Artø Format: PSD About: This fancy free Macbook mockup sample is a clean and beautiful way for you to showcase your design projects. It features 3 color variations, raster backgrounds and editable screens with smart objects. Free download 6. Free Perspective Macbook Screen Mockup PSD Designer: Reza Azmy Format: PSD Dimensions: 2304 x 1440 px About: This free mockup template includes 50 items and 8 PSD files for designers. All objects and shadows are designed on separated layers. Users can fully customize these templates based on their design needs. It also features editable background colors and a PDF format Help file. Free download 7. Free Realistic MacBook Mockup PSD Designer: Reza Azmy Format: PSD Dimensions: 3000 x 2000 px About: With this realistic Macbook mockup, users will get 10 PSD mockup files with smart objects. They all are designed from different angles. Most of the objects, shadows and background are separated and editable. The mockup colors can be also adjusted, if necessary. Free download 8. 
MacBook & iPhone X Mockup PSD Format: PSD Dimensions: 4000 x 2500 px About: This MacBook mockup example is ideal for creating MacBook website presentations. It features separated objects and shadows. It should be noted that this mockup is partially free, with some premium features requiring users to pay. Free download 9. MacBook Pro Mockup Freebie PSD Designer: Gustav Ågren Format: PSD Dimensions: 5000 x 3000 px About: Download this free MacBook mockup and enjoy its editable Touch Bar as well as smart objects. Its 3D effects are another great reason to choose it. Free download 10. Red Macbook Pro Mockup Free PSD Format: PSD About: This MacBook mockup features a realistic indoor environment, helping create a professional look for your website design. It provides smart object layers that allow users to change the objects with simple clicks. Free download 11. Free Home Office Desk with Macbook Pro Mockup PSD Format: PSD Size: 11.71 MB Dimensions: 4000 x 2667 px About: This is a MacBook mockup on a home office desk, and it features smart layers. It is free for personal and commercial use. Free download 12. Macbook Minimal Mockup for Sketch Freebie Format: Sketch Size: 60 KB About: This white MacBook mockup in Sketch format features a minimal, subtle, clay-render style. It is free for everyone. Free download 13. White Macbook & iPhone Mockup Sketch Format: Sketch Size: 3 MB About: This is a super clean white MacBook and iPhone mockup template in Sketch format. Free download 14. Macbook Pro Mockup Pack Designer: Alexander Format: PSD + JPEG Dimensions: 3000 x 2200 px About: This MacBook mockup pack allows designers to easily combine their designs into a perfect, realistic photo showcase. It offers 6 PSD and 6 JPEG files for users to freely beautify their web/app/portfolio designs. If you are a newbie, its Help file is a good guide to help you improve your designs. Preview online 15. 
10 Macbook Scenes Mockup PSD Designer: Asylab Format: PSD Dimensions: 5000 x 5000 px About: This bundle of mockups includes 6 PSD Macbook mockups and 4 PSD iPhone XS mockups. It features an isometric design style and rich default colors, including white, black, gold, rose, red, etc. Of course, users can further customize the colors if they wish. Check details 16. Elegant & Clean Macbook Pro Mockup PSD Format: PSD Dimensions: 6000 x 4000 px Orientation: Landscape About: This mockup design pack includes 15 high quality PSD files and allows designers to create realistic and professional web or application designs quickly. It is packed with mockups from different angles, so you can adjust and showcase your designs in beautiful ways. Free download 17. Isometric Macbook with Shapes Mockup PSD Format: PSD Dimensions: 4000 x 2800 px About: This mockup includes 6 PSD files with smart objects. Users can freely change the color of the included shapes or backgrounds. Check details 18. Minimalist Macbook Screen Showcase Mockup PSD Format: PSD Dimensions: 3000 x 2000 px About: This is a minimalist Macbook screen mockup featuring a clean design style. It allows designers to insert their website designs with one click. In short, it is a perfect option for creating a minimalist website design/portfolio/ad. Free download 19. Flying Macbook Pro Screen Mockup PSD Format: PSD Dimensions: 3000 x 2000 px Orientation: Landscape About: This beautiful Macbook pro screen mockup template features a flying-angle design. It allows users to easily customize the background colors and element layers to create an eye-catching web design. Free download 20. 7 Creative Macbook Pro Scenes Mockup PSD Format: PSD Dimensions: 6400 x 4800 px About: This bundle of Macbook pro mockups is packed with 7 different scenes in high resolution (6400 x 4800 px). This template comes in silver, but the color can be easily changed based on your design needs. Check details 21. 
MacBook Mockups PSD Format: PSD Dimensions: 3500 x 2300 px Orientation: Landscape About: This is an Apple device mockup collection that supports MacBook, iPad and iPhone 5 devices. All mockups are designed with smart objects. You can freely edit the device screen with simple clicks. You can also use these device mockups together or separately based on your needs. In short, this pack is a good resource for creating website/ad designs with a good choice of devices and colors. Free download 22. Clean MacBook PSD Format: PSD Dimensions: 4500 x 3000 px Orientation: Landscape About: This clean MacBook mockup sample offers designers 5 high-resolution PSD files to present responsive websites. Its separate layer sets are one of its most attractive features, allowing users to customize their web/app designs. Its rich scenario options are also worth exploring. A Help file is also included for a better UX. Free download 23. Flexible Macbook Mockups PSD Format: PSD Orientation: Landscape About: This is a vector and fully layered PSD mockup example with detailed, clean and outline styles. Users can customize their web or portfolio designs based on their own needs. Free download 24. Dark Bruno Paul Macbook Mockup Free download 25. Free Dark Macbook Device Mockup Sketch Free download We hope this collection of the 25 best Macbook mockups and templates can help you create gorgeous website/portfolio/ad designs. 5 Best Sites to Download MacBook Mockups & Templates If the above mockup list is not enough, below are the 5 best websites for finding Macbook mockups and templates that suit your needs: 1. Dribbble.com As one of the most popular places for designers to share their designs and gain inspiration, Dribbble.com is a good place for designers to search and download free design resources, including Macbook mockup and template resources. 2. 
Behance.com Behance.com is another important website for designers to showcase fresh design work and download different design resources for free. Designers can search and find their desired Macbook mockups there. 3. Elements.envato.com As a professional design material website, Elements.envato.com offers lots of creative and beautiful Macbook mockup templates for designers. Some of them are free, while others have a fee associated with them. Designers can download based on their own needs. 4. Creativebooster.net Creativebooster.net is also a design resource website that lists many free Macbook mockup and template resources. 5. Mockplus.com/blog Mockplus.com/blog shares different resources for designers to create excellent website/app/portfolio designs. It also offers a wide range of Apple device mockup and template resources. Check 9 amazing sites to get free mockup templates for designers 3 Must-Have Macbook Mockup Generators If you still cannot find your desired mockup, below are three must-have Macbook mockup generators for creating the perfect Macbook mockups or templates on your own: Mockuper.net is an online mockup generator that allows designers to customize their Macbook mockups or templates with simple clicks. It offers a useful mockup library for creating mockups for different environments. Mockplus, an all-in-one rapid prototyping tool, is also a great mockup generator, allowing designers to bring their design ideas into interactive Macbook mockups. Mockplus offers designers a powerful component library and icon library so that they can customize their mockups down to the smallest detail. Of course, you can also use it to create your web/app prototypes, and test and share them in 8 ways freely. Its new design collaboration and handoff tool, Mockplus iDoc, is a handy design tool for designers and developers to create prototypes, comment on designs and gather feedback, download and hand off designs, and upload and manage design documents online effortlessly. 
This tool is another good mockup generator that enables users to create quality Apple device mockups with simple clicks. This mockup generator helps designers make perfect Macbook mockups easily and quickly. Wrap Up No matter what your purposes are, we hope these 25 best free MacBook mockups and templates can help you. And if none of them fits, the three must-have MacBook mockup generators (like Mockplus) can help you create striking web/ad designs on your own.
https://medium.com/dsgnrs/25-best-free-macbook-mockups-to-create-perfect-web-portfolio-designs-3f064c9a6ab8
['Trista Liu']
2019-05-25 06:10:19.962000+00:00
['MacBook', 'Template', 'Prototype', 'Free', 'Design']
Interesting AI/ML Articles You Should Read This Week (Sep 19)
“GPT-3 impressively explains the origin of everything” Kirk Ouimet’s article is a dialogue between himself and GPT-3, which is referred to as the ‘Wise Being’. The dialogue centers on the origin of the Big Bang and associated topics such as time, space and the Universe. I was expecting to be bored, or at best mildly impressed, by the output of the ‘Wise Being’ in the dialogue. After reading the entire dialogue and article, I have to admit that the responses from the ‘Wise Being’ felt almost human-like and surpassed my initial expectations. The responses were well put together and had some form of logic; well, as much logic as possible when answering questions that are outside the realm of human imagination. The key takeaway for me from this article is that the GPT-3 language model is clearly very robust and can mimic creativeness. It also has the ability to draw upon relevant source text from its training data to provide some decent responses. It should be noted, though, that GPT-3’s responses are not actually unique and are not generated as a product of reasoning. Not for now, anyway. An excellent read for:
https://towardsdatascience.com/interesting-ai-ml-articles-you-should-read-this-week-sep-19-92ee6b14c12c
['Richmond Alake']
2020-09-18 22:36:10.289000+00:00
['Technology', 'Towards Data Science', 'Data Science', 'Artificial Intelligence', 'Machine Learning']
Dear Poets, Comic Artists, Humorists, Flash & Micro-Fiction Writers
The news that Medium is switching to a “read time” metric to reward stories in the Partner Program has many up in arms. I write mainly one-minute humor, comics, micro-fiction and poems, so the news is relevant to me. Let’s face it: Medium has never favored creative writing. They prefer current-event essays and confessional writing. We have always been the “second-page” categories. This has not changed. There is no Medium Poetry publication. Or Humor. Or Fiction. Will the new system affect us? Of course. For good or bad? We will see. I encourage you to wait until the new system is in effect before you revolt. Honestly, I am not getting rich on my humor, poetry, and micro-fiction. Or my long fiction either. Does anyone actually make decent money publishing creative writing on Medium? Why haven’t you written an essay on how? Everyone else has. I want to read it. I typically post daily, and my monthly earnings are still meager. If they go lower… What’s below meager?
https://medium.com/mark-starlin-writes/poets-comic-artist-humorist-flash-micro-fiction-writers-6403d8659200
['Mark Starlin']
2020-08-28 18:34:05.040000+00:00
['Writing', 'Essay', 'Medium', 'Money', 'Partner Program']
12 Factor App Principles and Cloud-Native Microservices
The 12-factor app is a methodology, a set of principles for building scalable, performant, independent, and resilient enterprise applications. It establishes general principles and guidelines for creating robust enterprise applications, and it has become very popular because it aligns with microservice principles. The twelve factors are:

1. Codebase (one codebase tracked in revision control, many deploys)
2. Dependencies (explicitly declare and isolate the dependencies)
3. Config (store configurations in the environment)
4. Backing services (treat backing resources as attached resources)
5. Build, release, and run (strictly separate build and run stages)
6. Processes (execute the app as one or more stateless processes)
7. Port binding (export services via port binding)
8. Concurrency (scale out via the process model)
9. Disposability (maximize robustness with fast startup and graceful shutdown)
10. Dev/prod parity (keep development, staging, and production as similar as possible)
11. Logs (treat logs as event streams)
12. Admin processes (run admin/management tasks as one-off processes)

Codebase (one codebase tracked in revision control, many deploys): The 12-factor app advocates that every application should have its own codebase (repository). Multiple codebases for multiple versions must be avoided; having branches is fine. That is, for all deployment environments there should be only one repo, not several. Multiple apps sharing the same code is a violation of the twelve factors; opt for shared libraries instead. In 12-factor terms, a deploy is a running instance of the app, such as production, staging, or QA. Additionally, every developer has a copy of the app running in their local development environment, each of which also qualifies as a deploy. Different versions (a version being a code change that is present in one environment but not in another) may be active across deploys. 
Microservices: In microservices, every service should have its own codebase. An independent codebase makes the CI/CD process for your applications easier. The twelve-factor app advocates not sharing code between applications; if you need to share code, build a library, make it a dependency, and manage it through a package repository like Maven.

Dependencies (explicitly declare and isolate the dependencies): This is about managing dependencies externally using dependency-management tools instead of adding them to your codebase. From a Java perspective, think of Gradle as a dependency manager: you declare all dependencies in the build.gradle file, and your application downloads them from the Maven repository or various other repositories. You also need to consider dependencies at the operating-system and execution-environment level. Microservices: All application packages are managed through package managers like sbt or Maven. In non-containerized environments, you can use configuration-management tools like Chef or Ansible to install system-level dependencies; in a containerized environment, use a Dockerfile.

Config (store configurations in the environment): Anything that varies between deployment environments is considered configuration. This includes: database connections and credentials, and system integration endpoints; credentials for external services such as Amazon S3, Twitter, or any other external apps; and application-specific information like IP addresses, ports, and hostnames. You should not hardcode any configuration values as constants in the codebase; that is a direct violation of 12-factor principles. Instead, the principles suggest saving configuration values in environment variables. They advocate a strict separation between code and configuration. 
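To make the config principle concrete, here is a minimal Java sketch (not from the article) of reading settings from the environment instead of hardcoded constants. The variable names APP_DB_URL and APP_PORT, the defaults, and the AppConfig class are all illustrative assumptions.

```java
import java.util.Map;

// Hypothetical example: configuration comes from environment variables.
// The environment is passed in as a Map so the class can be exercised
// without touching the real process environment.
public class AppConfig {
    private final String dbUrl;
    private final int port;

    public AppConfig(Map<String, String> env) {
        // Defaults apply only when a variable is absent, e.g. in local dev.
        this.dbUrl = env.getOrDefault("APP_DB_URL", "jdbc:h2:mem:dev");
        this.port = Integer.parseInt(env.getOrDefault("APP_PORT", "8080"));
    }

    public String dbUrl() { return dbUrl; }

    public int port() { return port; }

    public static void main(String[] args) {
        // In a real deploy this would be new AppConfig(System.getenv()),
        // so the same build artifact runs unchanged in every environment.
        AppConfig cfg = new AppConfig(Map.of("APP_PORT", "9090"));
        System.out.println(cfg.dbUrl() + " on port " + cfg.port());
    }
}
```

With this shape, the same artifact can be promoted from staging to production by changing only the environment, never the code.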
The code must be the same regardless of where the application is deployed; whatever varies from environment to environment must be moved into configuration and managed via environment variables. Microservices: Externalize configuration from the application. In a microservice environment, you can manage application configuration from source control (e.g. git with spring-cloud-config) and use environment variables so that sensitive information never lives in source control.

Backing services (treat backing resources as attached resources): Per the 12-factor principles, a backing service is any application or service the app consumes over the network as part of its normal operation. Databases, message brokers, and any other external systems the app communicates with are treated as backing services. A 12-factor app can swap one provider for another without any modifications to the codebase. Say you want to change the database server from MySQL to Aurora: you should not need any code changes; a configuration change alone should take care of it. Microservices: In a microservice ecosystem, anything external to a service is treated as an attached resource, and a resource can be swapped at any point in time without impacting the service. Interface-based programming lets you swap providers dynamically without impact on the system, and plug-in based implementations help you support multiple providers.

Build, release, and run (strictly separate build and run stages): The application must maintain a strict separation between the build, release, and run stages. Build stage: transform the code into an executable bundle/build package. 
Release stage: take the build package from the build stage, combine it with the configuration of the deployment environment, and make the application ready to run. Run stage: run the app in the execution environment. Microservices: You can use CI/CD tools to automate the build and deployment process, and Docker images make it easy to separate the build, release, and run stages cleanly.

Processes (execute the app as one or more stateless processes): The app executes inside the execution environment as a process, and it can have one or more instances/processes to meet user demand. Per the 12-factor principles, the application should not keep data in memory; it must be saved to a backing store and used from there. As far as state is concerned, store it in a database rather than in the memory of the process. Avoid sticky sessions; using them is a violation of 12-factor principles. If you need to store session information, choose Redis, Memcached, or another cache provider based on your requirements. Following this keeps your app highly scalable without any impact on the system. Microservices: By adopting the stateless nature of REST, your services can be scaled horizontally as needed with zero impact. If your system still requires state, use attached resources (Redis, Memcached, or a datastore) to store it instead of keeping it in memory.

Port binding (export services via port binding): The twelve-factor app is completely self-contained and does not rely on the runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port and listening for requests coming in on that port. In short, your application runs standalone rather than being deployed into an external web server. Microservices: Spring Boot is one example of this. 
Spring Boot ships by default with an embedded Tomcat, Jetty, or Undertow server.

Concurrency (scale out via the process model): This is about scaling the application. The twelve-factor principles suggest running your application as multiple processes/instances instead of one large system. You can still use threads to improve the concurrent handling of requests. In a nutshell, the principles advocate horizontal scaling over vertical scaling (vertical scaling adds hardware to the system; horizontal scaling adds instances of the application). Microservices: With containerization, applications can be scaled horizontally on demand.

Disposability (maximize robustness with fast startup and graceful shutdown): The twelve-factor app's processes are disposable, meaning they can be started or stopped at a moment's notice. When an instance is starting or shutting down, it should not impact the application state. Graceful shutdowns are very important: the system must end up in a correct state and should not be impacted when new instances are added or existing ones are taken down as needed. This is known as disposability. Systems do crash for various reasons; the system should ensure that the impact is minimal and that the application is left in a valid state. Microservices: By adopting containerization in your microservice deployment process, your application follows this principle to a great extent out of the box. Docker containers can be started or stopped instantly, and storing request, state, or session data in queues or other backing services ensures that a request is handled seamlessly in the event of a container crash. 
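As a sketch of the disposability principle, the following hypothetical Java worker registers a JVM shutdown hook so that a stop request lets the current loop iteration finish instead of killing the process mid-request. The class and method names are my own, not from the article.

```java
// Hypothetical example: a disposable worker process. On shutdown, the hook
// flips a flag, the work loop drains, and the process exits cleanly.
public class DisposableWorker {
    private volatile boolean running = true;

    public boolean isRunning() { return running; }

    // Called by the shutdown hook (or a supervisor) to request a clean stop.
    public void stop() { running = false; }

    public void run() {
        // A SIGTERM from the orchestrator triggers this hook instead of
        // tearing the process down mid-iteration.
        Runtime.getRuntime().addShutdownHook(new Thread(this::stop));
        int handled = 0;
        while (running && handled < 3) { // bounded so the sketch terminates
            // Handle one unit of work per iteration, committing state to a
            // backing service so a crash between iterations loses nothing.
            handled++;
        }
    }

    public static void main(String[] args) {
        DisposableWorker worker = new DisposableWorker();
        worker.run();
        System.out.println("worker finished its batch");
    }
}
```

Combined with queue-backed work, this kind of hook is what lets an orchestrator add or remove containers at will without corrupting state.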
Dev/prod parity (keep development, staging, and production as similar as possible): The twelve-factor methodology suggests keeping the gap between the development and production environments as small as possible, which reduces the risk of bugs that show up only in a specific environment. The twelve-factor developer resists the urge to use different backing services between development and production. Microservices: This is an inherent feature of microservices run with containerization techniques.

Logs (treat logs as event streams): Logs are paramount for troubleshooting production issues and understanding user behavior; they provide visibility into the behavior of a running application. Twelve-factor principles advocate separating log generation from log processing: the application writes its logs to standard output, and the execution environment takes care of capturing, storing, curating, and archiving that stream. Microservices: In microservices, observability is a first-class citizen. It can be achieved through APM tools (ELK, New Relic, and others) or log aggregation tools like Splunk. By following these guidelines, all you need to do to debug an issue is go to your tool's central dashboard and search for it.

Admin processes (run admin/management tasks as one-off processes): There are a number of one-off processes in an application deployment, such as data migrations and one-off scripts executed in a specific environment. Twelve-factor principles advocate keeping such administrative tasks in the application codebase in the repository; that way, one-off scripts follow the same process defined for your codebase. Ensure one-off scripts are automated so that you don't need to run them manually before releasing a build. 
Twelve-factor principles also suggest using the execution environment's built-in tooling to run those scripts on production servers. Microservices: Containerization helps here too: run a one-off process as a task that shuts down automatically once done. That's all for today. I hope you have enjoyed the article. Please share your thoughts, ideas, or improvements in the comments box below. References: https://12factor.net/build-release-run https://www.nginx.com/blog/microservices-reference-architecture-nginx-twelve-factor-app/ https://blog.scottlogic.com/2017/07/17/successful-microservices-with-12factor-app.html
https://medium.com/techmonks/12-factor-app-principles-and-cloud-native-microservices-a383f6abc97f
['Anji']
2020-09-13 05:40:42.743000+00:00
['Microservices', '12 Factor App', '12 Factor']
Making Africa in Shenzhen (Part Two)
Making Africa in Shenzhen (Part Two) Learning the Business of Manufacturing This is the second part of this series, read the first piece here. Huaqiangbei (the electronics market) in Shenzhen at night As I sat in the back seat of my new friends’ Volkswagen Golf car, listening to the mix of English and Mandarin R&B music, I felt a mix of gratitude and privilege for the opportunity to be in China and resolved to make the most of it. I thought about the events that had taken place in my life that had led me to that very moment. As I counted the street lights that rushed by me, I had a good feeling that I was on the right path in the “Silicon Valley of hardware”. Like many makers I know who had their tertiary education in Africa, the curriculum didn’t meet my ambitions when I studied engineering. I searched for more knowledge where I could and Massive Open Online Courses (MOOCs) became solace. I have had to learn programming in Python and C, become an expert mechanical product designer, build IoT and machine learning applications, and groom my entrepreneurial acumen the hard way. Like many hackers and makers, I invested in learning and I developed competence and boldness by undertaking freelance projects to make ends meet in the harsh economic realities graduates face after completing university. I know of some colleagues who have external hard drives loaded with any MOOC videos that had the download button. My ride to Shenzhen was figuring out why I, and many like me, subject ourselves to such rigorous discipline to learn beyond the scope of the university’s curriculum when it is not what most employers look out for. Now I find myself in a place where I felt I was going to be intellectually challenged and even felt a bit intimidated. I didn’t want to visit a factory where a high school student would challenge my mechanical engineering knowledge. 
Due to my tight budget, I had booked a hostel to stay at in Shenzhen and had surveyed many options for accommodation before leaving Accra. My hosts insisted on getting me a hotel near their office instead, in a residential zone of Shenzhen. It was almost midnight when my hosts left me, and I was ushered to my new hotel room. After a few calls home to inform everyone interested in my safety that I had arrived at my destination, I jumped to bed and into a deep sleep. I loved the weather I woke up to and loved the free Chinese breakfast my hotel served. At first I loved the denizens I met in elevators and subway stations who made me feel like a celebrity by their stares and offers to take pictures. After a while though, I felt like I was the only African in Shenzhen when even policemen asked for selfies. I was obviously the odd one on the crowded subway trains and in the busy night markets and I realized how easy it was not to have heard about Africa in this country. From the little I saw when I was there, Chinese media had virtually no African content and, so I thought people were relying on narratives from western media portrayals of Africa. The Chinese clearly hold their language and culture in high esteem. All street signs and most public instruction was in Mandarin and I thought this was good for promoting a national identity. I had never fancied learning Mandarin until I heard so many people speak it. Overall, I think I experienced a very strong culture shock that took some learning and observing to overcome. Night market and street food in Shenzhen I woke up the next morning to my phone ringing; my hosts had taken the responsibility for making sure I woke up on time not to miss the morning breakfast. I was chauffeured to the head office of this factory to talk with the head of the manufacturing company; I mean, the biggest boss at the top of the corporate chain. Here I was being taken very seriously again. 
After a nervous ride up the elevator I was introduced to a lean healthy-looking man behind a tea table. He served me tea before a few exchanges of pleasantries and then he pulled out his note book. We proceeded to talk about the economics and business behind the prototype we were going to build. I was made to understand that most of the figures were speculative but based on their experience in manufacturing. This was a conversation I wasn’t quite ready for because I was so focused on the designing of the prototype. I got to learn a lot about manufacturing out of this meeting. Manufacturing isn’t only about putting components together to get them working, it is also about the logistics and supply chain. Most makers and DIY enthusiasts rarely think about the supply chain of their designs and its manufacturability. My hosts gave me a detailed analysis of their production capacity and even the dimensions of the products in a shipping container. I think that sourcing for inputs and managing logistics of finished products is a major problem in manufacturing in Africa. Shenzhen has an amazing connection between suppliers, manufacturing factories, and logistics and handling companies. This connection is forged in deep trust and this is what makes the speed of manufacturing overwhelming. I took notes of every detail we discussed and even sketched the whole supply chain for my product. This also gave me a good sense of how to reduce costs and make certain contingencies for the mass production of my design. Working with these guys made me realize the advantages of manufacturing in Shenzhen. I think there is always going to be a learning curve for any engineer who attempts to manufacture in Shenzhen, however, it’ll be more of a mentality change that is needed to adapt to the speed of manufacturing in Shenzhen. 
The preparedness and work ethics I observed, and the attention I was given, even as an unassuming engineer building only a prototype and not commercial quantities was amazing. From my experience, to improve manufacturing in Ghana, the supply chain and sourcing for inputs must be shorter and the logistics of supplying to other markets must undergo some infrastructure upgrades. After the intensive meeting and being helped to figure out the best supply chain for my product, I was introduced to the engineer that I would be working with throughout my stay, a very animated and energetic man who has had over 30 years of manufacturing experience in the type of product I was trying to make. He seemed elated to be tasked with working with the visitor in the room even though he spoke very little English and I no Mandarin. We however communicated with drawings and the Google Translate mobile application whenever sensitive information was being communicated. Before going to Shenzhen, I had lots of debates with my colleagues about open innovation and I had just benefited from open innovation in a concrete way. The amount of knowledge that had been passed between me and my hosts was something that I would have paid consultants to attain elsewhere. Open innovation is embedded in the way of doing things in Shenzhen. I gained some confidence in showing people my designs and calculations and expected to get useful insights that would make my product better. The willingness to share technical knowledge revealed what I thought was a genuine interest in the success of my venture. Open innovation improves the exchange of knowledge, especially knowledge gotten from experience, helps in manufacturing and building better products. Collaboration between different people with business, manufacturing, engineering and financial acumen is key to driving innovation. 
Having these kinds of exchanges, not solely for financial rewards would create a culture of innovation in Ghana and would help Africa resolve most of its manufacturing challenges. In Shenzhen, I found the intellect of experts very accessible and at no additional cost, as long as you know who to ask. It was available to even foreigners like me. I recommend that African hardware startups interested in manufacturing in Shenzhen leverage the intellectual resources of these experts. I think it is best to always find an engineer from the factory you want to produce with to help you figure things out. No need to waste valuable time trying to learn these things by yourself in your garage. You don’t need to be a jack of all trades. You don’t have to be the only engineer that builds your product. It will only take longer than necessary. At the factory, just as in many parts of the country, I witnessed warmth from the factory workers. I was served some tea as we discussed the production plan. This included schedules and timelines and machine and factory layout to make the prototype production take place at a very fast but convenient rate. I have studied a little bit about manufacturing and factory and project design in the Kwame Nkrumah University of Science and Technology (KNUST) in Ghana. In Shenzhen I saw it being practiced and that’s where all the theories I had studied started to make sense. My fellow Chinese engineer showing me around the city’s manufacturing zones. My work schedule meant that I could finish the prototype production early and have a week to spare to tour the city. I liked this because prior to my arrival, I had far overestimated the number of days needed to produce the prototype. I knew Shenzhen was incredible with manufacturing, but this speed wasn’t quite what I anticipated. By the end of the day almost all the items needed for the productions had arrived at the factory and we were ready to proceed! 
I decided to try to get back to my hotel on my own for the day, to clear my mind and fully figure out my finances given how quickly the prototype was coming along. I also wanted to get to know the city, meet some locals, and hopefully meet other potential business partners. My hosts pointed me towards the bus station, where I got on a bus and headed back to my hotel. Follow this story in Part Three.
https://medium.com/tech-africa/making-africa-in-shenzhen-part-two-e4f2c41d34dc
['Desmond Koney']
2018-03-19 16:07:46.958000+00:00
['Shenzhen', 'Makers', 'Tech Entrepreneurs', 'Ghana', 'Startup']
Let’s Make Your Articles and Presentations Look Pretty
Let’s Make Your Articles and Presentations Look Pretty Seven tools to make your articles and presentations look professional and consistent. Great presentations flow, tell a story, and in general look appealing. Similarly, well-written articles are accompanied by nicely designed graphics, imagery, and high-quality photos. Your writing style is your own, and I can’t help you with that. However, I can try to improve the look and feel of your articles and presentations with a little help from a few popular tools. Let’s begin. Stock Photos I’ve seen and created my fair share of professional presentations, and there’s always one aspect common to them all… one-liner impact statements. Whether it’s to drive a point home, scare the crap out of you, or quote a loosely related article or study, more often than not these one-liners appear alongside a beautiful stock photo that, by some stretch of the imagination, is associated with the one-liner itself. Unsplash is my first choice for beautiful free photos. Before 🙄 I don’t believe this number After 😱
https://medium.com/weareservian/lets-make-your-articles-and-presentations-look-pretty-a2708975c8ab
['Marat Levit']
2020-08-10 03:00:57.159000+00:00
['Tools', 'Presentations', 'Articles', 'Design', 'Makeover']
Don’t Fear the Reaper — Your Keyboard is Not Contagious!
Don’t Fear the Reaper — Your Keyboard is Not Contagious! Thirty-Five followers… 35 followers... 30+5 followers. That may not seem like much to some writers on Medium, but it means a whole hell of a lot to the editors of Out of Ideas, Out of Time. This is me at a Zoom business meeting. Notice I have protected my ears as well as my nose and mouth from any external toxins We are also Out of Our Minds, because only 10 members of our group have written a chapter, and, unlike so many writers, we can subtract. Thirty-five minus ten is… (checking calculator on phone)… 25!! Twenty-five of you haven’t dipped your virtual toes into the collaborative waters of the current Stark mystery. While we extend our hearty congratulations to Michael Stang, Elle Fredine and Laura Johnson who all popped their literary cherry in our current saga, “The Toilet Paper Caper,” to the rest of you, we say “courage!” Add your voice to this cacophony of collaborators chasing comedy catharsis to combat the cold, creepy, and cruel combination of Covid-19 and Cornholavirus-45 currently contaminating our country. When I look at the heavyweight literary talent in our mailing list, only one thought enters my mind… Why haven’t they unsubscribed yet? But seriously, why haven’t we had more submissions? Roy, just send us a cartoon where you mention Stark. One. Lousy. Panel. Is that too much to ask? Tre, how about a short poem that Stark mumbles to himself? Tommy, you can’t squeeze out a single rule-of-three humorous reference? If you people don’t have the time to write a 3–5 minute chapter, send in a joke or a paragraph or an idea. Even a hand-drawn picture would help. Give us anything, and we’ll build the plot around it. Heaven knows, there’s no real plot at this point; nobody would be the wiser. Stay safe and keep laughing.
https://medium.com/out-of-ideas-out-of-time/dont-fear-the-reaper-your-keyboard-is-not-contagious-f2de206c13c2
['Lon Shapiro']
2020-04-08 00:26:21.462000+00:00
['Collaboration', 'Writing', 'Mystery', 'Toilet Paper Caper', 'Humor']
Poem of the Week: Man Down // Kate Tempest
Source: The Guardian Kate Tempest has a thing for the senses. Her pleasant delivery of spoken poetry is like small pieces of informational strums tugging at our hearts. She draws experience from touches, of tangible things to see, melted words and realisations readers are not often confronted with. Such is the magic of Tempest’s poetry; when I first read ‘Man Down’, I felt it a little more than I understood it. Tempest often plays with gender roles and the binary concept of sex. Sometimes she enforces them in a clever but self-confused manner, sometimes she overturns them with self-indulgence and nuanced perspectives. ‘Man Down’ from her Hold Your Own collection perpetuates the existence of a gender binary. It then undermines it with arguments that showcase gender fluidity. Tempest uses sensual imagery to elucidate parts of her argument and immerse the reader deeper into the crevice of each line. And with each word spoken or read, the listener and reader find themselves questioning their own identity and that so-called gender binary. ‘Man Down’ prioritizes accidental enlightenment using the sensualities of the words rather than direct confrontation. Which is why, despite at first not having any concrete understanding of who Tempest was prior to reading the poem, I still felt and agreed with the truths and explorations her poetry provided me. For example, the early lines “No man is a man all through. // I’ve seen you. Shivering. Fleeting weakness. // Cold rain scuffing its feet on the beaches”. These lines use the internal rhyme of “through” and “you” and the sibilance of the “s” to slowly draw the reader into the early warnings of the poem. We can tell just by those three lines that this poem addresses a man; it lets him know that it is okay to feel. The cold rain, feet on the beach, the fact that he’s shivering trying to brave through his weaknesses – all of these things are indicative of the front and façade of the concept of “man”. 
Source: New York Times Repetition is evident in most of Tempest’s poetry, especially within Hold Your Own. In this poem, there is an affinity for the use of anaphora. One particular use of it, which I personally gravitate toward, is in the lines “The best boys would feel like a lady in your arms. // The best girls would fuck like a man, given half the chance. // The good ones are good ones because they are whole ones. // We’re at our best when we mean it”. These lines combine the anaphora of “best ones” with the parallelism of “ones”. This creates multiplicity in the notion of “one”; by constantly repeating it, she alludes to the many layers beneath the “one”. When she exchanges “one” for “whole”, she attaches a sense of completion to the thought. This crescendo comes at the peak of blurring the gender lines; the similes that connect “boys” and “girls” are supported by the following lines, “We come from man and woman combined // And we’ll carry these parts till we see our last day”. Finally, one of the best aspects of ‘Man Down’ is how well Tempest makes use of its structure. Whether it’s on paper or phonetically, she manages to hold interest with how the poem flows. She mixes short and snappy one-word sentences with long, over-descriptive sentences to keep the poem fresh. This also creates rhythm whenever she speaks it, and makes it easier to follow. It also provides enough information for readers and listeners alike. The combination of a rhetorical hypophora structure with apophasis in the lines “I’ve got to stop telling you things. // You’ll give when you’re ready. // I’ve got to stop wanting. // Your mind’s made up. // I’ve got to stop pushing. // You’re trying to keep steady” really elucidates this conflict between the speaker and the one spoken to. This conflict is a reflection of the man’s inner war with himself, and the speaker’s desperate effort to help him come to a realization.
Yet, they remain at the epochs of both their genders, trapped in the crumbling façade of their assigned roles within a structured society. The line “And what do I know? You’re the man here” supports this notion. ‘Man Down’ is a poem that speaks multiplicity, which layers and pries open the divide between men and women. It is not as explicit as her other poems; explicit in terms of sex, explicit in terms of exploration. But it does provide enough of an understanding into the inner workings of Tempest and her poetry. You can watch Tempest perform ‘Man Down’ and the rest of the poems from Hold Your Own down below. https://youtu.be/32i5zfcFt8g Words by Mae Trumata Want more Books content from The Indiependent? Click here
https://medium.com/the-indiependent/poem-of-the-week-man-down-kate-tempest-437a01fdee14
['Mae Trumata']
2020-07-12 12:51:09.246000+00:00
['Literature', 'Kate Tempest', 'Gender', 'Books', 'Poetry']
How Liquidity Can Enable Performance-Driven Decision-Making
In late October, the CEO of Harvard’s endowment, the largest of any university in the world at $40.9 billion, stated he was not pleased with the fund’s 6.5% returns in fiscal 2019. CEO N.P. Narvekar identified illiquid holdings as one of the drivers of the disappointing results. The Harvard fund is not unique in wanting to obtain liquidity for seemingly illiquid asset classes. While illiquid investments are a core component of many of these long-term investment portfolios, they can often remove the ability to make dynamic decisions when the investments are not performing as anticipated. Over the past few years, we’ve identified this issue as a pain point for many large institutions looking to invest in illiquid alternatives. In the case of Harvard, when there is a change in its investment strategy, it is forced to undergo a slow and protracted restructuring period in order to reallocate capital to different investment opportunities, while still beholden to established performance estimates during this period. As endowments and other institutions look to adjust portfolios to address shifts in investment priorities or the growing uncertainty of economic conditions, they chafe against illiquidity more and more. However, through the digitization of these funds or investments on the blockchain, investors are granted greater control over their portfolio performance because they can more simply transfer ownership to interested parties. Provided the necessary demand is available, investors and institutions can access liquidity through a network of prospective sellers and buyers in order to enable performance-driven decision-making. At RealBlocks, we’re seeing major demand for investment in the alternatives markets. A recent Preqin study states that the market size in 2017 was $8.8 trillion, and is anticipated to grow to $14 trillion by 2023.
With immense growth in the alternatives market, we expect the demand for investor liquidity to also increase due to the reasons outlined above. Through our solution, we hope to help lead the transformation in developing a more open and accessible market for alternative investments. To learn more about what we’re doing at RealBlocks, visit our website at www.RealBlocks.com!
https://medium.com/realblocks-blog/how-liquidity-can-enable-performance-driven-decision-making-b7fd4b675f7f
['Realblocks Team']
2019-11-13 16:15:49.559000+00:00
['Investment', 'Investing', 'Alternative Investments', 'Blockchain Technology', 'Startup']
Suffering Under the Destructive Influence of Delusional People
Suffering Under the Destructive Influence of Delusional People They drive, they teach, they vote Photo by Woody Kelly on Unsplash This morning I woke up to find a woman had sent me a Biblical quote about lying followed by a declaration that the media can’t select the winner of a presidential election. It was a double whammy of crazy. She’s convinced that Biden committed election fraud and certain “the truth will come out.” You could feel the smirk in her words, and anticipation of her impending delight at the prospect of mocking me for believing other than what she knows to be true. People like her make up a large part of the underlying fabric of America. The delusional army This person was a Trump supporter who began her comment with a denouncement of lying. It’s almost stupefying to read something like that. If I were to begin writing out easily verifiable instances of Trump’s deceits, I’d run out of digital paper. What are you supposed to do in that situation? What recourse is at your disposal? In the past, I might have just laughed off such a comment and continued happily along with my life. This person has no grip on reality. How much damage can she actually do? They don’t even truly interact with our world, instead, they prance through a fantasy realm inhabited by speaking cartoon animals beneath a rainbow-colored sky. Apparently, they have a dim awareness of real-life traffic signs, otherwise, the highways would be teeth-gnashing kill boxes stained red with blood and illuminated by gasoline fires. Enough perception to inflict maximum harm Therein lies the rub. The problem is that even delusional people do have a dim perception of reality. In the midst of all their fantasy, they retain an awareness of where and how to push the button that will kill the world. It’s probably a parental instinct that makes us feel inclined to protect these poor, suffering fools. They’re like confused children, and sometimes the things they think are even cute. 
“Aw…you believe in fairies! That’s delightful!” But it’s important to remember that we have to resist that charitable instinct. The fairies can’t help them when they’re trying to perform open-heart surgery. Talk of fairies isn’t funny there. We have to remember these aren’t children, they are adults, and the damage they’re capable of is very real. The danger of delusion Don’t be fooled by their bumbling. They aren’t baby Yoda. Instead, we’re surrounded by Gremlins. I’ve had a lot of conversations with people throughout this election season, and a couple of consistent topics continue to bubble to the surface. Why are some people in our society so committed to following a political philosophy that will cause them nothing but pain and ruin? Why are they so committed to making their own lives worse and dragging the rest of us down with them? The election Tuesday, November 3rd was a very tough day for our country. I knew that it was likely that we’d see a “red mirage” as in-person votes weren’t counted before mail-in votes in many states. I’d thought I’d prepared myself for it, but when Florida went red, I became physically ill. My wife was in tears, and we went to bed thinking that Biden had no path to victory. Before I drifted off into troubled sleep, I reflected on Trump’s enduring racist attacks on my children. I also wondered what I was going to do when Social Security ran out. Then I became angry. What’s wrong with people? Why is half of our country willing to go along with a president that overtly attacks our national retirement fund? There should be riots! Normally even a delusional person becomes angry when you take money out of their wallet. Why are they blind to this theft? The confederacy of dunces Now I understand what John Kennedy Toole was trying to say, and why he didn’t make it. Delusional people inhabit every nook and cranny of our society. They have massive, well-funded institutions that help to cultivate and spread their insanity. 
These people are allowed to drive cars, they’re allowed to buy guns, they’re allowed to teach our children, they’re allowed to vote. Their numbers ebb and flow, sometimes slightly more than half, sometimes slightly less, but always wielding a terrible influence. At the end of the day, the cruel reality we must all accept is that it is the delusional people who have the greatest influence on the trajectory of our lives. They will be the ones who determine our leadership. They will be the ones who dictate what is said in the media and what voices are heard. Guns, guns, guns I have a cousin who is committed to his gun rights. He insists that he’s a “law-abiding citizen,” but if the police come to his home to take his guns, he’s going to shoot them. “I’d rather die from a bullet than starve to death in a concentration camp.” “What the hell are you talking about?” This is a person with no military training. He’s old and overweight. He takes blood pressure pills and heart medicine. Yet faced with the prospect of a hypothetical attack from the US military, he sees himself as a champion on a hill fending off an army of darkness. “Dude, when the police come to your house to take your guns, you won’t even have time to squeeze off a shot. They’ll be on you so fast you’ll be in handcuffs before you even recognize the door has been kicked in.” But he doesn’t even hear that comment. I’ll say it and watch as the sentence bounces off his deflector shields. The words disintegrate as if they were never uttered. They fall into a black hole. I can say them again and again and again and they clatter worthlessly to the ground to be absorbed by the Earth. FUTILE! Under the influence of idiots The worst part is that their delusion should be self-defeating. These people should be throwing themselves off buildings convinced that they will fly, or that angels will catch them. They should be gurgling bleach thinking that it will cure them of infectious disease. 
Yes, those things do happen, but not at the rate you’d expect with the observable affliction rates. We are surrounded by people who are both industrious and deluded. They are appallingly high functioning. They can hold down jobs, take care of themselves, remember to wash their clothing. They can work their way into important positions, only to succumb to a moment of insanity and inflict the maximum possible damage. They know to hide what they really think. They’ve adapted and created camouflage, but they’re everywhere! What are we supposed to do about this? How can we protect ourselves? The evolutionary need for delusion I think deep down that they are aware of their worthlessness. I think they live in a world of fantasy exactly because the truth is too painful. They can’t face it. Their delusion is a coping mechanism. It helps them have some semblance of a life, I guess it must keep them alive. My instinct throughout is to pity them, to feel compassion. “Oh, you need to think of yourself as a valiant cowboy to make it through the day? Okay, that’s cute.” They’re like the character in Dodgeball who thinks he’s a pirate. Yeah, that’s harmless, until you hand that guy the nuclear launch codes. Getting between a lunatic and his delusion is like getting between a mamma bear and her cub. In fact, it’s worse because the mamma and the cub are, in this case, the same psychopathic entity. A reality intervention The truth is it’s an aggressive thing to organize an intervention. It’s hard to take somebody you know and sit them down and force them to admit they are an addict. It’s painful for them and it’s painful for you. We have axioms that counsel us against taking such action. “Let sleeping dogs lie.” That’s so much easier than: “You’ve got to get your feet on the ground, you have to accept the world, the stuff you believe…it’s JUST NOT TRUE!” Sweet, sweet delusion They become hostile. Of course, they do! 
You’re forcing them to trade a sweet illusion for a bitter-tasting pill. The truth is that we all dabble in some form of illusion. It’s a powerful force. It’s a necessary force. “These people care for me, they’ll stand up for me if I get into trouble.” “Hard work pays off, you’ll be rewarded eventually.” “The world is fair, truth comes out in the end.” “Fake it till you make it.” “I’m fine.” A delicate balance Reality is harsh and cruel, fantasy is soft and engaging, and you need a little bit of both to get through the day. The reality allows you to make changes, the fantasy allows you to ignore the pain. Ideally, we work to a point where our reality is so perfect we don’t need fantasy. The people who are most engaged with fantasy have the dimmest perception of reality. They’re driven by fantasy, they vote based on fantasy, and their delusion perpetuates a miserable reality that obstructs the efforts of responsible people who are trying to make the world a better place. They see responsible people as a threat to their illusions “They want to take away our religion, our guns, our freedom.” “No, that’s not what we want to do.” “Don’t take my fantasy away, it’s all I have, it’s the only thing that allows me to endure this cruel world.” “But can’t you see? We can make the world better! We can improve the reality!” “NO!” What to do? We won’t convince them. All that’s left for us is to keep striving to improve reality and be diligent about stopping them when they reach out to shove a fork into an outlet. We must keep in mind they are inhibited by their refusal to perceive the truth of the objects they encounter. That’s our advantage. But we must always remember that they do have enough influence to bring about terrible destruction. Think of them as babies. Love your babies. Guide your babies. But never let the baby take the wheel.
https://medium.com/an-injustice/suffering-under-the-destructive-influence-of-delusional-people-3feb4c1e48ed
['Walter Rhein']
2020-11-12 01:38:48.521000+00:00
['Mental Health', 'Advice', 'Conservatives', 'Politics', 'Elections']
OSARO Raises $16M in Series B Funding
Attracting New Venture Capital for Machine Learning Software for Industrial Automation SAN FRANCISCO (October 3, 2019) OSARO Inc, a leader in machine learning software for industrial automation, has announced $16 million in Series B funding, with participation from King River Capital (KRC), Alpha Intelligence Capital, Founders Fund, Pegasus Tech Ventures, GiTV Fund, and existing investors as well as strategics, bringing total funding to $29.3 million. According to Co-founder and CEO Derik Pridmore, the funds will be used to invest in talent acquisition, international deployments, and advancing the OSARO Pick and OSARO Vision product lines to meet customer demands. The company’s flagship product, OSARO Pick, automates stationary picking stations in “goods to robots” distribution centers. OSARO’s robotic piece-picking software has improved performance and efficiency in e-commerce order fulfillment and intralogistics for multiple customers, including top material handling companies. OSARO plans to expand into handling order fulfillment in electronics, apparel, groceries, pharmaceuticals, and many other industries. “We are very excited to be leading this funding round,” said Megan Guy, Co-founder and Partner of King River Capital, who will be joining OSARO’s Board of Directors. “It is rare and exciting to work with a team that has both world class deep learning talent and a highly commercial orientation. OSARO’s perception and control software enables full automation of some of the most difficult vision, picking, and manufacturing problems, and its ability to integrate with a wide range of robotics hardware means that it can be deployed not only in greenfield environments but also as a retrofit solution to transform industrial automation.” Investment in warehouse and logistics automation is expected to increase from $8.3 billion in 2018 to $30.8 billion by 2022 (Tractica). 
OSARO’s proprietary software enables industrial robots to perform diverse tasks in a wide range of environments, addressing growing labor shortages in fulfillment centers worldwide. The company is transitioning the automation industry from static robotic systems to dynamic solutions. “A key element of our competitive advantage is OSARO Vision’s deep learning algorithms,” said CEO Derik Pridmore. “These algorithms generalize picking tasks with minimal training data and no SKU registration for quick, scalable solutions. In addition, as a software company, we support a wide array of commodity hardware and robotic arms, which lets our customers select options that best fit their needs.” OSARO also announced that Kevin Pope has joined as VP of Engineering. With 30 years’ experience in high-tech product development, he has led engineering teams at Applied Digital Access, Mahi Networks, and Calix. Pope will support the company in scaling its AI-based robotic picking solutions worldwide. “OSARO’s approach of developing hardware-agnostic AI software for industrial robotics allows us to work in close collaboration with our customers, integrating OSARO products for their specific use cases, with a focus on scalability and robustness, providing our customers with a long-term competitive advantage. Our focus in the next year will be to increase our deployments in North America, Australia, Korea, China, Japan, and Germany,” stated Pope. View Press Kit
Managing the Future of Work: How teaching robots the way the world works changes the world of work
Robots aren’t necessarily primed to take over, but advances in machine learning are readying the mechanical components of the workforce for more complex and autonomous tasks. Startup OSARO specializes in deep reinforcement learning systems, artificial intelligence for industrial robots.
CEO Derik Pridmore talks about the adaptive decision-making capabilities working their way into warehouses and factories, and the prospect of machines with a wider, more human range of cognitive capabilities. OSARO is extremely optimistic about the power of AI to solve problems now; to provide real value to people’s lives. We wanted to start with markets that are huge today, rather than markets which are still developing, like drones and household robotics. OSARO partners with robotic integrators around the world to automate industrial-scale robotic systems in the e-commerce and Automated Storage and Retrieval System (ASRS) industries, while testing systems for use in food preparation and automotive manufacturing.
View our open positions at www.osaro.com/careers
https://medium.com/silicon-valley-robotics/osaro-raises-16m-in-series-b-funding-98e6e1e287f6
['Andra Keay']
2019-10-04 22:38:09.152000+00:00
['Manufacturing', 'Logistics', 'Artificial Intelligence', 'Automation', 'Robotics']
About The Guardian’s blind handling of the extent of Muslim intolerance and totalitarianism
Already in 2011, the Guardian reported on a shocking crime in Pakistan. The governor of Punjab province, Salmaan Taseer, had “advocated a reform of Pakistan’s controversial blasphemy laws” and taken up the cause of “Asia Bibi, a poor Christian woman … sentenced to death for allegedly insulting the prophet Muhammad.” Taseer was promptly machine-gunned to death by one of his own bodyguards, a devout Muslim called Mumtaz Qadri, who then submitted calmly to arrest and prosecution. To huge numbers of Pakistanis, Qadri’s actions made him a hero. He was hailed as a worthy successor to Ghazi (“Hero”) Ilm-Deen, a widely venerated Muslim saint who stabbed a Hindu blasphemer to death in 1929. When Qadri was found guilty of murder and sentenced to death, the presiding judge had to flee the country. Devout Muslims were at it again a few months after Taseer’s assassination. Shahbaz Bhatti, the Christian minister for minority rights, had also advocated a reform of the blasphemy laws. He was ambushed by members of the Taliban and assassinated, exactly as he himself had foreseen he would be. In effect, then, by 2011 the Guardian had reported that Muslims in Pakistan had machine-gunned two politicians to defend the honour of the prophet Muhammad. But by 2015, the Guardian was all of a sudden horrified to discover that Muslims in Paris were capable of such horrible actions, machine-gunning cartoonists for the same reason. Who could have foreseen that Muslims in Paris might behave like Muslims in Pakistan? It’s almost as though they don’t believe in free speech. The past had provided ample indication of the peril. This demonstrates the Guardian’s inability to square its own content with its own earlier journalism. But it is much worse than just that.
When Mumtaz Qadri was finally executed earlier this year, the Guardian published this pious editorial: The murder of Salman Taseer was in a literal sense a crime against humanity even if in a legal sense it was just another of the innumerable murders that have disfigured Pakistan in recent decades. He was the governor of the Punjab, who was killed by one of his own bodyguards, Mumtaz Qadri, because he had denounced the dreadful blasphemy laws that have been successively rewritten, widened, and made more stringent under Islamising governments since 1980 so that now people can be executed merely for “using derogatory words in respect of the Holy Prophet”. On Monday 29th February 2016, Qadri was hanged in conditions of secrecy. On Tuesday, vast crowds attended his funeral to demonstrate their support for this murderer’s crime. Nor was this support confined to Pakistan. One of the largest mosques in Birmingham said special prayers for Qadri, describing him as “a martyr”, as did influential preachers in Bradford and Dewsbury. These have been strongly and rightly criticised by other British Muslims, and no doubt represent a minority view, but it is disappointing that there are still some imams who have learned little about mutual tolerance in the 25 years since the Rushdie affair, however much mainstream majority Muslim views have moved on. … It is not just the terrifying levels of intimidation that operate in Pakistan that keeps the law in place, but widespread democratic support. This looks like a reversal of all the great hopes of the closing decades of the 20th century and it is, but it is not an irreversible trend. … We can do better, and we must. Human dignity demands the right to question, to be mistaken, and even sometimes laugh about beliefs. Only on the basis of that kind of equality extended to all can we make a more just world.
(The Guardian view on religious intolerance: a sin against freedom, 3rd March 2016) That is a typical piece of opportunistic posturing and dishonesty, or at least hypocrisy. Who is this “we” who can and must do better? Presumably it’s the human race, so the Guardian is claiming the ability to reform humanity via its editorial column. Has the human race (or just the western part of it) already acquired the ability to do just that? If it had, we would not be discussing this case here. It’s posturing to feed its readers’ narcissism, nothing more. It’s also dishonest about the true nature of Islam. It is interesting how the editorial notes that supporters of Qadri in Britain “no doubt represent a minority view,” but is “disappointed” that “some imams … have learned little about mutual tolerance,” despite the way “mainstream majority Muslim views have moved on” since the Salman Rushdie affair. How does the Guardian know that Qadri supporters are in a minority and that mainstream Muslim views have “moved on”? The fact is: it doesn’t know. It merely assumes, and provides no evidence for the claim. And is it really a marginal view if very sizable minorities of Muslims hold such beliefs? Like the 35% of young Muslims in Britain and 42% in France who are willing to tell pollsters that they support suicide bombings, according to a Pew poll (presumably a low estimate). Another recent study (the ICM Muslim Survey for Channel 4) reaches alarming findings about Muslims in the UK and Europe too. Trevor Phillips states in his Channel 4 documentary his belief that there is “a chasm” opening between Muslims and people of other faiths, and that Muslims are therefore different and apart from the rest of society. He states that it reveals “the unacknowledged creation of a nation within the nation, with its own geography, its own values and its own very separate future”.
In his view, this means “we have to adopt a far more muscular approach to integration than ever, replacing the failed policy of multiculturalism”. The Guardian’s recent criticism of the study mentioned above does nothing to counter my argument, because the Guardian fails to provide any real facts against the results of the poll or against Phillips’s statements. Instead, it summarises: “This is not the first time polls have been used to paint a picture of Muslims as at variance with British culture.” It appears that whenever a poll does not favour the Guardian’s political views, it has “been used to paint a picture.” By that logic, apparently, we should ignore any poll that fails to serve the preferred political illusion. The Guardian can’t be serious. Naturally, everything is “used” and everything “paints a picture.” The Guardian (and, for that matter, I myself here) is doing the very same thing: painting a picture in support of its own opinion, in order to disregard and disqualify a poll that contradicts that opinion. Yet for the Guardian this is apparently objectionable only when others do it. What a great argument and what great logic from the Guardian. It simply does not hold up. Surely it would not be too much to expect a well-staffed newspaper like the Guardian to conduct its own investigation of Muslim organizations and individuals to find out their opinions? Admittedly, that would be both time-consuming and apparently dangerous, because the Guardian might not get the answers it expects. Taking all this into account, the Guardian seems wilfully blind about the extent of Muslim intolerance and totalitarianism. Its own report that the murderer Mumtaz Qadri was acclaimed as a “martyr” by “one of the largest mosques in Birmingham” and by “influential preachers in Bradford and Dewsbury” raises big questions.
If that is a “minority view,” where is the condemnation from the “mainstream”? Why are pro-Qadri mosques not being condemned and boycotted by anti-Qadri mosques? Why did the moderate Muslim majority not take to the streets to condemn both Qadri’s original crime and their misguided co-religionists who regard Qadri as a martyr? No evidence of any such condemnation appears. Well, waiting for moderate Muslims to demonstrate in favour of free speech is a lot like waiting for Godot. Moderate Muslims are very relaxed about killing in Muhammad’s name. The death of Salmaan Taseer proved that in 2011, and so does the death of Asad Shah in 2016. He was a Muslim shopkeeper in Scotland who used his Facebook page to promote inter-faith harmony with the following message: “Good Friday and a very Happy Easter, especially to my beloved Christian nation.” For saying that, he was stabbed and stamped to death by a hate-filled bigot who had travelled hundreds of miles from England for no other purpose.
https://medium.com/transparency-for-the-truth/about-the-guardians-blind-handling-of-the-extent-of-muslim-intolerance-and-totalitarianism-1cadad4ecd64
['Roderich Krogh']
2017-05-03 10:34:31.176000+00:00
['The Guardian', 'Culture', 'Totalitarianism', 'Islam', 'Journalism']
An End-To-End Time Series Data Science Project That Will Boost Your Portfolio
In this guide, I want to show you how to make time-series predictions of revenues based on real-life retail data. For this task I will be using a very common library: Prophet, developed by scientists at Facebook. Why Prophet? According to the Prophet GitHub page: “A tool for producing high-quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.” Moreover, Prophet is integrated into the AWS ecosystem, making it one of the most commonly used libraries for time series analysis. The data The data used in this tutorial comes from a retail company; it has strong seasonality components due to the nature of the business the data comes from. The data frame has been anonymized and contains two columns: the datetime of the transaction and its amount. Transactions appear at different hours of the day, so to reduce noise the data has been re-sampled daily, summing up the total revenues. Additionally, the timestamp column has been converted to the CET timezone; the main reason for doing this is to have the data in an understandable format, making interpretation by us and eventually our clients easy. The following is the function used to achieve this: Transactions were collected between June 2018 and October 2019, and the data contains 11284 records of sales. But let’s dive a little deeper into Prophet: according to Facebook Prophet’s documentation, the data to be fitted into Prophet must have a very rigid format: a column named ds for the points in time and another named y for the target. Using the following snippets it is easy to achieve this: At this step the data might look ready to be used to fit a model but, after plotting the data, a very important consideration has to be made:
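The code snippets embedded in the original article did not survive extraction here. As a stand-in, this is a minimal sketch of the two preparation steps described (daily resampling in CET, then renaming to Prophet's required ds/y columns); the raw column names `timestamp` and `amount` and the sample values are assumptions, not the author's actual code:

```python
import pandas as pd

# Hypothetical raw data: one row per transaction (timestamp in UTC, amount).
# The real data set has 11284 such records between June 2018 and October 2019.
raw = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2018-06-01 09:15", "2018-06-01 14:30", "2018-06-02 11:00"], utc=True
    ),
    "amount": [120.0, 80.0, 200.0],
})

# Convert timestamps to CET, then re-sample daily, summing total revenues.
raw["timestamp"] = raw["timestamp"].dt.tz_convert("CET")
daily = (
    raw.set_index("timestamp")["amount"]
    .resample("D")
    .sum()
    .reset_index()
)

# Prophet's rigid input format: a `ds` column for time and a `y` target column.
daily = daily.rename(columns={"timestamp": "ds", "amount": "y"})
```

The model would then be fitted with `Prophet().fit(daily)`; note that Prophet expects a tz-naive `ds` column, so dropping the timezone with `daily["ds"].dt.tz_localize(None)` may be needed just before fitting.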
https://towardsdatascience.com/an-end-to-end-time-series-data-science-project-that-will-boost-your-portfolio-6086d0204189
['Roberto Sannazzaro']
2020-09-18 16:21:54.050000+00:00
['Machine Learning', 'Artificial Intelligence', 'Towards Data Science', 'Data Science', 'Programming']
Exploratory Data Analysis of Stack Overflow Developer Survey-2020
Shape reveals the number of columns and rows in the data set. By performing the above operation we learn the size of the data set: it contains about 64461 rows and 61 columns, which is a huge amount of data. Data Preparation and Cleaning: The data we currently have contains 61 columns, and that’s a large amount of info, but for this exploration we will restrict our analysis to the following questions. The demographics of the respondents and the spread of the programming community across different geographical locations. What kinds of programming skills, experience, and preferences are distributed across the globe. Employment-related trends, information, opinions, and preferences. Hence I’ll be considering a subset of the actual data columns which are relevant to answering the above points. List of columns selected for this analysis. Now that we have the columns finalized, let’s get a quick insight into the data by doing an info() call on it. Most columns have the data type Object, either because they contain values of different types, or because they contain empty values, which are represented using NaN. It appears that every column contains some empty values, since the Non-Null count for every column is lower than the total number of rows (64461). We’ll need to deal with empty values and manually adjust the data type for each column on a case-by-case basis. Only two of the columns were detected as numeric ( Age and WorkWeekHrs ), even though there are a few other columns that have mostly numeric values. To make our analysis easier, let's convert some other columns into numeric data types while ignoring any non-numeric values (they will get converted to NaNs). Converting to numeric values and replacing null values with NaN. Now the columns Age1stCode , YearsCode , YearsCodePro have been converted to numeric, and if a value is not present it's replaced with NaN. Let's look at the basic statistics of the numeric columns. 
Describe method reveals basic statistics on numeric columns. There seems to be a problem with the age column, as the minimum value is 1 and the max value is 279. This is a common issue with surveys: responses may contain invalid values due to accidental or intentional errors while responding. A simple fix is to ignore rows where the value in the age column is higher than 100 years or lower than 10 years, treating them as invalid survey responses. Using the drop method to eliminate entries which are irrelevant. The gender column has multiple options, but to simplify our analysis, we’ll remove values containing more than one option. Dropping genders with multiple options. We’ve now cleaned up and prepared the dataset for analysis. Exploratory Analysis and Visualization: Before we can ask interesting questions about the survey responses, it helps to understand what the demographics (country, age, gender, education level, employment level, etc.) of the respondents look like. It’s important to explore these variables in order to understand how representative the survey is of the worldwide programming community, as a survey of this scale generally tends to have some selection bias. Country Let’s look at the number of countries from which there are responses in the survey, and plot the 15 countries with the highest number of responses
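The screenshots of the cleaning code are not reproduced in this copy. The two steps just described (coercing string-typed columns to numeric and dropping implausible ages) can be sketched as follows, using a tiny made-up slice of the survey; only the column names Age and YearsCode come from the article, the values are invented:

```python
import pandas as pd

# Tiny made-up sample with two of the survey's columns (Age, YearsCode).
survey_df = pd.DataFrame({
    "Age": [25, 1, 279, 34],
    "YearsCode": ["5", "Less than 1 year", "10", "20"],
})

# Coerce to numeric; non-numeric answers become NaN instead of raising an error.
survey_df["YearsCode"] = pd.to_numeric(survey_df["YearsCode"], errors="coerce")

# Treat ages below 10 or above 100 as invalid survey responses and drop them.
survey_df = survey_df.drop(
    survey_df[(survey_df["Age"] < 10) | (survey_df["Age"] > 100)].index
)
```

After the drop, only the two plausible respondents remain, and YearsCode is a float column ready for `describe()` and plotting.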
https://medium.com/analytics-vidhya/exploratory-data-analysis-of-stack-overflow-developer-survey-2020-d8867ff28ece
['Mohiuddin Amanulla Chishty']
2020-10-08 14:49:19.058000+00:00
['Python', 'Data Science', 'Data Analysis', 'Data Visualization', 'Pandas']
Effective Way to Get Changed Rows in ADF BC API
Have you ever wondered how to get all changed rows in the transaction without iterating through the entire row set? It turns out to be pretty simple with the ADF BC API method getAllEntityInstancesIterator, which is invoked on the Entity Definition attached to the current VO. The method works well: it returns changed rows from different row set pages, not only from the current one. In my experiment, I changed a couple of rows in the first page: And a couple of rows in the 5th page. I also removed one row and created another: The method returns information about all changed rows, as well as deleted and new ones: Example of getAllEntityInstancesIterator method usage in the VO Impl class. This method helps to get all changed rows in the current transaction, very handy: Sample application source code is available on GitHub.
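The article's code screenshot is not reproduced in this copy. A sketch of what such a VO Impl method might look like; it depends on the Oracle ADF BC runtime (oracle.jbo.server.*) and cannot run standalone, and everything apart from getAllEntityInstancesIterator and the entity state constants is an illustrative assumption, not the sample application's actual code:

```java
import java.util.Iterator;

import oracle.jbo.server.EntityDefImpl;
import oracle.jbo.server.EntityImpl;
import oracle.jbo.server.ViewObjectImpl;

// Hypothetical VO Impl class; the real class name comes from your own View Object.
public class EmployeesViewImpl extends ViewObjectImpl {

    // Walks every entity instance cached for the current transaction,
    // across all row-set pages, and reports the ones that changed.
    public void printChangedRows() {
        // First entity usage attached to this VO (assumed accessor).
        EntityDefImpl entityDef = this.getEntityDefs()[0];
        Iterator instances =
            entityDef.getAllEntityInstancesIterator(this.getDBTransaction());
        while (instances.hasNext()) {
            EntityImpl row = (EntityImpl) instances.next();
            byte state = row.getEntityState();
            // New, modified and deleted rows all show up here.
            if (state == EntityImpl.STATUS_NEW
                    || state == EntityImpl.STATUS_MODIFIED
                    || state == EntityImpl.STATUS_DELETED) {
                System.out.println("Changed row, key = " + row.getKey()
                        + ", state = " + state);
            }
        }
    }
}
```

Checking getEntityState per row is what distinguishes changed rows from the unmodified instances the iterator also returns.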
https://medium.com/oracledevs/effective-way-to-get-changed-rows-in-adf-bc-api-94a861840044
['Andrej Baranovskij']
2018-05-30 03:09:39.138000+00:00
['Java', 'Oracle Adf', 'Oracle']
How To Run A Happy, Profitable Business With Your Spouse
Photo by William Stitt on Unsplash Would you go into business with your spouse? Olivia and Heath Skuza have started three businesses together over the past twenty years. They are currently co-CEOs of B2B e-commerce platform NuOrder. The company enables brands and manufacturers to create digital catalogs of their products, send proposals and create orders for retailers. NuOrder employs more than 100 people, and more than 400,000 retailers use the NuOrder platform. Key customers include Asics, Levi Strauss & Co., Ted Baker, Lacoste and Nordstrom. NuOrder also recently secured a Series C round of funding worth $15 million. “Both Heath and I are extremely passionate and driven and hardworking people, and we work at crazy human speed,” said Olivia. Define Your Swimming Lanes Co-CEOs aren’t usually considered productive because the arrangement can confuse reporting lines and even lead to power struggles. Olivia attributes her healthy working relationship with Heath to clearly defined work roles. Heath is the visionary who “sells the dream,” whereas Olivia “makes it happen” as an operations person. “He very much owns sales, business development, marketing,” she said. “I own all customer success, renewals, services and support, and product and engineering. The Nordstrom partnership is something I own.” Entrepreneurs struggling to define their swimming lanes like Heath and Olivia should consider where their strengths and weaknesses lie. “Heath and I will never say, ‘Well, I need to have X amount of direct reports,’ or, ‘These people need to report to me,’” Olivia said. “It’s very important to make sure that you focus on what you’re good at.” Manage Disagreements Professionally Partners in every healthy relationship must learn how to manage disagreements constructively. Workplace disagreements can prove difficult for spouses to navigate as they affect employees and even customers. 
“If someone feels differently about a situation, but there are defined swimming lanes, ultimately that person gets to make the call. There’s no stepping on toes,” Olivia said. She and Heath always try to resolve work issues at the office by the end of the day. “We’re forced to work it out always because this is our life, both personal and professional. So when we come home, we need to make sure we’re talking to each other,” Olivia said. To this end, Heath recommends treating disagreements in the workplace as two professional colleagues would, rather than as spouses. “Book a meeting with the other person just like you would with another team member,” he said. “Both parties must prepare for the meeting so they can present (factual) points about why they feel or believe in a different point of view. Quite often this preparation will lead to the right solution.” Agree On An Ideal Working Routine Some new entrepreneurs have no qualms about working sixty to eighty hours a week, every week. That doesn’t work if an entrepreneur is in business with their spouse or they’re raising a family. Heath and Olivia have a two-year-old and adjust their work routine around family life. “We rotate mornings. I’ll get up super early the first morning and do a workout and then get to work as quickly as I can, whereas Heath will be the one responsible for our daughter,” Olivia said. “We’ll flip the next morning, so he’ll work out, go to the office, and I’ve got my daughter.” In most healthy relationships, both parties work together toward a common goal. A couple builds a life together. Two parents strive to raise a child. And business partners work on building a profitable venture. Olivia and Heath show it’s possible to do all three.
https://bryanjcollins.medium.com/how-to-run-a-happy-profitable-business-with-your-spouse-c732ee2be686
['Bryan Collins']
2019-09-23 16:39:31.650000+00:00
['Spouse', 'Profitable Company', 'Working Parent', 'Working With Spouse', 'Startup']
COVID-19 Media Factsheet
A weekly newsletter on the business of media from @AtlanticMedia, home of @TheAtlantic and @NationalJournal. Subscribe here: bit.ly/2miVHiQ
https://medium.com/the-idea/covid-19-media-factsheet-35329ff75cb2
['Tesnim Zekeria']
2020-05-26 21:20:01.568000+00:00
['Journalism', 'Covid Fact Sheet', 'Media']
The questions to ask when hiring a designer or design agency
You need some design and — darn it — you need it now. That, or your boss has tasked you with finding a design agency. The ways of finding a design agency are myriad: you already have an existing relationship with one; you ask a friend or colleague, ‘who did your website?’; or you type ‘design agency’ into Google and see whose SEO or PPC does the best job of getting them on page one. Some of those methods may feel like you are buying a lottery ticket, but either way, you will eventually have a shortlist of design agencies to talk to, and you need to discern who is the best fit. If you haven’t ever contracted a designer or design agency before, how do you know who is charging the right amount and who will do the best job? Well, from the other side of the fence (yes, I am a designer) here are a few questions that will help you make the right decision. But first, some due diligence. Your end of the bargain We designers are problem solvers, so you need to be able to articulate what your problem is. (We can collectively find out what the problem is, but you want to keep costs down, right?) Issues usually get articulated in a design brief. No two clients are the same, however. Some clients write extensive briefs; they know which H1 tags on their website pages rank well in Google and have particular technical requirements. Others just think their current website isn’t performing well and have some anecdotal evidence to support it. It doesn’t matter which camp you are in, but either way, you should come armed with an understanding of what your problem is, and more importantly, what your goals are. Those are business objectives, to you and me. Without those, you will waste a lot of time. Some briefs try to solve the problem by dictating the outcome. Don’t be disheartened if an agency tears the brief up regardless. You are contracting a (stellar) design agency for what goes on inside their heads, not what they do with their hands. Knowing what your budget is will also help. 
Neither of us wants to ask each other out on a date, only to find out we just aren’t compatible or even in the same league. It’s not you. It’s not us, either. The mistake to make here is to think you just need a website and any digital design agency will do. There are websites and (as you can imagine) there are websites, but more on that later. If you are in the Insurtech sector, for example, talk to agencies that either specialise in that sector or have a good understanding of how an intermediary business works, for an intermediary has many audiences. It’s tempting to support your sister-in-law’s business. But if her business solves problems for the hospitality sector, she isn’t going to know who your audience is or understand the pain points of a digital start-up. It’s going to lead to some awkward conversations around the dinner table at Christmas, that’s for sure! So, the questions a prospective client needs to ask a designer. 1. ‘What is your process, and how do you approach a new project?’ As I stated above, good design solves a problem, so this question looks to weed out what process the design agency will go through to help understand your problem and subsequently solve it. Design agencies don’t roll solutions off a conveyor belt; they sell expertise and outcomes. Yes, design may seem like a process shrouded in mystery, and designer clichés like beanbags, Post-it notes and MacBooks. Still, like any good consultancy practice, a design agency should have a thorough and rigorous process that has been honed over the years. If you feel like your design agency is pinning the tail on the donkey blindfolded, it’s because their process lacks the upfront research and they are relying on good old-fashioned ‘throw it at the wall and see what sticks’. Design is an iterative and collaborative process, so you, the client, are going to be involved. Find out what the agency will deliver at each step, and in turn, what they expect of you. 
At the very least, there will be a discovery phase, a design phase, and a production phase. Be prepared to receive homework and hand it out. At the outset, the agency will ask lots of questions to get to know your business. They will listen and take those learnings away, re-frame them and present them back to you to ensure they have understood your business and objectives. Your objectives might mean more sales; or a clearer understanding of your fractured product offer; or maybe a recruitment drive. Either way that key performance indicator underpins all that you and your agency should measure yourselves against. Once you have agreed on the problem, you will start to see visual and verbal solutions that you will have to feedback on. Once everyone has agreed on a creative route, your work won’t stop there. You may have to supply content or provide access to the board for a photoshoot. Each step will require time and input from both you and your team. Sometimes you will be seeking to turn your business around, so expect to make big decisions and assemble all the people that are going to be involved. If you leave the founder or CEO out of this process, expect the project to go off the rails. When you helicopter him or her in at the end, don’t expect them to buy into your solution, no matter how good. Get everyone involved. (Well, everyone that has the power to derail you. 20 people around the table are as dangerous as one.) Ultimately though, if you don’t get involved, expect mediocre results. (You’re not like that though, are you? No, sir!) Design can change your business, so undoubtedly you will want to get involved. So, that simple question weeds out two critical factors: does the agency have an effective process they have used time and time again with demonstrable results, and how will I as a client need to contribute to the project both in terms of time and resource? 2. 
‘How long will it take, and how much will it cost?’ Doesn’t seem like the killer question, does it? (State the obvious, why don’t you, Marcus!) Of course, you want to know how long it’s going to take and how much it’s going to cost. It’s not so much the question, but the answer or answers you need to be wary of, for they will be wide-ranging and varied. When you compare timelines and quotes side-by-side, how do you know which agency to buy from? Can you even compare agencies side-by-side? Be wary of the agency that says they can deliver to your boss’ ridiculous timeline. Yes, you want to please, but if everyone else is saying it will take three to four months, the agency that promises a website in a month is going to deliver a boilerplate solution, or they are going for the sale knowing they will miss the deadline. You know your boss, so you can work out how well that conversation will go if you miss the deadline, or the results are mediocre, at best. Of course, a freelancer is going to be cheaper than an agency, so taking cost away, how do you then decide who the best fit is? As stated before, there are websites, and there are websites (or brands, or products). Well, perhaps the next question will help. 3. ‘What am I buying?’ Sounds trite, I know, but to discern who is the best fit you will need to understand what you are buying. Naturally, all your shortlisted agencies deliver output, but where you start to see differences is in the way an agency arrives at a solution. That’s why you ask the question ‘What is your process, and how do you approach a new project?’ Once you understand the process, you can begin to understand the inputs and outputs of each stage and where the value is. If your agency wants to spend a day or two with you to interview your sales team, marketing team, customer support team, and the C suite, then that, right there, represents a tremendous amount of value. 
That’s input, and dare I say it, the input is possibly the most crucial part of the project. There will be additional things like copywriting, photography, illustration or animation that will help you stand head and shoulders above your competitors. Just look at brands like Mailchimp, or Stripe, or Asana. Apart from their stellar products, there is a reason why they stand out. They invested in creating a tone of voice or visual language that is unique to them and all that they stand for. If your agency does their due diligence, they will find out where you see your company in the broader marketplace and carry out a litmus test to see what sort of personality you want to project to the world. Well-crafted copy or illustration can do just that, but ask if that is included or is extra. It will ordinarily be extra because neither you nor the designer will know at the outset what the reaction to that litmus test will be. When I said there are websites, and there are websites, some are perfunctory and serve a purpose. In contrast, others can be a manifestation of a pivoting business that can double or triple revenues. Know what you are buying. Some agencies sell websites, and some sell change. It just happens to be a website that is the catalyst for change and therefore has inherently more value. 4. ‘How do you handle differences of opinion or conflict with a client?’ We all like to think that the design process is going to be as smooth as a Roger Federer return. But the truth is, we are human; mistakes or differences of opinion can occur. Sometimes project timelines slide because, we the designers, underestimated the task of how difficult it was to get the board to agree on a strategic direction. Or, sometimes, you, the client can sit on feedback for a month without any adjustment to the overall project delivery date, and all those things can cause friction. So to gauge the professionalism of your agency ask them that very question. 
Are they transparent (and dare I say, human) and do they open up? Can they defend their work? Do they put thought into each decision they make, and can they articulate why they made that choice? If your agency can walk you through the decisions you collectively make, based on the input you have given, there is less chance of friction as you are all heading in the same direction. You want to find an agency that can guide you — they are the experts after all — but you don’t want a maverick who just doesn’t listen. Another way of gauging the intelligence and professionalism of an agency is asking them what hurdles they had to overcome in any given project in their portfolio to find out how they navigated them. Trust your instinct here too. You and the agency you invite in will be on your best behaviour in the first meeting. You have to try and cut through that, and ask yourself, do I like these people? If the answer is no, evaluate the pros and cons of entering into a relationship with an agency that you can’t rely on or lean on when the going gets tough. 5. ‘What work are you most proud of and why?’ In some situations, you will have decided to bring in an agency based on their reputation and portfolio alone, so you don’t need to go over old ground. You know they can do the work. In other instances, it may be a case of a bit of show and tell. Designers will inevitably tailor their portfolio to your needs. They will walk you through what the client problem was and how they solved it. Be wary of the agency that does nothing but talk about themselves. If you ask an agency what work they are most proud of and why, it will unearth what motivates them. Designers love doing good work — don’t we all — so finding out what gets them out of bed in the morning is a sure way of discovering if they are a good fit. This question is likely to get your designer animated and will help you read why they are passionate about certain things and whether they align with your needs, or not. 
6. ‘What do you look for in a client?’ Another way of finding out if an agency is suitable for you is what qualities they look for in a client. If I could take you into the mind of a designer for a moment, then this will help you understand their motivations. Designers love doing good work, as it scratches an itch all human beings have. Not all jobs present opportunities to do good work, or that itch is just out of reach even if you are double-jointed. So, when you invite a design agency in to brief them, they will be sizing the project up for creative potential as well as working out if you are an ally or high maintenance. By asking them what they look for in a client, you will immediately know if you fit their criteria. Hiring designers (or anyone come to that) is no longer just about take. It’s about what you can give too. They will also be assessing what barriers are going to be in the way of getting to design nirvana. If your CEO is super prescriptive and thinks he or she knows what good design is (he or she well might!), then that is a barrier. Agencies want grown-up relationships, not parent-child relationships so expect questions on the chain of command and how good you, as the client, are at gathering feedback. 7. ‘What is good design work?’ The answers to this question will tell you a lot about a design agency’s motivations. Try to move beyond the knee jerk or vanilla response. ‘I just want to win awards!’ Well, what awards? Most creative? Most effective? Best B2B SaaS marketing website award? ‘I want to get loads of followers and likes on Instagram or Dribble’. (I made that one up, but I kid you not, there are some vacuous designers out there.) ‘I love working with small businesses and SMEs because design can put a dent in their universe. I can’t put a dent in Google’s universe, but I can in yours, and that’s what gets me out of bed.’ (Sorry, that’s my motivator. But you’d hire me, right?) 
Put simply, designers love their job and their craft, and there is an agency out there that simply loves doing what they do for businesses just like yours. It doesn’t stop at the output though. Agencies are looking for long term relationships with their clients. Aside from keeping the lights on, ask them why is that? Look for answers that align with your needs. 8. It’s the designer who should be asking all the questions, not you. Well spotted. That isn’t a question, but it is something to be wary of. If you find yourself listening to the creative director tell you how fantastic their work is, then chances are you aren’t going to get a proposal back that outlines what your problem is even if you have written the most watertight brief. The question you have to ask yourself is, how will the agency gather all the information they need to give me a meaningful indicative cost? When you are contracting a designer or design agency, you must ensure that they have understood your problem and have seen a similar set of pain points before in other clients’ businesses. Listening might not seem like a critical design skill, but believe me, it is essential, and if they aren’t listening at the outset, are they going to hear you when you are giving feedback? Summary Yes, buying design can seem like a leap of faith, but by asking those questions, you will begin to understand why some agencies are more expensive and why, what value they bring and what their underlying motivations are. As long as you have done your homework beforehand and invite the right agencies in it may well boil down to personal chemistry, and that’s fine too. The best outcomes arise from mutual respect, where open dialogue and debate can happen, because tricky decisions have to be made. Only you know what’s right for you, but at least you now have the questions to find the right partner.
https://uxdesign.cc/the-questions-to-ask-when-hiring-a-designer-or-design-agency-695b92254773
['Marcus Taylor']
2020-07-24 11:15:21.091000+00:00
['User Experience', 'Design Process', 'UX', 'Design', 'Hiring']
Writers of The Process
Lucius Patenaude Work: I am a film director and writer currently living in Nashville, Tennessee. I freelance with local productions to support myself. I still have much to learn. Passion: Novels, film, animation, video games. I consume stories in any way, shape or form. I seek out stories to teach me more about the world I live in and challenge my beliefs. I especially love stories that explore the relationship between God and man. Most of my personal writing is high-concept fiction with abstract themes, though I very much enjoy shooting documentaries about everyday people. Tidbits: My extended family is made up of Scandinavians from Minnesota and pioneers that helped build Texas, so I’m basically a Viking cowboy. Though born in Texas, I was grown in Northern Thailand and homeschooled through high school. My skills include cooking Thai food and firing Civil War-era cannons. Most recommended movies: Sound of My Voice, Children of Men, Noah. To see some of my work visit: luciuspatenaude.com. Adrian Patenaude Work: I’m a writer trained in advertising and public relations, but I have also dabbled in poetry, short stories, blogging and screenwriting. I currently reside in the beautiful city of Austin, where I’m working at a PR firm downtown and getting connected with the local film industry. Passion: I believe in the power of stories, whether they be in the form of books, music, poetry, graphic novels or film. My own stories tend to explore themes of faith and culture, two of the most significant aspects of my life. I want everything I write to be courageous, communicating unexpected truths about God, the world and myself. I’m also fascinated with the way social media is shaping our psychology and passionate about writing great characters in order to eradicate stereotypes. Tidbits: I’m a movie buff, global nomad, published poet, scrapbooker, sweet tooth and cat enthusiast. I can speak Thai, code in CSS & HTML, bake sourdough bread and write a killer thank you note. 
I’ve always wanted to go to Japan, but never have, and never wanted to go to India, but did a few summers ago. My happy place is somewhere between a beach and a bookstore. INFJ. Oh, and I’m proud to be Lucius Patenaude’s little sister. Lauren Quigley Work: I graduated from Abilene Christian University with a Bachelor of Science in Nutrition, and minored in both Communication and Digital Entertainment Technology. I’m currently job hunting and doing freelance work in San Antonio, Texas. Passion: Writing, film, any and all media. Being captivated with a truly great story is one of my favorite feelings in the world, and I want to give that feeling to others through words and a camera. I believe in authenticity, asking questions that might not have answers, and shining light on the everyday experiences that make up life. Tidbits: I grew up in a family of six, was homeschooled through high school, and my first solid experience in creative writing was through Lord of the Rings and Pirates of the Caribbean fan fiction. I studied nutrition because I’ve always wanted to help people live long lives, I love food, and the human body is amazing. Sometimes I talk like Batman to make my husband laugh. Joseph Quigley Work: I am a software developer for USAA specializing in mobile apps with many computer-related side projects outside of the 9-to-5. Passion: Good stories, whether in books, video games, movies, photographs, or a stranger sitting next to me on an airplane. I also enjoy creating things with technology that make people’s lives easier or more enjoyable. I am happiest when I can merge stories and technology together. Tidbits: I come from a family of engineering, mechanical, and entrepreneurially minded people. I love to cook for others and I yell a lot when playing intense video games. I am currently working on making a video game with some of my best friends and my wife. Emmy Sparks Work: I recently graduated with a biology degree and an ambiguous vision for the future. 
Currently, I work as an Administrative Assistant for a church, crossing off the last thing I ever wanted to try: being a secretary. I should probably make more life goals. I TA Anatomy and Physiology lab courses on the side, which means I literally have skeletons in my closet. Passion: Ice Cream. Pizza. I believe that there should not be so much animosity among religious people toward science, or vice versa. By nature, neither party can prove or disprove the other. In fact, what I have discovered so far is that science reveals that there is room for God, which I find exhilarating. There are so many things I want to say about this topic. I hope to do so soon. Also, I highly value child sponsorship, and sponsor two children through World Vision and Compassion International. Tidbits: Once upon a time I told my Nana that when I grew up I would be a writer in New York City. We’ll see. I find joy in thinking about thoughts and cooking a lot of food for one person and another person (a note: I love him). I have a goofy dog named Charlie because I needed something to save and he has a terrible case of sad puppy eyes. His Instagram is @charliestatus, because what stranger wouldn’t want to see pictures of my dog. I slay dragons. Caroline Nikolaus Work: While I spent more time learning, studying, and playing music in college, my degree is actually in psychology (music minor). Saving the possible pursuit of graduate school for later years, I have moved to Nashville, TN as a solo artist and writer. Currently “starving” away as a waitress. Passion: People and that thing called Music. If I have any certainties in this life, they are to help and love people in any way I can, from newborns to the oldies. I happen to think music brings change and touches the soul in phenomenal ways, besides the fact that it is a universal language (ah!). I have a passion for serving, culture, for a world outside of my own, and for understanding each person as they are and how they became that way. 
Tidbits: I grew up in America, Japan, and Germany. Traveled extensively and don’t want to stop. Ever. Part of my heart’s been planted in Africa. My family is extremely close, something of a rarity these days. Oh, I’m the youngest. Photography and videography on the side. Huge lover of art museums, forests, walking barefoot, English tea/tea time, ultimate frisbee. I could say more. Those days when I can sit at the piano and play, lose all sense of time and enter into my made-up reality, created and shifted with every new note I play. Brandy Rains Work: I have an Art Education degree but I don’t really know what that means. I am also currently pursuing a master’s in Education, but I don’t know what that is, either. And because of that, I’ll be working for a non-profit in Nashville when I finish grad school. Passion: Being outdoors. I currently live in the mountains, so I suppose I have some sort of steel-like lung. Writing poetry, rock climbing, reading, hiking, and drinking pretentious yet environmentally conscious fair-trade coffee. Attempting to be humorous and most of the time failing miserably. Tidbits: I can’t tie a cherry stem with my tongue, but I CAN eat at least 32 grilled shrimp in one sitting, and I feel like that’s just as impressive. I performed my first slam poem less than a year ago, and it made me a better person. I imagine sometimes what it would be like to have enough abs to matter. I’ve worked in social justice/non-profit longer than I’ve been attending college. I am the first person in my family to graduate college. I’m one of “those” people who tries to take Instagram seriously as a photographer. Julia Curtis Work: Currently working on my English degree at Abilene Christian University. No, my plan is not to be a teacher, though who am I to decide my future. Passion: People, their stories, and what they do with them. This can be through books, movies, video games, and yes, even television. I devour any form of a good story.
I like making stories of my own, and being in other people’s stories through theater. I daydream about camping in the mountains and going to Hogwarts. God has blessed me with opportunities to travel, and travel I must. Tidbits: My homes have included the areas of California, Honduras, Japan, New Jersey, Arkansas, Ecuador, and Texas. Food is everything. Seriously it keeps you alive. I love people watching. I have the greatest friends in the world. Krista Cukrowski Work: I’m a Senior Digital Entertainment Technology student with minors in Graphic Design and Art, graduating in just a few months. I work as a Media Assistant, teaching software tutorials and helping students hurdle over blockades in their creative urges. Passion: Art, good food, and even better company. I’m of the opinion that there’s not much that can’t be bettered with a solid community of both peers and mentors (and a home cooked meal). Grad school will be coming for me within the next few years for Art History and Museum Studies. Tidbits: I am a professional whistler. There is nothing else that I claim such proficiency in. National parks and museums are my second home. Ray C. Loyd Work: I used to be a cashier, sneaker reseller, and financial aid dependent. Now, I take photos and shoot video on a freelance basis, while selling my soul writing words for ads or tech companies. Also an aspiring film director, lebenskünstler and sane person. Crosses fingers. Passion: On most nights, I’m either nerding out on movies, basketball, food, or camera gear. Fairly certain that I must be passionate about having life crises, too, if I keep putting myself through so many. So let’s include these: thinking about old flames, feeling older than I am, and suppressing my only-child syndrome. I like small, sentimental stories that feel big. Tidbits: I had a name change in 7th grade. 8th grade was weird. Then I moved. Please, please, please watch any Paul Thomas Anderson or Richard Linklater movie and tell me you loved it. 
Jeramy Garner Work: I am a technology coordinator and network administrator for a private school. I also freelance as a photographer, DJ, and all-around tech person. Passion: Food. Having spent several years of my childhood on food stamps, I grew to appreciate the importance of even a single hot meal. As I got older, I grew to appreciate how food was made. Now in my 20s, I know my way around a kitchen fairly well, and am learning how important it is to share that knowledge with people. Tidbits: I have a degree in psychology. I worked for a year as a substitute teacher where I taught Pre-K. I also blues dance and swing dance.
https://medium.com/the-process-collection/names-and-faces-3b0b95a3d83c
['Lucius Patenaude']
2015-03-13 21:31:33.557000+00:00
['Biography', 'Twenty Something', 'Writing']
Detecting Toxic Comment
Dive into NLP: using Multi-label Classification techniques Authored by Chengx Li, Chengxi (Michael) Yang, Haopeng Wang, Hao Zheng MOTIVATION In modern society, as we collect more and more data, there is a huge need to organize and classify it. These classification problems often involve predicting multiple labels at the same time, which is so-called Multi-Label Classification. We share an interest in this topic with many data scientists and scholars. After an in-depth survey of the potential applications of Multi-Label Classification, we found that a project done by a group of students at Simon Fraser University presented very promising results in toxic comment detection using multi-label classification techniques. We had the honour of having one of the members of this group discuss their project with us. In this blog, we are going to dive right into how multi-label classification is implemented for toxic comment detection. They made improvements on a simple baseline model and compared it to other strong solutions to properly evaluate their work. Firstly, what is Multi-Label Classification? What is the difference between Multi-Label Classification and Multi-Class Classification? Let’s recall the Binary Classification problem. Suppose we have two classes, and we want to decide whether an item belongs to Class 1 or Class 0. If we build a fully connected Neural Network, the last two layers of our network might look like this: Notice that there is only one node in the output layer, and the activation function is a sigmoid, which returns a value in the range (0, 1). This value can be viewed as the probability that the item belongs to Class 1. We can then set a threshold, for this example 0.5: if the output of the activation function is greater than the threshold, we classify the item as Class 1; otherwise it is assigned to Class 0.
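As a quick illustration, the sigmoid-plus-threshold decision can be sketched in a few lines of NumPy; the logit value below is made up purely for illustration and does not come from the project.

```python
import numpy as np

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

logit = 0.8                     # hypothetical raw output of the last layer
prob = sigmoid(logit)           # probability the item belongs to Class 1
label = 1 if prob > 0.5 else 0  # apply the 0.5 threshold

print(round(prob, 3), label)
```

A logit of 0.8 maps to a probability of about 0.69, which clears the 0.5 threshold, so the item is assigned to Class 1.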
For the Multi-Class Classification problem, the most famous example is the ImageNet problem: classify an image into one of 1,000 classes. The last two layers of the network might look like this. Suppose we have three classes and would like to classify an item as Class 1, Class 2, or Class 3, and one item can only belong to one of those three classes. In the output layer, the number of nodes is equal to the number of classes. No matter what activation function is used for the last layer, we apply the Softmax function to it. Softmax can be viewed as a normalization over the output layer, so that all the output nodes add up to 1; in our case, y1 + y2 + y3 = 1. Each value represents the probability that the item belongs to a certain class. We classify the item to the class with the highest probability; it doesn’t matter whether that probability is greater than 0.5. Multi-Label Classification is very similar to Multi-Class Classification. The difference is that in Multi-Label tasks, one item can be assigned multiple labels, while in the Multi-Class task we just described, one item can be classified into only one of the classes. Suppose we have three classes (or labels, to be more precise). Here are the last two layers of the network: There are three nodes in the output layer with sigmoid activations. In this case, we compute a probability for each class and set a threshold. If the probability for a class is greater than the threshold, we assign that label. In our example, both the top and bottom nodes in the last layer have probabilities greater than 0.5, so we label this item as both Class 1 and Class 3. The task of toxic comment classification is a multi-label classification problem with six labels: toxic, severe_toxic, obscene, threat, insult, identity_hate. A sample of the training data might look like this: We need to build a model which takes a comment as input and outputs all the toxic labels that apply.
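The contrast between the two output layers can be made concrete with a small NumPy sketch; the logits below are invented for illustration and do not come from the project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

# hypothetical raw outputs for Class 1, Class 2, Class 3
logits = np.array([1.2, -0.4, 0.9])

# Multi-class: probabilities sum to 1, pick the single best class
mc_probs = softmax(logits)
mc_class = int(np.argmax(mc_probs)) + 1

# Multi-label: independent probabilities, keep every label above 0.5
ml_probs = sigmoid(logits)
ml_labels = [i + 1 for i, p in enumerate(ml_probs) if p > 0.5]

print(mc_class, ml_labels)  # 1 [1, 3]
```

With the same logits, softmax commits to a single class, while the independent sigmoids let Classes 1 and 3 both clear the 0.5 threshold.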
APPROACH Three different models were implemented by the author to compare performance and results: a baseline logistic regression, a Recurrent Neural Network with ELMo embeddings, and a Recurrent Neural Network with a convolution filter. APPROACH 1: LOGISTIC REGRESSION The first approach used TF-IDF to vectorize the words in the dataset and fed the TF-IDF matrix into logistic regression. TF-IDF is short for term frequency-inverse document frequency, a method that reflects how important a word is to a document in a corpus. The tf–idf value increases proportionally to the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general. APPROACH 2: RECURRENT NEURAL NETWORK WITH ELMo EMBEDDINGS ELMo embeddings (Embeddings from Language Models) are tailored to learn the semantics and context of words. ELMo has an internal bi-LSTM which computes forward and backward language-model probabilities. A basic RNN is not very effective, so long short-term memory (LSTM) cells were introduced; their repeating module performs more operations, which enables the LSTM network to remember long-term dependencies. A bidirectional RNN with LSTM cells was used to capture information in both the forward and backward sequences of words in sentences. The output of the LSTM layer is the concatenation of the bidirectional LSTM cells’ outputs. Dropout was used during model training. The model produces an output of six probabilities specifying whether the sentence carries each toxic label. Using a fully-connected layer, multiple neurons are reduced to 6 outputs. A sigmoid activation is used in the layer to form a probability ∈ [0, 1]. APPROACH 3: RECURRENT NEURAL NETWORK WITH CONVOLUTIONAL FILTER This neural network setup is similar to the one above.
The differences compared to the previous model are: the ELMo embedding is replaced by a GloVe embedding layer constructed from the vocabulary of the dataset, and a convolution filter is applied to the output of the LSTM layer. RESULTS As the bar chart in Figure 1 shows, the accuracy scores of the Logistic Regression, ELMo+Bi-RNN and Bi-RNN+convolution models are 0.9756, 0.9791 and 0.9836 respectively. The baseline logistic regression has the lowest accuracy score among the three models, and both neural network models show clear improvements. As shown in Table 1, the ELMo+Bi-RNN model achieves an improvement of 0.35% in accuracy on the test set compared to the baseline, and the Bi-RNN+Convolution model an improvement of 0.82%. Based on these experiments, the Bidirectional Recurrent Neural Network with a convolutional filter works better than ELMo word embeddings fed into a Bi-LSTM layer. Takeaway from the analysis ELMo embeddings do not provide much improvement over the baseline. ELMo is trained on Google news data, and the vocabulary of a news corpus does not contain impolite words; however, there are many impolite words in the toxic comment dataset. Because the vocabularies do not overlap, ELMo embeddings do not provide informative data for the neural network. Both neural network models show improvement over the logistic regression baseline. This is because a neural network produces a high-dimensional classification boundary for the dataset, while logistic regression produces a classification function with fewer dimensions. Thank you so much for your time; we hope you’ve enjoyed reading our blog post. With your help, we can work together to raise awareness of cyber-bullying: by simply clicking the clap button, you could influence or even inspire more people to help build a safer, healthier online environment.
Let more data scientists know: multi-label classification is armed and ready for trolls and their toxic comments.
References
Kaggle Inc (2018) Toxic Comment Classification Challenge. Available at https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge
Sujay S Kumar (2 Oct 2018) ELMo Embeddings in Keras. Available at http://sujayskumar.com/2018/10/02/elmo-embeddings-in-keras/
Edward Ma (30 Oct 2018) ELMo helps to further improve your sentence embeddings. Available at https://towardsdatascience.com/elmo-helps-to-further-improve-your-word-embeddings-c6ed2c9df95f
Michailidis, Marios (2017) Investigating machine learning methods in recommender systems (Thesis). University College London. Available at https://mlwave.com/kaggle-ensembling-guide/
Amar Budhiraja (15 Dec 2016) Dropout in (Deep) Machine learning. Available at https://medium.com/@amarbudhiraja/https-medium-com-amarbudhiraja-learning-less-to-learn-better-dropout-in-deep-machine-learning-74334da4bfc5
Rohith Gandhi (26 Jun 2018) Introduction to Sequence Models — RNN, Bidirectional RNN, LSTM, GRU. Available at https://towardsdatascience.com/introduction-to-sequence-models-rnn-bidirectional-rnn-lstm-gru-73927ec9df15
https://medium.com/sfu-cspmp/detecting-toxic-comment-f309a20a5127
['Chengxi Li']
2019-03-11 07:26:36.370000+00:00
['NLP', 'Deep Learning', 'Cybercrime', 'Machine Learning', 'Big Data']
Report: #EarthDay2020 Webinar On “Climate Action & COVID19”
The webinar tagged “Climate Action and COVID-19” focused on Health, Education, Agriculture and Water as they relate to the COVID-19 pandemic and our survival after it. Hosted by Olumide Idowu, the Co-Founder of ICCDI, the event was graced by professionals in each of the selected thematic areas. Speakers at the #EarthDay2020 Webinar on “Climate Action & COVID19” The speakers were: Chris Chukwunyere, a public health practitioner with experience in community and public health development, who spoke on “Health and COVID19”; John Agboola, a farmer, youth advocate, and agricultural researcher for sustainable development in Africa, who spoke on “Agriculture and COVID19”; Mmanti Umohi, lead consultant of PurplePatch Consult and an educational psychologist, who spoke on “Education and COVID19”; Raquel Kasham Daniel, an educator, entrepreneur and development practitioner in Nigeria, who spoke on “Children and COVID19”; and Victor Ogunsola, a researcher and development practitioner, who spoke on “Water and COVID19”. Many of the issues discussed concerned the current global pandemic at national, regional and global levels as it relates to our primary means of survival, now threatened by the growing global health crisis. The challenges and solutions proffered include, but are not limited to, the following. Water Challenges: Over 50 million people in Nigeria have no access to fresh, clean water, particularly in northern Nigeria, where people depend heavily on rainfall for water-related activities. One finding shows that the cost of purchasing water has risen by 69.7% since the pandemic began, while washing hands alone requires 6 litres of water daily. Meanwhile, 50% of the rural population spends at least 500 naira daily to buy water to drink and conduct domestic chores. Solutions: Water should be a priority for all nations, particularly in Africa; hence, governments have to ensure water provision for all.
If not, a water crisis will be inevitable both now and after COVID-19. Municipal players in the water resource sector should be critical agents and partners in championing the cause of providing water. Agricultural Challenges: The global health crisis has lowered food production and further strained the food supply chain, compounding unemployment among farmers, who are mostly rural dwellers, and causing more widespread hunger. The hunger crisis has already deepened: according to a United Nations report, over 135 million people are living in hunger. Solutions: Massive investment across all sectors, particularly in the agricultural industry, should be a priority. Monetary support, seeds, and fertilizers should be easily accessible to support end markets, value addition, and increased food production. At this time, innovative ideas from individuals combined with governmental support will be a holistic approach to rescuing the nation from nationwide hunger and starvation post-COVID. Public Health Challenges: Many individuals fail to maintain physical distancing despite the rules requiring it, increasing the global death rate. The impact of this on the economy has left 2 million people unemployed, with many more to come post-COVID. Mental health cases, domestic abuse, and violence have increased rapidly, straining marriages and limiting social interaction. The pandemic kills people quickly, leaving no time for solutions, unlike climate change, which is slow and cumulative and still gives us time to act now to mitigate and adapt. Solutions: The only way to curb the spread of COVID-19 is to stay at home and wash our hands frequently, to prevent further casualties. Proper funding of public health care facilities and health care providers should be a priority.
It is crucial to invest in these facilities to help save lives, because there will be more pandemics to come, and our capacity to handle them depends on the choices we make today. This period has given us a sense of urgency and proactiveness for the future; it is up to us to use it to save our world. Educational Challenges: In our world today, over one billion students are out of school. This will not only affect the physical teacher-student relationship but has also greatly interrupted the “social flow” of students. The educational system in Nigeria adopts a face-to-face teaching method. With stay-at-home measures, over 85% of students who live in rural communities and attend public schools will lack access to information, education, and adequate learning materials. Many parents have become overwhelmed by this crisis and are not adequately equipped or mentally prepared to help educate their children. The lack of investment in advanced teaching technologies and training materials for teachers as well as students will create more chaos in teaching after COVID-19, and the lack of financial capacity to pay staff will compound the strain post-COVID-19. Solutions: As a nation and as a people, we need to rethink strategies, restructure and re-evaluate our investments and spending, and channel our energies towards innovation and partnerships. Compiled by Abimbola Abikoye, Project Lead, ICCDI AFRICA
https://medium.com/climatewed/report-earthday2020-webinar-on-climate-action-covid19-ba20d5d464bf
['Iccdi Africa']
2020-04-25 08:27:55.397000+00:00
['Health', 'Covid 19', 'Climate Change', 'Agriculture', 'Wash']
Use Python to Convert Polygons to Raster with GDAL.RasterizeLayer
Use Python to Convert Polygons to Raster with GDAL.RasterizeLayer Scale-up your geoprocessing workflows with Python Photo by author. When you work with spatial data, it’s inevitable that you will need to combine information from both a vector and a raster data source for the same location. This task can easily be accomplished manually, but it often becomes quite cumbersome when the process must be automated across a large number of features, time periods, and/or datasets. Discrete, irregularly shaped polygons do not always play nice with structured, rectangular grids. In such situations, conversion from vector to raster (or sometimes vice versa) is often the best option. In a previous example, I demonstrated how vector to raster conversion can be implemented to calculate raster statistics within polygon boundaries. In Python, the root of polygon (vector) to raster conversion lies with the gdal.RasterizeLayer() function. This article demonstrates multiple usages of gdal.RasterizeLayer() . For this example, we’ll consider how to rasterize the channel network, shown below, which is represented by polygons. This is a simple example because the channel network is concurrent with the underlying grid, so no resampling of either layer needs to occur. Imports and Input Data We’ll use the OSGEO Python modules to handle geographic data: gdal for raster data and ogr for vector data. Then we need to load a raster and a polygon layer (note that the vector file is opened with ogr, not gdal). We’ll also need the geotransform values for the raster. For best results, ensure that both the raster and vector datasets use the same coordinate reference system.
from osgeo import gdal, ogr
fn_ras = 'path/to/raster'
fn_vec = 'path/to/vector'
ras_ds = gdal.Open(fn_ras)
vec_ds = ogr.Open(fn_vec)
lyr = vec_ds.GetLayer()
geot = ras_ds.GetGeoTransform()
Set Up the New Raster Now use gdal to create a new raster for the rasterized polygons. In this case, I’m making the new raster concurrent and orthogonal with the input raster.
You can see my zonal statistics tutorial for an example of dynamically adjusting raster size for polygons.
out_net = 'path/to/output.tif'  # output file name for the new raster
drv_tiff = gdal.GetDriverByName("GTiff")
chn_ras_ds = drv_tiff.Create(out_net, ras_ds.RasterXSize, ras_ds.RasterYSize, 1, gdal.GDT_Float32)
chn_ras_ds.SetGeoTransform(geot)
GDAL.RasterizeLayer() Here comes the rasterize magic. We’re simply going to pass the new, empty raster, the band number of the new raster to update (band 1, the only band in our case), and the layer to rasterize to gdal.RasterizeLayer() . In this case, the result is a value of '1' inside of polygons, and '0' outside of polygons. By assigning a no-data value, we get the output image below.
gdal.RasterizeLayer(chn_ras_ds, [1], lyr)
chn_ras_ds.GetRasterBand(1).SetNoDataValue(0.0)
chn_ras_ds = None
Rasterize By Attribute Sometimes you may want to preserve a value from the vector attribute table, like an identification value. This can easily be accomplished with gdal.RasterizeLayer() using the options argument. You may want to review the documentation for the rasterize options.
gdal.RasterizeLayer(chn_ras_ds, [1], lyr, options=['ATTRIBUTE=chn_id'])
chn_ras_ds.GetRasterBand(1).SetNoDataValue(0.0)
chn_ras_ds = None
Now you can see that each reach has been rasterized according to the channel id (chn_id) value. Conclusion
https://towardsdatascience.com/use-python-to-convert-polygons-to-raster-with-gdal-rasterizelayer-b0de1ec3267
['Konrad Hafen']
2020-12-29 22:55:58.696000+00:00
['Towards Data Science', 'Data Science', 'Geography', 'Python', 'GIS']
Ground Rules for Writing for Psych Ward Experiences
For people who’ve been through the psychiatric inpatient system, voluntarily or involuntarily. This is not for people that work in them, but people who were put in them (unless you’re both). See the rules before emailing [email protected] to see about becoming a writer.
https://medium.com/psych-ward-experiences/ground-rules-for-writing-for-psych-ward-experiences-82605fe2b3f
['Kit Mead']
2016-03-15 16:14:40.913000+00:00
['Mental Health']
FlutterPub is now CodeChai ☕
CodeChai Logo For the past year, my partner Danish Amjad and I have been running the FlutterPub publication on the side. Although 2020 has been very hard and challenging due to Covid-19, we are happy with the progress FlutterPub has made. With over 13,000 followers and more than 12,000 views a day, this publication has gathered hundreds of awesome articles on Flutter by amazing writers around the world. But as time passed, we realized that when there are other, more active publications about Flutter, such as Flutter Community, it doesn’t make sense to run yet another one. This creates confusion not only for readers but also for writers, who have to decide which publication to put their articles in. This may increase competition, but we believe sharing knowledge should not be competitive. Rather, it should be encouraged and friendly.
https://medium.com/codechai/flutterpub-is-now-codechai-91838c00dc11
['Wajahat Karim']
2020-11-05 12:40:51.231000+00:00
['Coding', 'Programming', 'Software Development', 'Flutter', 'Productivity']
Adrienne Rich, 1929–2012
The wonderful poet Adrienne Rich has died. Here’s an excerpt from her Twenty-One Love Poems from 1977, via Richard Lawson. (And here are several more, from The New Yorker.) No one’s fated or doomed to love anyone. The accidents happen, we’re not heroines, they happen in our lives like car crashes, books that change us, neighborhoods we move into and come to love. Tristan und Isolde is scarcely the story, women at least should know the difference between love and death. No poison cup, no penance. Merely a notion that the tape-recorder should have caught some ghost of us: that tape-recorder not merely played but should have listened to us, and could instruct those after us: this we were, this is how we tried to love, and these are the forces they had ranged against us, and these are the forces we had ranged within us, within us and against us, against us and within us.
https://medium.com/the-hairpin/adrienne-rich-1929-2012-126a0af30263
['Edith Zimmerman']
2016-06-01 21:52:01.986000+00:00
['Books', 'Adrienne Rich', 'Poetry']
Things I Miss
Things I Miss Not least, I’m mourning my second office Inspiration strikes. (Image credit: Angela Bailey on Unsplash) Between pandemic fears and homeschooling horrors, only a very few things have remained constant. While I’m fine — we’re all fine — there is so much that I miss, little things that I didn’t know meant so much to me in the before. People. And solitude. I had to give up planned visits to my family in Canada, and a selfish vacation in France. I had to give up my running group, and the bootcamps I teach, first due to physical distancing regulations, and then due to my-children-are-always-at-home. The irony is not lost on me: I miss people, colleagues and friends, but because I contain multitudes, I mostly miss being alone. The second office(s) I miss choice. I miss being alone at my living room desk, a gorgeous old secretary that came with our flat, but I also miss the luxury of being able to work at my “second office,” Starbucks…well, two Starbuckses and my local pub, really; all three are dog-friendly. I didn’t work there often, no more than once every three weeks, but now I can’t even imagine the idea of having another option besides sharing the dining room table with the three kids (distracting and bickering-y), or being more than arm’s-length away at my own desk, which, with no direct line of sight to children (not) doing schoolwork, is not much of an improvement. My reusable cup Another relic of the time before COVID is my reusable cup. I wrote my name on it with a Sharpie, so they can’t possibly get it wrong. Because I’ve had to chug down giant Starbucks mugs of still-steaming hot lattes too many times when Ziggy-the-office-dog decided he’d had enough, I learned to tuck a clean one into my satchel, as well as a knit coffee sleeve insulator. Now, the environment is back to dealing with disposable cups (though far less often, from me at least), but I still try to have a sleeve with me. 
My satchel I rarely carry my Writerly Satchel(tm) anymore, probably because I rarely leave the house. When I do, I don’t take my laptop with me. It’s my number one piece of writer paraphernalia: a gorgeous old leather Roots bag that I found on eBay. I bought it last summer; I had just returned from a holiday in Florence, in which I ran out of time to find a new purse. Now, I don’t buy expensive purses, but decided that, if I were to have a souvenir from Italy, it might as well be a leather purse that I promised to use for years. But I ran out of time, then refused to buy one in the airport, on principle. When I got home, my leather-purse pocket money was still burning a hole in my…well, my pocket, and I decided I’d rather have one from Roots. While new ones are far too pricey for my thrifty self, I paid £50 for a used, someone-else-broke-it-in-perfectly satchel, and it will last forever. Most importantly, when I carry it, it is a shield against imposter syndrome. I look like a writer. It easily holds Ziggy’s favourite (small) blanket to lie down on, a chewy stick and some treats to keep him occupied so I can work. It’s like when I used to bring Cheerios to keep my kids busy when they were toddlers, but far weirder and more involved…oh yes, and my notebook, pen and laptop, too. My routines Way back, in the Time Before March, a trip to a “second office” started with a nice long walk with Ziggy. First stop: the dog park, involving as much frolicking and racing around after balls and other dogs as possible, so that he’d be tired enough to relax and let me work. I didn’t play; I was weighed down by my Writerly Satchel(tm), after all. After, muddy and wet, I’d go to the Starbucks close to the park, or the pub on the way back, or the other Starbucks, on the way home from the other park, set up my mobile hotspot on my phone, and sip my chai latte or slurp my soup while Ziggy worked away at his chewy. Did I get more done in Starbucks than at my desk?
No, but I got things done. I blocked out distractions (that weren’t Ziggy, at least), and got to feel like a writer in public. Satchel: check! Notebook: check! Laptop: check! Look at me, I’m writing! The way it is now Here at home, there is less of that. Here, we’re all sitting around the dining room table, the kids on their school iPads, me on my laptop, all trying to work, trying to stay focused, trying to drink my mug of tea before it gets cold. Ziggy curls up on his old bed under my chair, relaxed and sleepy. I’m still a writer here; in fact, I’m a better writer and far more productive despite my surroundings. I mean, I wrote and edited my book, but it’s just not the same as being out. Would I really trade this for how it was? Do I want things to go back to the way they were? Yes, ish. I know they won’t, not for months, maybe not ever. But I want options. I want to live selfishly and be alone, except when I want to be with people. I want the other writers at Starbucks to size me up when I set up my laptop, to see my satchel laid casually beside me, and to see me as one of their own.
https://medium.com/illumination-curated/things-i-miss-f0720e563e16
['Karen Hough']
2020-10-19 07:27:07.229000+00:00
['Coffee Shop', 'Writing', 'Solitude', 'Pandemic', 'Writers Life']
I wasted over 6 months on a machine learning project because of this stupid error…
Peijin Chen · Oct 14 Well, it wasn’t a complete waste; you live and learn. But I spent a lot of time on the project and wanted to write it up as an academic paper — which means you have to consider what added value your paper is offering to the world — and I realized, after seeing my error, that I was not sharing a skillful time series forecasting model as I had hoped, but just, well, a barely-better-than-an-educated-guess model. In my case the data was drawn from the UCI Machine Learning Repository, and had to do with air quality and pollution in Beijing, China. TL;DR — I should have used Mean Absolute Scaled Error (MASE), not R2 (the coefficient of determination), or Symmetric Mean Absolute Percentage Error (SMAPE), or anything else for that matter, to judge the skill of the forecaster. In this hourly measured data set, I thought my model’s one-step-ahead forecast R2 score of 0.95 was dope, but remember, R2 just compares your model’s performance against a naive regressor that always predicts the mean. That, in certain cases, is setting the bar a bit too low. Formula for MASE, from Wikipedia On the other hand, the MASE is just the ratio of two mean absolute error (MAE) performances — the numerator holds the errors from your model, and the denominator the errors from a “naive” regressor, meaning a model that simply uses the previous step to forecast the next step — which is, in some cases, a lot smarter than just using the global mean. Here’s some Python code that will tell you what the MAE for a naive forecaster would be:

import numpy as np
print(np.mean(np.abs(np.diff(my_time_series))))

Why this? Because if you always predict the previous value, your error is just the difference between successive points — take the absolute value of that series, and then find the mean.
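To make the metric concrete, here is a minimal MASE helper in the same spirit as the snippet above. This is a sketch; the function and variable names are my own, not from the original write-up:

```python
import numpy as np

def mase(y_true, y_pred, y_train):
    """Mean Absolute Scaled Error: the model's MAE divided by the MAE
    of a one-step naive forecast computed on the training series."""
    model_mae = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    # Naive forecast error: difference between successive training points.
    naive_mae = np.mean(np.abs(np.diff(np.asarray(y_train))))
    return model_mae / naive_mae
```

A MASE below 1.0 means the model beats the naive "repeat the last value" forecast; above 1.0, all that machine learning is losing to the dumbest possible baseline.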
In that dataset, pick one of the sites and then use as your time series the measurements of PM2.5 (particulate matter < 2.5 microns in diameter, measured in micrograms per cubic meter). You will find that, roughly speaking, the MAE of a naive regressor might be something like 10.8 or so. Sample of PM2.5 data from Beijing for a few days in May 2016. This is hourly data. So that means that, on average, the net change in PM2.5 from hour to hour is about 10.8. In order to be a more skillful forecaster than this, you have to improve on that number. And guess what? I couldn’t really do THAT much better. The accumulation and diffusion of PM2.5 isn’t THAT fast as to change drastically in an hour. Sure, I could cook up a model that managed a 10.3 MAE or so, but is that really so much better than 10.8? Not in terms of making any practical difference to anyone’s health, I don’t think. The naive forecaster presumes you know the observed value from the previous step. That means that if I know the PM2.5 value at 3PM, I barely need a fancy machine learning model to give an educated guess as to what the value will be at 4PM. With all the other features the dataset offers — such as the O3 (ozone), CO (carbon monoxide) or PM10 measurements, as well as meteorological features such as dew point and temperature, and time variables like day of the week, hour of the day and month of the year — it’s not easy to do much better than the naive regressor. Sure, adding a few more lags might help. But I’ve tried that, and it seems to make things worse, with the MAE of the model coming in at 15 or so. On a realistic level, what is more useful is a forecast for 6 hours or 24 hours or even a month ahead. Or maybe you just want some model that can generally tell you what the PM2.5 should be, without knowing in-depth pollutant measurements.
That is, maybe you want a model that can say “okay, in Beijing, on a hot and humid but clear day in July, what would you expect the PM2.5 to be at around 4PM?” How practical is that though? Well, it might be computationally cheaper — you could check out the weather forecast from any old app and use that as a “guess” as to what the weather will be, say, in a week. No fancy sensors needed. And maybe it’s easier to predict temperature and barometric pressure 2 days or a week into the future than it is to predict something like SO2 levels in a week. Again, it all comes down to the predictability of a series, which has a lot to do with its autocorrelation structure and general amount of (sample/permutation) entropy. Specifically, the features that I used were wind direction (e.g. NW), month of the year (e.g. December), day of the week (e.g. Tuesday), temperature, hour of day (e.g. 16:00), the dew point, wind speed, and atmospheric pressure. I decided to show here how H2O.ai’s AutoML creates and scores models. Here’s what I got. You can see that there are Gradient Boosting Models (GBM), XGBoost (a variant of GBM), as well as Random Forests and stacked ensemble forecasters. H2O AutoML results Well, it looks like the top MAE of 26.8 isn’t so bad for just using weather variables, but then I took a look at how it did on the test data — which is just the last 25% of the data (about a year’s worth of hourly data) — and this is what I found: MAE isn’t as good for the unseen data The MAE on the unseen data is 41.4. I believe that with some tuning or some other learners you can get it down to under 40, but probably not much better than 36 or 37. How bad is that in real life? Well, if your model forecasted the PM2.5 levels to be 236 and it ended up being 200, well, that’s still bad air quality, right? And if you estimated it to be 0.00 but it ended up being 36, most people wouldn’t care either.
It’s not like you would be deceiving them into thinking the air was really better or worse than it was. But still, how does that compare against a “naive” forecaster that just uses monthly and hourly averages to predict the PM2.5 levels? The point is, maybe try to compare your model to 2–3 different levels of “naive” to see just how good it is, or how much performance gain there really is. Think about what someone who doesn’t know ML but knows some basic math might do to make an educated forecast. If your forecasts are just 5% better than theirs, is all that ML really worth it? A model that isn’t much better than a naive regressor is nearly pointless. I lost sight of that fact. Don’t do what I did.
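One cheap way to build the “monthly and hourly averages” kind of baseline mentioned above is a group-by mean over a calendar field. The sketch below uses only numpy; the function name and its inputs are illustrative, not from the article:

```python
import numpy as np

def hourly_mean_baseline(hours, values):
    """Predict each observation with the mean of all observations
    sharing its hour-of-day, and report this baseline's MAE."""
    hours = np.asarray(hours)
    values = np.asarray(values, dtype=float)
    preds = np.empty_like(values)
    for h in np.unique(hours):
        mask = hours == h              # all rows for this hour of day
        preds[mask] = values[mask].mean()
    mae = float(np.mean(np.abs(values - preds)))
    return preds, mae
```

In practice you would compute the per-hour means on the training split and apply them to the test split; computing them in-sample, as here, slightly flatters the baseline, which only makes it a harder bar for your ML model to clear.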
https://medium.com/ai-in-plain-english/i-wasted-over-6-months-on-a-machine-learning-project-because-of-this-stupid-error-be972f5e5d8c
['Peijin Chen']
2020-10-15 22:42:06.433000+00:00
['Machine Learning', 'Forecasting', 'Artificial Intelligence', 'Data Science', 'Time Series Forecasting']
The Baffling Disappearance of Shelley Luty
A waitress left work and vanished — and police have been searching for her and her car for almost forty years. Photo by Ricky Singh on Unsplash Shelley Luty was in a good mood when she showed up at work on August 23, 1982. The 19-year-old single mother had been working the 5:00pm-11:00pm shift at the Llanerch Diner in Upper Darby, PA for about a month. Because she was still in her probationary period, she was only working part-time at the moment, but her boss had been impressed with her work and was going to promote her to full-time the following month. Shelley had the kind of friendly, outgoing personality that was necessary to be a good waitress, and her good looks didn’t hurt, either. Shelley and her two-year-old daughter, Jenny, had been living with Shelley’s mother and stepfather, Barb and Bill Skay, but that was about to change. Shelley had an appointment the following morning to sign the lease on an apartment. It would be the first time she and Jenny would be living on their own, and Shelley was looking forward to it. Jenny’s father lived in Texas and wasn’t involved in the young girl’s life, but Shelley’s life revolved around her daughter. Each night, she would call home on her break so she could talk to the little girl, and she always brought some kind of treat home with her. Barbara, Shelley’s mother, also worked at the Llanerch Diner. She worked the overnight shift, coming in to start her work day at the same time Shelley was finishing up. Barbara got to work a little early that Monday night, so she was able to talk to Shelley for a few minutes before the change of shift. Shelley had driven her stepfather’s mint green 1978 Impala, and she asked her mother if it would be alright if she took the car to visit a girlfriend when she finished up her shift. Barbara was sure it would be, but told her to stop home and check with her stepfather just in case. Shelley said she would.
She had already purchased a pack of M & M’s candy for Jenny; she could drop them off at the same time. As Barbara was preparing to start her shift, she took note of a customer who looked as if he was trying to talk to Shelley. Barbara described him as being in his 20s, blond and athletic-looking. As she watched, he grabbed ahold of Shelley’s arm to get her attention. Shelley pulled away from his grasp, but didn’t appear overly concerned. She spoke to the man for a few moments, and he then left the diner. By this time, Shelley’s shift was over. She left the restaurant, and witnesses would later say they had seen her speaking to a man in the parking lot of the diner. From the way they described the man, it was likely the same person Barbara had observed inside the diner. Whoever she was talking to, the conversation was brief, and Shelley came back into the diner just a couple of minutes later. She collected her coat — a gray sweater jacket with a big blue bird on it — and headed for the door. After one final wave at her mother, Shelley was gone. What happened to her after she exited the diner for the last time would be a source of speculation for decades to come. Shelley Luty (Photo provided by Upper Darby Police Department) The parking lot at the Llanerch Diner was small, and usually filled with the cars of customers. There had been no empty parking spaces when Shelley had arrived at work earlier that evening, so she had parked her stepfather’s Impala across the street from the diner. Although the street was not very well-lit, it wasn’t considered particularly unsafe. The Llanerch was located at the intersection of Route 1 and Route 3, both busy roads that saw traffic 24 hours a day. When Shelley walked out of the diner, it should have taken her only a few seconds to get into the car and drive away. No one knows how long it took her to walk to her car, or if she even made it to the car. Neither Shelley nor the Impala would ever be seen again.
Bill Skay wasn’t overly concerned when his stepdaughter was late coming home. Shelley was a very responsible young woman, and he was sure she had a good reason. His wife had been a waitress long enough for Bill to know that there were times when it was too hectic for the waitresses to finish all their side work during their shift, meaning they would have to stay late to complete it. He assumed that was what had happened to Shelley. But as it got later, Bill began to get concerned. A quick call to the diner was enough to confirm that something was wrong. Shelley had left right on time when her shift had ended at 11:00pm. Hours had passed and there had been no sign of her. Bill called police and reported her missing. For Bill and Barbara, the situation was very straightforward. Shelley was missing, and she never would have voluntarily gone missing, therefore something was wrong and the police needed to find her. But for the police responding to the call, it wasn’t that easy. Bill and Barbara were about to get a crash course in life as the parents of a missing adult. They were going through the same heart wrenching emotions that any parent of a missing child experienced, but because their child was legally an adult, police were much less inclined to look for her without definite signs of foul play. It had been a year since Adam Walsh had been kidnapped and murdered, and missing children advocacy groups were springing up all over the country. But there were no groups geared towards finding missing adults. Shelley’s parents were on their own. Shelley was driving a car like this one (Photo via CarDomain) Upper Darby Police Detective Nick Bratsis was in charge of Shelley’s case. He found no evidence of foul play, but there was something about the case that bothered him. He’d been a cop for more than 20 years, and he had learned to trust his instincts. 
Even as he explained to Bill and Barbara that Shelley was legally an adult and it wasn’t against the law for her to be missing, he knew that this case was more serious than a young adult simply taking off on vacation and not telling anyone. First there was the issue of Bill’s car. If Shelley had decided to simply take off, she would have expected that, at the very least, her stepfather would call police and report the car stolen. The mint green Impala was the kind of car that would stick out on the road, and it would be an unlikely choice for someone trying to make an anonymous getaway. But there was something bothering Detective Bratsis far more than the brightly colored car: Jenny. Everyone the police interviewed told them the same thing. Shelley was a wonderful mother, completely devoted to her daughter, and she never would have left home without her. Detective Bratsis believed them. Shelley may have been an adult, but she wasn’t the type who would simply walk away from her entire life without looking back. Although the detective admitted that he still had no evidence that foul play was involved, he was beginning to believe that Shelley had not left under her own free will. Police interviewed all of Shelley’s friends, family, and co-workers, looking for any clue that might bring them closer to discovering what had happened on that humid August night after Shelley left the diner. She mentioned to her mother that she was planning on meeting up with a girlfriend after work; it was why she wanted to borrow the car. Police confirmed with one of her friends that they had made tentative plans to meet that night, but Shelley never showed up. Everyone who knew her was at a loss. It was as if she had simply vanished, taking Bill’s car with her. Police released a composite sketch of the man who had been seen talking to Shelley at the Llanerch on the night she disappeared, hoping that someone would be able to identify him. 
They stressed that he was not a suspect in Shelley’s disappearance, but since he had been in the area on the night she went missing they believed he could have seen something that could assist the investigators. Unfortunately, they were never able to determine who the man was. Police sketch of the man seen talking to Shelley (Photo provided by Upper Darby Police) Shelley’s parents told the detectives that their daughter had previously fractured her skull, and she had a history of epilepsy as well. She had been on anti-seizure medication for years to keep her epilepsy under control. Concerned that the medication might increase the possibility of birth defects, she stopped taking it when she was pregnant with Jenny. She had remained seizure-free throughout her pregnancy, so she had not gone back on the medication after Jenny was born. Many children who suffer from epilepsy eventually outgrow it, and Shelley hadn’t had a seizure in years. Was it possible that she had a seizure on the night she went missing? Seizures can cause a variety of side effects, but some of the most common include problems with vision, confusion and a loss of awareness or blacking out. If Shelley had become disoriented due to a seizure, there’s no telling where she could have ended up. A few wrong turns would have had her heading for the Delaware River, but it seems unlikely that she would have been able to make it that far without getting into some kind of an accident. The streets she would have been on were fairly busy at all hours of the day. Someone probably would have noticed if she was swerving all over the road. It’s possible, but not likely. Her mother told detectives that she was unsure how Shelley would be affected by stress. If someone had abducted her or gotten into a physical confrontation with her, perhaps this could have triggered a seizure. No one heard any cries for help that night, but maybe Shelley was too incapacitated to cry out. 
Upper Darby detectives continued to investigate Shelley’s disappearance, and the FBI soon joined in the investigation as well. It was a very unusual move for a missing adult case, and most likely had something to do with the fact that both Shelley and the car were missing. Neither the FBI nor the Upper Darby Police Department would comment on what it was that triggered the FBI’s involvement. Detectives investigated every lead that came their way, but nothing seemed to be bringing them any closer to Shelley. They interviewed dozens of people, even going as far as tracking down one of her ex-boyfriends who was stationed in Germany. They asked some of the people to take polygraph examinations, and everyone cooperated with them. No one who was administered a lie detector test showed any signs of deception. One of Shelley’s former boyfriends saw a model in a magazine that he thought looked like Shelley, and when he showed it to some of her family members, they agreed. It looked like a possible break in the case, but when detectives tracked the model down, it wasn’t Shelley. The woman did bear a remarkable resemblance to Shelley, but had spent her entire life living in California and had never heard of the Lutys. In October of 1984, the FBI closed their investigation into the disappearance of Shelley. Their exit from the case was just as abrupt and shrouded in mystery as their entrance had been. Upper Darby detectives continued investigating the case, but they were running out of leads to follow and the case soon went cold. They sent Shelley’s dental records to a number of different jurisdictions that were seeking to identify an unidentified body, but there were never any matches. It was good, in a way, because it meant that Shelley could still be alive, but for her family, nothing was worse than the pain of not knowing what had happened to her.
Bill and Barbara had essentially taken over the role of parents for Jenny, and she even began referring to Barbara as her mother. She had been so young when Shelley disappeared, and had precious few memories of the mother who had loved her so much. Shelley remains classified as a missing person, and Upper Darby police are hopeful that one day someone will finally come forward and give her family the answers they deserve. Shelley Diane Luty was 19 when she vanished after leaving an Upper Darby diner. She is a white woman, with reddish brown hair and blue eyes. She was 5’3” and weighed about 125 pounds. She has a small scar on her nose, and a triangular-shaped scar on one of her arms. She had chipped her upper right front tooth, and it was capped at the time she disappeared. When she was last seen, she was wearing her waitressing uniform that consisted of a white shirt and dark pants and had a gray sweater coat with a big blue bird on it. She was driving a mint green, two-door 1978 Chevrolet Impala. Right before she went missing she was seen speaking to a white male, 25–29 years old, about 5’10” to 6’0” tall, with blond hair and an athletic build. If you have any information about Shelley, her car, or the man she was last seen with, please contact the Upper Darby Police Department at 610–352–7050.
https://medium.com/lessons-from-history/the-baffling-disappearance-of-shelley-luty-e8376bbab8a
['Jenn Baxter']
2020-12-19 15:36:07.331000+00:00
['Nonfiction', 'History', 'True Crime', 'Pennsylvania', 'Unsolved Mysteries']
21 Predictions about the Software Development Trends in 2021
1. Centralized Infrastructure: Cloud, cloud everywhere During COVID-19, most industries suffered heavily, though a handful did not. Cloud is the forerunner among them, and it actually became stronger than ever during the pandemic. If there was any doubt or uncertainty about Cloud adoption, COVID-19 has wiped it away. A global-scale catastrophe like COVID-19 showed that we not only need the Cloud to scale up, we also need it to scale down, i.e., for when demand for our services drops significantly. Think about the tourism and transportation industries that have to maintain their expensive data centers although their market dropped by 90%. Forrester predicted the Global Public Cloud IT infrastructure market would grow to a whopping 120 Billion USD with 35% growth in 2021: No matter which industry you are in (Government, Startups, Agriculture, Healthcare, Banking), plan your Cloud migration, as the entire world is moving to the Cloud sooner rather than later. There will be a huge shortage of, and high demand for, Cloud-Native Engineers in 2021 and onwards. If you are an IT engineer, jump into any MOOC (Massive Open Online Course) to earn your Cloud certificate. The good news is that many of them are offering free months during COVID. Also, the major public cloud providers are offering free courses. Recently the biggest public Cloud provider, Amazon, declared that it will give free Cloud Computing training to 29 million people between 2021–2025: 2. Decentralised Infrastructure: Edge Computing will see exponential growth In contrast to the Public Cloud, where we want a centralized Data Center for Data and Compute power, there are many scenarios where we want the opposite, i.e., the Data and Compute power near the end-user. Some of the reasons are very low latency (5 to 20 ms), high bandwidth, regulatory requirements, real-time use cases, and smart, powerful end-user devices.
Although Edge Computing is an old concept, and we have long used it in Content Delivery Networks (CDNs), it has gained popularity in recent years. With the rise of connected vehicles (autonomous cars, drones), online gaming, IoT, smart devices, and edge AI/ML, Edge Computing will be a gigantic market in 2021 and beyond. Another reason Edge Computing will be key in 2021 is the rise of 5G mobile devices. In 2021, two groups of industries will fight for market share in Edge Computing. One group will be the public Cloud providers like Amazon, Microsoft, and Google, as reported here: Here again, Amazon is the leader with many services like the AWS Snow family and AWS IoT Greengrass. Microsoft is also providing edge services with Azure Stack Edge and Azure Edge Zone. Google is also moving its Data Center services to the end-user with Google Anthos. The other group consists of the industries that already have Edge infrastructure, like telecom companies, data center providers, and network providers. If they can move fast and leverage their advantages (i.e., existing infrastructure), they have the opportunity to lead here. The hybrid cloud provider Red Hat (IBM) will be a key player here with its Hybrid Cloud Platform OpenShift and its engagement in OpenStack. Recently Samsung joined with IBM to develop Edge Computing solutions: State of the Edge is an initiative to create open standards for Edge Computing and make it vendor-neutral. Recently, State of the Edge became part of the Linux Foundation. Like the CNCF, the State of the Edge will gain more momentum in 2021 and onwards. Prepare for many innovations, mergers, neck-and-neck fights, and standardization in Edge Computing in 2021 and beyond. 3. Cloud: AWS is leading, but Multi-Cloud will be the future Among the public Cloud vendors, there is no question about who is the leader.
In Q3 2020, Amazon led the public Cloud market with a 32% market share, as shown below: Microsoft had another strong year with its cloud offering and enjoyed 48% annual growth in 2020. In Q3 2020, Microsoft had a 19% market share, compared to a 17% market share in Q3 2019. As it stands, Google is the third-largest public Cloud provider with its 7% market share in Q3 2020. In 2021, Amazon and Microsoft will keep the first and second spots, respectively. However, Alibaba will take over third place in 2021, as it is just behind Google with a 6% market share in Q3 2020. Also, the Multi-Cloud initiative will get more momentum in 2021. Many companies are also moving to a Multi-Cloud strategy. The CIA has recently awarded its Cloud contract to multiple vendors instead of one single vendor: Until now, Amazon was reluctant to join the Multi-Cloud initiative to protect its market share. But as we already saw with Microsoft 10 years ago, the whole industry and community are bigger than the biggest individual company. Recently, Amazon quietly joined the Multi-Cloud initiative: The Cloud Native Computing Foundation (CNCF) plays a key role in the Multi-Cloud movement and has grown into one of the most influential open-source foundations. In 2021, we can expect more growth in the CNCF. Also, Multi-Cloud service providers like HashiCorp will become more important in 2021. Some outstanding projects also provide API compatibility with popular vendor-specific Cloud services, like MinIO (providing AWS S3-compatible Object Storage). In 2021, there will be more initiatives like MinIO, so that we can easily lift-and-shift popular vendor-locked services. This is good news for the whole industry, as I dream of a world where companies can deploy their applications across multiple Clouds seamlessly. 4. Containerization: Kubernetes is the Emperor, and Docker will slip away Containerization is the core technology of Cloud Native IT, whether Public Cloud, Private Cloud, or even Edge Computing.
For several years now, Kubernetes has established itself as the leading Container Orchestration and Management technology. Like Linux ruled the Data Centers previously, Kubernetes now rules the Public Cloud and Private Cloud landscape. Initially, Google was the leading force behind Kubernetes, but now almost all the giant Tech companies put their weight behind it. All the major public Cloud providers now offer a managed Kubernetes service (Amazon EKS, Azure AKS, Google GKE) along with their managed containerization services. On the other side, Red Hat offers a managed Kubernetes service in the private Cloud with OpenShift. In 2021, we will see more adoption of Kubernetes, as it is the core component in any Hybrid-Cloud or Multi-Cloud strategy. Non-traditional enterprise workloads like AI/ML, databases, data platforms, Serverless, and Edge Computing applications will also move to Kubernetes. On the flip side, Docker is slowly losing its charm as a containerization technology. There are already initiatives to standardize the container format and runtime, and two of them have gotten huge traction in recent years. One is the Kubernetes-led Container Runtime Interface (CRI). The other is the Linux Foundation-led Open Container Initiative (OCI). Recently, Kubernetes deprecated Docker in favor of the CRI and plans to remove Docker support completely in late 2021 in an upcoming Kubernetes release (1.22): As Kubernetes is the 800-pound gorilla of the containerization ecosystem, 2021 will be the beginning of the end for Docker. On the upside, the CRI and OCI will get more momentum in 2021, and CRI-based container runtimes in particular will get a huge boost. 5. Computing: Quantum Computing will get momentum Quantum Computing is the most revolutionary technology on this list. Like the digital computer, it has the potential to impact every sector.
I have created a list of the hottest technologies for the 2030s, and Quantum Computing was in the number one spot: To put it into perspective: if we think of today's most advanced Supercomputers as a normal human being, for example a chess player or an 8th-grade math student, then a Quantum Computer is a supergenius like Magnus Carlsen, who can play against 50 average chess players at a time, or a genius mathematician like Euler. There were some significant breakthroughs and advancements in Quantum Computing in 2020. In June 2020, Honeywell claimed that it had created the most powerful Quantum Computer, beating the previous record set by Google: Only a few days ago, a group of scientists from the University of Science and Technology of China (USTC) showed that a Quantum Computer could beat the most advanced classical Supercomputer comfortably for a particular task (Gaussian boson sampling): Many governments and Tech Giants are exploring and investing in Quantum Computing. Google and IBM are two of the biggest players in this field. Google even launched an open-source library, TensorFlow Quantum (TFQ), for prototyping quantum machine learning models: Amazon is also offering a managed quantum computing service via its Amazon Braket Cloud service. Considering the massive interest and its infinite possibilities, there will be some breakthroughs and jaw-dropping discoveries in Quantum Computing in 2021. If you want to explore Quantum Computing, you can use the open-source SDK Qiskit, which also offers a free course: 6. Blockchain: The roller coaster ride will continue Blockchain (the Distributed Ledger) is also one of the major disruptive technologies developed in recent times. Technology-wise, it has the potential to change whole industries. Cryptocurrency played a major role in popularizing the technology, but it also played a major role in pushing it to the “Peak of Inflated Expectations” in Gartner’s Hype Cycle curve.
Many rogue entities capitalized on the popularity of Bitcoin and created scam projects to cheat common people who wanted to get rich in a short time. Now Blockchain is going through the “Trough of Disillusionment” of the Hype Cycle curve. Also, governments are intervening in Cryptocurrencies to prevent scams. Recently the Chinese government shut down the Cryptocurrency scam “Plus Token Ponzi”: Facebook released its Cryptocurrency Libra in 2019 but came under intense regulatory pressure in 2020: Other open-source Blockchains like Ethereum are putting code in the block, making it possible to use it as a smart contract, which is the future of Blockchain. In 2021, Blockchain will be used more as a smart contract mechanism, and hopefully it will enter the “Slope of Enlightenment” phase. Blockchain will get a major boost in 2021, as China has put it in its ambitious 500 Trillion “New Infrastructure” plan: 7. Artificial Intelligence: AI will be for all As one of the hottest technologies of recent times, AI has also seen many breakthroughs in 2020. Another interesting trend is that AI has slowly started to enter all sectors with the slogan “AI for all.” In the natural language processing domain, GPT-3 was the biggest breakthrough, arriving in May 2020. The US company OpenAI created GPT-3, which has made it possible to create human-like text using Deep Learning. Only four months later, the entire world was simultaneously amazed and shocked when the following Guardian article was written using GPT-3: In 2021, there will be breakthroughs in Natural Language Processing where AI will write articles or even small software programs. The other interesting development was AutoML 2.0, which enables automated feature engineering. In 2021, there will be major advancements in full-cycle AI automation and more democratization of AI. AI is not unbiased, and ethical AI is getting more traction.
Another major trend in AI is explainable AI, which demands an explanation of why an AI system made a particular decision. In 2021, there will be major progress in these fields, as the EU has set regulations requiring that AI decisions be explainable. AI will also see major adoption in the aviation industry in 2021 and beyond. Only a few days ago, the US Air Force used AI as a co-pilot to fly an aircraft: AI will also be the centerpiece of China's digital infrastructure of the future: Expect lots of mind-blowing innovations and democratization in AI in 2021. 8. Deep Learning Library: It will be TensorFlow 2.0 and PyTorch Google and Facebook are the two dominant players in Deep Learning and Neural Networks. Google's key business is search, and it is the leading innovator in Natural Language Processing. Facebook's key business is its social network, and it has to handle images, videos, and text. In image processing, Facebook is the tech leader with many innovations. TensorFlow from Google was the leading library in Deep Learning, but everything changed in 2016 when Facebook released PyTorch. PyTorch used a dynamic graph instead of a static graph (used by TensorFlow) and was more Python-friendly. Google reacted by creating TensorFlow 2.0 in 2019, which copied many PyTorch features (dynamic graphs, Python friendliness). It also works perfectly with Google's Colab, a very modern and powerful notebook. Since then, Google has enjoyed an upturn in TensorFlow 2.0's popularity. Currently, TensorFlow is the most popular Deep Learning framework, according to the Stack Overflow Developer Survey, 2020:
https://towardsdatascience.com/21-predictions-about-the-software-development-trends-in-2021-600bfa048be
['Md Kamaruzzaman']
2020-12-25 11:21:21.811000+00:00
['Software', 'Programming', 'AI', 'Data', 'Cloud']
The Ultimate Foundation for Self-Control
This quote is helpful in laying bare the foundations of what you need to ground yourself. We clearly live with the first three enemies more than we do with the latter three. Often, part of transformation isn't about taking big, bold leaps; it's about being able to assess what we want less of and controlling the parts of ourselves that want to override the system and function on autopilot. The three skills for self-control, however, are all about processes. The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It by Kelly McGonigal shows us how willpower is a mind-body response, not a virtue. It is a biological function that can be improved through mindfulness, exercise, nutrition, and sleep. Willpower is not an unlimited resource, and too much self-control can actually be bad for your health. Temptation and stress hijack the brain's systems of self-control, but the brain can be trained for greater willpower. Willpower is centered in a specific region of the brain (within the prefrontal cortex). It uses more energy than almost any other brain region, and therefore it gets tired after prolonged use each day. It's also like a muscle, in that training it through specific meditations and breathing exercises increases its strength and endurance. Interestingly, over time, it is paradoxically far easier to resist temptations if you don't try to repress them, but instead actually focus on them. Many mindfulness studies have proven this counterintuitive finding.
https://medium.com/big-self-society/the-ultimate-foundation-for-self-control-709b9960a19c
['Chad Prevost']
2020-12-09 13:46:47.594000+00:00
['Personal Development', 'Books', 'Self Improvement', 'Two Minute Takeaway', 'Authors']
Could Nim Replace Python?
Compiled Executables A common theme with Python is requiring Python in order to run Python, and this includes an application's dependencies. This is problematic because it means that Python applications need to be packaged in one way or another with said dependencies. On top of that, it's very likely that virtual environments will be frequented. While this isn't terrible, and to confess most statistical languages do exactly the same, Nim does this significantly better by producing an executable packaged with the dependencies needed to run it. This not only makes managing dependencies from system to system a breeze, but also makes deployment EASIER than Py (see what I did there?) These compiled executables are compatible universally not only across the Unix-like systems — Linux, Mac, and the Berkeley Software Distribution — but also the Windows NT kernel. Compiled executables take care of dependency issues and make it incredibly easy to publish an application, or even deploy an API, with a simple “.” or “source” command. Universal Nim has a serious advantage over Python in that Nim is capable of being compiled not only to C, but also to C++, and more excitingly: JavaScript. This means that not only does Nim have the potential to fill Python's role as the scripting language that runs the data-based back-ends of the web, but Nim can also be used on the front-end, similarly to JavaScript. This is a huge benefit over Python. While Python is certainly great for deploying endpoints, and often does the job fine, having single-language fluidity across the board certainly has its advantages! Features Nim's code-base is primarily structured on the functional paradigm. This means that Nim can be a very expressive language, and furthermore can easily implement far more cool features than Python can. One of these is one of my favorite features of all time, first implemented in programming back in 1958 with the release of Lisp: macros.
(I will never understand why this is Lisp's mascot, src = Common Lisp) Macros and meta-programming have been around for nearly as long as computing itself, and can be very useful, especially on the grounds of machine learning. Speed It's no secret that as scale goes up, using Python for everything can be very problematic. This is because many training algorithms utilize a recursive Cost or Loss function that is intensive to run in any language. There are lots of languages and ideas intended to counteract this, such as Julia, Python In Python (that's a rabbit hole in and of itself), and, more successfully, Cython. These solutions, however, come with their own problems as well. Julia, which is, in fact, my favorite language, and likely the most apt replacement for Python, does not have anywhere near the ecosystem that Python flaunts. Though there is PyCall.jl, performance when using it typically dips below that of Python, and in that case, why not just use Python? Python In Python is an interesting concept, but has yet to see great implementations, as the concept itself is quite complex. Even worse, Python In Python is much more difficult to implement than a solution like Julia or Nim. As for Cython, contrary to popular belief, Cython does not work universally, and relying on it probably isn't a good idea (been there, done that.) Nim has the advantage of being faster than Python. For scripting, Nim's added speed could certainly change the way that system maintenance and various scripts are run. Nim might not be as fast as Julia or C, but with the simple similarity to both Python and Bash that it boasts, it could certainly be a lot easier.
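The speed claim can be made concrete with a short Python sketch (the function and input size here are illustrative, not from the article): a naive recursive function — the shape of the "recursive Cost or Loss function" mentioned above — re-solves the same subproblems exponentially often, which is exactly the workload where a compiled language like Nim pulls far ahead of interpreted Python. Memoization helps within Python, but doesn't remove the interpreter overhead itself.

```python
import time
from functools import lru_cache

def naive_fib(n):
    """Naive recursion: exponentially many repeated calls -- the kind of
    workload where Python's interpreter overhead dominates."""
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

@lru_cache(maxsize=None)
def memo_fib(n):
    """Memoized version: each distinct subproblem is computed only once."""
    if n < 2:
        return n
    return memo_fib(n - 1) + memo_fib(n - 2)

start = time.perf_counter()
naive_fib(28)
naive_time = time.perf_counter() - start

start = time.perf_counter()
memo_fib(28)
memo_time = time.perf_counter() - start

print(f"naive: {naive_time:.4f}s, memoized: {memo_time:.6f}s")
```

The same naive code compiled to native machine code (as Nim does via C) typically runs an order of magnitude or more faster than the interpreted version, which is the crux of the article's speed argument.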
https://towardsdatascience.com/could-nim-replace-python-547145afcfd5
['Emmett Boudreau']
2020-02-08 04:54:57.568000+00:00
['Machine Learning', 'Python', 'Nim', 'Data Science', 'Programming']
An Update on Our Weekly Specials
An Update on Our Weekly Specials Dealing with depression and finding deals
https://medium.com/spiralbound/an-update-on-our-weekly-specials-f30f33774fc6
['Jef Harmatz']
2019-04-29 16:52:18.478000+00:00
['Cooking', 'Depression', 'Mental Health', 'Comics', 'Food']
What makes one mature?
Am I mature? Portrait by great photographer Becky Rui at Happy Start Up Summer Camp 2019 What makes one mature? Trying to answer a personal question from a friend. Suddenly a young friend from Iran sends me this question out of the blue, on Facebook: “How did you learn to be mature, or act as a mature person?” I wonder about this question. Why me? She is a very happy, playful person and so am I. We've met only a few times, and play was always involved, like through Switchball, a sport I invented with a friend. So why me? And what is maturity anyway? So, if I'm mature, what makes me so? Let's answer this from a playful scope, because that, I feel, is our connection. I'm much older, but have kept my playful side alive. How does that make me mature? And no, being playful does not mean being immature or silly. First of all, I think, time helps one become mature naturally, if you just pay a little attention. We're, at the least, like rocks in a sea or desert; storms will happen and shape us over time. And thus it helps, each time friction happens, to just observe and reflect on how you react to it. Then, even subconsciously, learning will happen. But to stay playful at every age we have to add a layer. Seemingly strong people have strong reactions. “Never ever will someone hurt me like this again!” And then their strategy is all about avoiding, defending, preparing. It seems strong, but to me it feels poor if your life is aimed at that one thing. It reminds me of spoiled kids, who will defend their conviction or entitlement to a tee. So you lost a love, got stolen from, made a mistake. It hurts. Yet to live you have to try again, ready to be hurt again. And I don't mean being naive. Having failed 1001 times on stage, I learned I don't die from it. I tried and tried again. Now I am quite fearless on stage. A poem made famous by a very mature man: Nelson Mandela. There's this thing that I wonder about.
Some get shouted at on the street once and from then on always fear leaving the house. Others suffer years of racism, poverty, and prison and still keep their dignity and humanity alive. I have a dear friend who has suffered a lot of violence in her youth. Rather than being bitter, she's amazingly willing to learn, overcome, and even heal others as a therapist. Maturity therefore means taking the lesson and using it to shape yourself, rather than being shaped by the experience as a victim. Which makes me think of the movie Invictus about Nelson Mandela. Now there's maturity of a next level. I had the luck that I could keep playing. At Christmas I'd even rather join my nephews and nieces in play than join the adult conversations. As long as they enjoy me as a participant in their games, I think, I have kept an inner flexibility. I strongly believe people who are playful are more open. They take more in, and are more willing to be shaped by it. They are more willing to try again, being forgiving. Being playful, I interact, and am more willing to try out new ideas, more open to listen, less defensive. What is there to defend, really? What is there to defend? What is truly there can't be defended, only experienced. Take Donald Trump, an example most of us know. The more he defends his ‘genius’, the more most of us think ‘what a clown’. We have no control over what others think of us. I have no control over how you react to this text. I'm already happy you made it to here. ;) That last sentence just popped up and I'm willing to give it a try, rather than control every aspect of this text. The idea that some might think, ‘ha, smart manipulation’, fills me with a little sadness. I don't try to control everything. Hence I can be playful. So, I think mature people stay open, are willing to listen to criticism, and will consider other people's opinions (that doesn't mean having no strong opinions).
During my recent adventures as a social media commenter on YouTube I too have behaved spoiled, opinionated and silly. I wondered how it worked and what it might lead to, so I gave it a try. All I can say is that I observed my behavior and I was amazed how easily I fell into all the traps. This led me to write four different blog posts: one on social media commenting, one on politics beyond left and right, and one on how modern journalism is warped by default. So the maturity in this is certainly not my online behavior, but my willingness to observe it, learn from it and use it. In everything we do, big questions are hidden and lessons are available. If you dare play with that, many wonders will happen. Floris *) Thank you Emy, for asking this question.
https://medium.com/the-gentle-revolution/what-makes-one-mature-891fc174ac50
['Floris Koot']
2020-02-16 23:40:24.288000+00:00
['Philosophy', 'Floris Koot', 'Education', 'Psychology', 'Maturity']
React JS: Passing Props
Passing a String as a Prop App.jsx Parent Component http://localhost:3000 Browser App.js is the parent component. It currently has no state declared and, even if it did, no child component to pass it to. Let's see what passing down a prop looks like from a parent component to its child. Passing A Prop App.js Parent Component Now App.js has some state defined with a key of name set to the value of a string — "Chris Kakos" . On line 17, the parent is passing a prop — which is appropriately called name — down to its child component and setting the value of it to the current state — this.state.name — inside of JSX tags. Receiving A Prop ChildComponent.js Child Component ChildComponent.js is a functional component written in ES6 syntax. It is important to understand that props are sent down from the parent as an object. When a child component receives props, it can access them by using the identifier appropriately named props . Therefore, any prop passed down can be accessed with dot notation, starting with props followed by the name of the prop you wish to access — in this case name — as demonstrated on line 11. http://localhost:3000 Browser Passing a Function as a Prop App.js A more intermediate example would be to pass down some state as well as functionality that modifies state. First thing is to declare a piece of state to modify. In this case, I decided to create a counter that increments by 1. On Line 11, I set a key named count to a value of 0 . Next I have to create the functionality that is going to modify the current value of count . On Line 15, I wrote an ES6 arrow function. I start by making a copy of the current state of count — it is not recommended to directly modify state. I then set the state to increment by 1 every time increment() is invoked.
Now that the functionality has been built, I use some ES6 to demonstrate the destructuring of two variables — Line 23 — and pass the current state of them, as well as my newly built functionality, into the ChildComponent as props. ChildComponent.js Child Component The props are received the same as in the previous example, only this time I need a way to invoke increment() in order to modify the current state of count . To achieve this — on Line 19 — I added a button with an onClick Event Handler that will invoke increment() each time the button is clicked, ultimately modifying the current state of count . http://localhost:3000 Browser *CLICK* Browser *CLICK* Browser *CLICK* Conclusion Props play a vital role in making code more modular. This was a quick demonstration to illustrate how to get started. There are far more complex ways to use props that really optimize their functionality. I hope this has been a good starting point. For further information and examples, look no further than the official React documentation. Onward.
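Since the screenshots of App.js and ChildComponent.js don't survive in this text, here is a minimal plain-JavaScript sketch of the mechanism (no React runtime; the returned string stands in for JSX, and the names and values mirror the article's examples):

```javascript
// A "functional component" is just a function that receives a single
// props object from its parent. In real React it would return JSX;
// here it returns a string for clarity.
function ChildComponent(props) {
  // Props are accessed with dot notation: props.name, props.count
  return `Hello, ${props.name}! Count: ${props.count}`;
}

// The "parent" holds state and passes pieces of it down as props,
// along with a function that modifies that state.
const parentState = { name: "Chris Kakos", count: 0 };
function increment() {
  // In real React this would be setState / a state updater, never a
  // direct mutation -- this sketch just illustrates the data flow.
  parentState.count = parentState.count + 1;
}

increment(); // e.g. the child's button was clicked once
const rendered = ChildComponent({
  name: parentState.name,
  count: parentState.count,
  increment: increment, // functions can be passed down as props too
});
console.log(rendered); // → Hello, Chris Kakos! Count: 1
```

The key idea carries over directly: the child never owns the state; it only receives values (and state-modifying functions) through the props object.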
https://medium.com/swlh/react-js-passing-props-a65bb5200891
['Chris Kakos']
2020-09-11 19:53:45.463000+00:00
['JavaScript', 'Web Development', 'Software Development', 'UI', 'React']
The U.S. Is Not Declining and 2020 Is Not the Worst Year Ever
Protestors in Downtown Los Angeles | Photo by Mike Von on Unsplash The U.S. Is Not Declining and 2020 Is Not the Worst Year Ever The history of declinism and science of nostalgic preferences It has become a fashionable sentiment that the United States is en route to its demise as a global power. In his very recent geopolitical book, Prisoners of Geography, journalist Tim Marshall challenges that viewpoint. “The planet’s most successful country,” Marshall writes, “is about to become self-sufficient in energy, it remains the pre-eminent economic power and it spends more on research and development for its military than the overall military budget of all the other NATO countries combined.” Marshall adds that even in the new age of “cheapening of political dialogue” and “populist leaders” (e.g. the arrival of Donald Trump in office), the U.S. is ultimately behaving how it’s always behaved. It’s prioritizing maintaining its place as number 1 on the global stage via diplomacy with Western Europe and treaties with countries (like Taiwan) to halt the progress of up-and-coming competitors such as China and Russia. But the speculation is more or less this: The U.S. is preparing to self-detonate, implode on itself, collapse from within. With nearly 4 years of Trump’s nasty, polarizing rhetoric, videos of police militancy/brutality like scenes from dystopian horror films (or perhaps Xinjiang), and the resurgence of hate groups said to be working for the president himself—is it possible that the U.S. is turning to an authoritarian model to more urgently pursue its agenda on the international level? Certainly. Yet, what if all this doomsday-provoking peril is merely more growing pains of a country (even if unconsciously) trying to outrun its discriminatory and tyrannical imperfections of the past? Present Misfortunes, Past Misfortunes Historically speaking, the U.S. has been through a myriad of rounds of autocratic abuse. Manzanar and other camps were built on U.S.
soil and housed thousands of Japanese Americans. Hundreds of Americans were subject to aggressive investigations (and some were imprisoned) during the McCarthy Era and Red Scare. Revolutionary voices were silenced and repressed during the Vietnam-counterculture/desegregation era. All of these much-discussed historical injustices included much of what we’re seeing today: police surveillance/brutality, plenty of racism, and lots of ideological resistance to progressivism. Are today’s events mere extensions of these abuses? Or have we learned and are we continuing to learn? We can certainly say, as Ginsberg wrote in his 1956 poem “America,” that America's libraries are “full of tears.” New tears are wetting our cheeks in the face of yet more social injustices. Perhaps humans are diabolical fools incapable of reading and learning from history. Perhaps those in power would rather not learn from history. They’d rather jostle us around and trouble us over issues that should already be resolved. Instead, they continue to increase the class gap and store us smug in our polarized, political-party boxes—or so many are saying. Add the global pandemic to the mix and it’s no wonder we’re being engulfed in memes promoting 2020 as the most annoying, downright worst year of them all. I bet, however, if the world had access to meme-generating technology during the 1918 Spanish Flu, which killed nearly 1% of the world’s population, we’d look back on those memes as somewhat overdramatic, at least over calling 2020 “the worst.” Pandemics and epidemics aside, if you’re reading this, you’re likely living in a somewhat stable environment (unlike much of the world). 2020 may be the worst year ever for refugees fleeing war-torn nations, for people starving in Venezuela, and the over-700 million people worldwide without access to clean water—but it’s not so bad for us who are missing our nights out at the bar and for us who are canceling vacation reservations.
This isn’t meant to downplay the mental health crisis now exacerbated by the pandemic. It’s very real and what is “better” and “worse” is subjective; the inner worlds of individual humans can be universes away from each other. But, overall, catastrophizing the present (which we’re arguably doing) is not a new phenomenon. Declinism and Our Nostalgic Preference for the Past Tending toward declinism, or the belief that a society is sliding toward its ultimate decline, is nothing new. The Middle Ages, for example, later dubbed the “Dark Ages” by ecclesiastical historian Caesar Baronius, marked a time of “intellectual darkness” between the fall of Rome and the Renaissance. During this time, war, pillage, and disease were extremely rampant, and the “dark” in “Dark Ages” seemed to go beyond denoting a decline in access to knowledge. In turn, many people during medieval times (and even later during the Renaissance) thought the world was ending. There were various, widespread apocalyptic predictions. According to Jason Boyett’s book Pocket Guide to the Apocalypse, early French bishop Hilary of Poitiers predicted the world would end in A.D. 365. Boyett’s book goes on to describe the once-extensive belief that the end-of-times felt near during the Black Death, the deadliest pandemic recorded in human history. Present times always seem like end-times because humans appear to have evolved with a memory bias, creating nostalgia for the past and suspicion of the present. There’s even a scientific basis to this, a basis that researchers at Carnegie Mellon University describe as “nostalgic preferences.” They write: “People believe everything from the general state of their country to the quality of their television programming has declined from its past zenith.” Following this logic explains why we catastrophize amid the throes of adversity even when such catastrophe was equally or even more prevalent in the past.
It explains why we often remain skeptical of notions of “the end,” whether that be the end of civilization, the end of the world, or the end of enjoyment in 2020.
https://medium.com/discourse/the-u-s-is-not-declining-and-2020-is-not-the-worst-year-ever-ad8544c765b0
['Jacob Lopez']
2020-09-07 18:46:05.430000+00:00
['2020', 'History', 'Philosophy', 'Politics', 'Science']
Asking Deeper Questions
Top of the morning to you, lovely people! I don’t know about you, but I adore questions. I love asking them, not so much answering them. I like the infinite possibilities that open up for me to reflect upon that don’t require a definitive answer. This Thursday’s story is all about questions. Can we expect to obtain genuine insight into the mysteries of life? That is up to each and every one of you to decide. All I can say is: Be still and listen!
https://medium.com/know-thyself-heal-thyself/asking-deeper-questions-a32c5aff56f2
['𝘋𝘪𝘢𝘯𝘢 𝘊.']
2020-12-03 16:09:59.543000+00:00
['Humor', 'Energy', 'Short Story', 'Storytelling', 'Life Lessons']
Black Mirror Isn’t as Dark as it Used to Be, and That’s Okay
About a year has passed since the fifth season of Black Mirror came out, and some of the fans have been a little disappointed. There were plenty of complaints, from the reduced episode count to the re-use of already-explored technologies, but one critique rings out above the rest: how lighthearted this season has been. There were very few stomach-churning twists, very little existential dread. Two of the three episodes ended on what most can agree were positive notes. The general understanding is that Black Mirror, the horror/sci-fi anthology series, has finally gone soft. But that’s not necessarily a bad thing. [Caution: mild spoilers for multiple episodes below.] The first thing I want to clarify is that yes, I get it: the darkness was a huge part of what made the show so great in the beginning. The show made its name off of disturbing, cynical episodes like White Christmas, Shut Up and Dance, and White Bear. But the bleakness has never been the core of the show, and as of season five, that core has still never been lost. (No, not even in the Miley episode.) Black Mirror is, at its core, a show about the way technology enhances or exposes the aspects of humanity that were already there. That’s all. Every episode has the same basic formula: there’s some sort of technology that has obvious upsides, but humans — being humans — decide to use that technology in unexpected ways. These ways don’t have to be bad and they don’t have to be good; they just need to be compelling. And in that regard, season five stands up just as well as the rest of the show. Credit: Netflix Striking Vipers, for instance, takes the concept of hyper-realistic VR gaming, something that seems very much plausible in the near future, and uses it to explore complicated human themes of sexuality, love, and infidelity. Not only that, but it does this in a way that has never been done before. When was the last time you saw a movie with a love triangle like this?
When was the last time a show made you grapple with the sort of questions this episode made you grapple with? Just like White Bear and Fifteen Million Merits, I could talk about the implications of Striking Vipers for hours on end. Meanwhile, I can’t help but wonder if Ashley O’s arc in Rachel, Jack, and Ashley Too was intended as a direct commentary on fans’ expectations for the show itself. Ashley wants to try out new music, but she’s been pigeonholed by her fans to a specific genre. When she finally gets to branch out and sing other stuff, some of her “biggest fans” (remember those girls who cried at the news that she was in a coma?) are appalled and storm out of the concert. Credit: Netflix Whenever I see people complain that the new season just doesn’t feel like Black Mirror, I’m reminded a little bit of those teenagers who cried over Ashley O’s coma, only to be appalled when she starts singing what she wants to sing. Black Mirror may have made a name for itself by taking stories in the darkest direction possible, but I doubt the writers want to keep going for that same type of tone over and over again. They want to throw in a little more optimism, tell different types of stories, but of course this alienates a lot of the fans who loved them for their darker moments. But that’s okay, because the writers are getting to tell the stories they want to tell. They aren’t catering their writing to appease their fans. I remember when San Junipero came out — the first episode to give viewers any hope for the future — and talking to a handful of fans who wished the story had gone in a more typically Black Mirror direction. “I wish Kelly had chosen to die naturally,” one person had said, “and then the episode could’ve ended with Yorkie alone in the simulation, waiting forever for Kelly to come back.” Someone else had an even darker idea: that Kelly would want to pass over, but would die unexpectedly before she got the chance. 
Yes, both of those endings would’ve been heartbreaking. They both would have given the viewers that same sinking stomach feeling that so many of the best episodes produced. But what would be the point? What would a tragic ending for San Junipero say that hasn’t already been said in the ten episodes that preceded it? Fifteen Million Merits used its cynical ending to make a statement about how genuine rebellion can be repackaged into something that defeats its own message. White Bear used its cynical ending to make a statement about outrage culture and retributive justice. What profound statement would the writers be making by refusing Kelly and Yorkie their happy ending? A dark ending would not have worked for San Junipero. Worse: a dark ending would’ve been boring, predictable. It would’ve been lazily repeating that same pattern the audience had come to expect. So much of what made this episode great was the contrast with what came before it: after ten episodes in a row of everything going wrong, of watching our protagonists get all their humanity drained out of them, it can’t be overstated just how powerful it was when things finally went well. For the show to finally say, “you know what? Maybe things won’t be so bad.” Credit: Netflix One of the best things about the recent lighter episodes of Black Mirror is how they’ve widened the range of where each story could go. By the time we got to season three, we’d been trained to expect the absolute worst. We went into every episode knowing we’d be subjecting ourselves to another hour’s worth of stress and pain. San Junipero taught us that wouldn’t always be the case, and now after finishing season five, it’s clear that the possibilities are endless in future episodes. The first episode of season six could end on a dark note, a happy note, or anywhere in between. Isn’t it more exciting to have no idea what to expect?
I think a lot of us need to deal with the idea that a story being bleak and cynical does not necessarily equal a story being deep. Nowadays, we have a tendency to think that happy endings are easier to write; that by nature they can’t be as memorable or as compelling as an ending that’s dark. We have a tendency to believe that negativity is synonymous with wisdom, and that being optimistic about anything is the same as being naive. This is such an unproductive, self-defeating attitude for people to have, and it’s part of why I’m not upset that Black Mirror seems to have pushed back on it, even if just a little. Because in the year 2020, when authoritarianism seems to be spreading across the globe, when the climate crisis is accelerating at a faster rate than ever, when there’s a massive pandemic and the economy is in shambles, how much value does cynicism really add?
https://medium.com/make-it-personal/black-mirror-isnt-as-dark-as-it-used-to-be-and-that-s-okay-e18cb18814c8
['Michael Boyle']
2020-08-06 04:35:40.854000+00:00
['Storytelling', 'Culture', 'Film', 'Television', 'Medium']
Farmer Recoups Additional $110,000 in Crop Losses Thanks to Drone Imagery
Farmer Recoups Additional $110,000 in Crop Losses Thanks to Drone Imagery Silicon Falcon Micro Aviation uses DroneDeploy to help farmer increase crop insurance claim It was a terribly wet July in Western Kentucky, and for the area’s tobacco farmers, this spelled trouble. Heavy rains, such as the ones seen this summer, can decimate a tobacco field, leaving a farmer with huge losses to bear. As a result, most carry extensive insurance on their crops. The standard in the industry is for crop adjusters to survey damage manually, walking only select sections of a field and taking pictures of damage. Doing any more than this manually would require far too much time. Based on the information gathered, the adjuster must extrapolate to estimate the damage across the entire field. Given the limited amount of data they have to work with, even the most seasoned adjuster can sometimes come up with an estimate that is far below the actual losses a farmer has suffered. As is illustrated in the following case study, brought to us by Gregg Heath of Silicon Falcon Micro Aviation, farmers and crop adjusters are increasingly turning to the commercial drone industry to help recover a fair percentage for lost crops. UAVs a Natural Step for Retired Pilot Starting a commercial drone business seemed like a natural next step for Gregg Heath, who spent most of his career working as a professional pilot. His business partner is also a pilot, as well as a retired state trooper who uses drones for accident reconstruction. Together they started Silicon Falcon in January of this year. Being around aviation for most of their professional lives, they were both somewhat familiar with trends in the drone industry. But it wasn’t until they went into business for themselves that they began to realize just how indispensable commercial drones are becoming for a wide variety of industries. Heath has spent the last year growing his business and refining his skills. 
If he has just one piece of advice for someone starting out in the commercial drone industry, it is this: Find a testing ground where you can practice flying your drone in a variety of scenarios. You will enhance your skills and also develop a great portfolio of maps to show to prospective clients. For his part, Heath had an existing relationship with a local farmer (he taught him how to fly a plane several years ago.) During the last growing season, he used the farmer’s fields as his testing ground, flying them at various stages of growth. “Testing your drone in real scenarios is relatively quick and doesn’t cost anything,” says Heath. “It gives you great experience and the chance to play around with different settings in a real-world environment.” As it turned out, this testing ground led to a turning point in Silicon Falcon’s business. Prior to this July, Heath and his partner had primarily used their drones to monitor crop health for farmers near their home base in western Kentucky. But when the farmer who owned the test fields suffered huge tobacco crop losses during the summer rains, he called on the pair to help with his crop insurance claim. The map that resulted from the process, and the insurance savings it brought the farmer, has quickly become news in the community. As a result, Silicon Falcon expects that helping farmers recoup the cost of lost crops will become a larger part of their business. DroneDeploy Tailor Made for Crop Surveying The losses Heath’s client faced were devastating. Nearly 100 acres of tobacco crop was flooded, rendering it unviable. But, after visiting the damaged fields, the crop adjuster offered the farmer a 34% loss. The farmer had assumed the damage was closer to 50%. Faced with the possibility of swallowing a huge margin of loss, he called on Silicon Falcon to produce a map that would provide a much more comprehensive picture of the damage. “DroneDeploy is really tailor made for when you are surveying crops,” says Heath. 
[click to tweet] Using the RGB4K camera and a near-Infrared sensor, Heath flew the field with his Phantom 3 Pro. After about an hour of uploading, and three more hours during which DroneDeploy processed the images, he was able to produce a crop health map, providing real-time data only a day after the insurance adjuster’s visit. Heath used the plant health algorithms and histogram within DroneDeploy to help estimate the crop loss Aerial Map Finds Damage Not Seen By Insurance Adjuster The Silicon Falcon team Using the area tool on an orthomosaic map, the Silicon Falcon team gathered a rough idea of the crop damage, drawing lines around the obvious areas of bare ground. They then applied the histogram scale to highlight variability within the field and get a better idea of the damaged areas. Ultimately, they came up with a crop loss of almost 50%, compared to the 34% offered by the insurance adjuster. Armed with a 3-foot by 3-foot orthomosaic map, the farmer convinced the insurance adjuster to revisit the site. Using the annotated map as a reference point, the adjuster ground-truthed a targeted section of the field and, based on the new information he gathered, offered the farmer a 47% loss. This amounted to an additional $1,100 per acre above the original claim amount. For the 100 acre field, this meant the farmer recouped $110,000 more than he had initially been offered. A farmer knows his land. When a major loss occurs, he is probably the first to know just how much damage has been done. But proving that damage is difficult because crop adjusters can only spend time walking a small portion of a field. An aerial survey of crop damage gives farmers and adjusters real-time, comprehensive evidence to support insurance claims. An annotated map produced in DroneDeploy can be used as a reference point for an adjuster to conduct targeted ground-truthing. The result is a far more accurate picture of a farmer’s losses, and hopefully, a fair insurance settlement.
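The settlement arithmetic in this story is simple enough to sketch in Python (the figures are from the case study; the helper function itself is illustrative, not part of any DroneDeploy workflow):

```python
def extra_settlement(acres, per_acre_increase):
    """Additional payout from a revised loss estimate, in dollars."""
    return acres * per_acre_increase

# From the case study: the revised 47% loss estimate (vs. the adjuster's
# original 34%) was worth an extra $1,100 per acre on a 100-acre field.
acres = 100
per_acre_increase = 1_100
extra = extra_settlement(acres, per_acre_increase)
print(f"Additional recovery: ${extra:,}")  # → Additional recovery: $110,000
```

A 13-percentage-point revision sounds modest until it is multiplied across a whole field, which is why a comprehensive aerial map can be worth far more than its cost to produce.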
https://medium.com/aerial-acuity/farmer-recoups-additional-110-000-in-crop-losses-thanks-to-drone-imagery-7ac922bbcaf2
[]
2016-11-18 21:11:13.181000+00:00
['Insurance', 'Mapping', 'Agriculture', 'Drones']
Decision Tree from Scratch in Python
Decision trees are among the most powerful Machine Learning tools available today and are used in a wide variety of real-world applications, from Ad click predictions at Facebook¹ to Ranking of Airbnb experiences. Yet they are intuitive, easy to interpret — and easy to implement. In this article we’ll train our own decision tree classifier in just 66 lines of Python code. Let’s build this!

What is a decision tree?

Decision trees can be used for regression (continuous real-valued output, e.g. predicting the price of a house) or classification (categorical output, e.g. predicting email spam vs. no spam), but here we will focus on classification. A decision tree classifier is a binary tree where predictions are made by traversing the tree from root to leaf — at each node, we go left if a feature is less than a threshold, right otherwise. Finally, each leaf is associated with a class, which is the output of the predictor.

For example, consider this Wireless Indoor Localization Dataset.² It gives 7 features representing the strength of 7 Wi-Fi signals perceived by a phone in an apartment, along with the indoor location of the phone, which can be Room 1, 2, 3 or 4.

+-------+-------+-------+-------+-------+-------+-------+------+
| Wifi1 | Wifi2 | Wifi3 | Wifi4 | Wifi5 | Wifi6 | Wifi7 | Room |
+-------+-------+-------+-------+-------+-------+-------+------+
|   -64 |   -55 |   -63 |   -66 |   -76 |   -88 |   -83 |    1 |
|   -49 |   -52 |   -57 |   -54 |   -59 |   -85 |   -88 |    3 |
|   -36 |   -60 |   -53 |   -36 |   -63 |   -70 |   -77 |    2 |
|   -61 |   -56 |   -55 |   -63 |   -52 |   -84 |   -87 |    4 |
|   -36 |   -61 |   -57 |   -27 |   -71 |   -73 |   -70 |    2 |
...

The goal is to predict which room the phone is located in based on the strength of Wi-Fi signals 1 to 7. A trained decision tree of depth 2 could look like this:

Trained decision tree. Predictions are performed by traversing the tree from root to leaf and going left when the condition is true.
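The trained tree itself appears as an image in the original article and is not reproduced here, but the root-to-leaf traversal it describes is easy to sketch. In this sketch the thresholds are made up for illustration (a real tree learns them from the Wi-Fi data); only the traversal logic matters:

```python
# Each internal node is (feature_index, threshold, left, right); a leaf is
# just a class label. The splits below are hypothetical, not the article's.

def predict_one(node, x):
    """Walk from root to leaf, going left whenever x[feature] < threshold."""
    while isinstance(node, tuple):
        feature, threshold, left, right = node
        node = left if x[feature] < threshold else right
    return node  # the leaf's class

tree = (0, -55,              # root: split on Wifi1
        (4, -45, 4, 1),      # left child: split on Wifi5 -> Room 4 or Room 1
        (4, -60, 3, 2))      # right child: split on Wifi5 -> Room 3 or Room 2

# Wifi1 = -60 (go left at the root), Wifi5 = -50 (go left again) -> Room 4.
sample = [-60, -55, -63, -66, -50, -88, -83]
print(predict_one(tree, sample))  # 4
```

With these hypothetical splits the sample from the article (Wifi 1 at -60, Wifi 5 at -50) lands in the Room 4 leaf after exactly two comparisons, which is all a depth-2 tree ever does at prediction time.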
For example, if Wifi 1 strength is -60 and Wifi 5 strength is -50, we would predict the phone is located in room 4.

Gini impurity

Before we dive into the code, let’s define the metric used throughout the algorithm. Decision trees use the concept of Gini impurity to describe how homogeneous or “pure” a node is. A node is pure (G = 0) if all its samples belong to the same class, while a node with many samples from many different classes will have a Gini closer to 1. More formally, the Gini impurity of n training samples split across k classes is defined as

G = 1 - sum_k p[k]^2

where p[k] is the fraction of samples belonging to class k. For example, if a node contains five samples, with two of class Room 1, two of class Room 2, one of class Room 3 and none of class Room 4, then

G = 1 - ((2/5)^2 + (2/5)^2 + (1/5)^2 + 0^2) = 1 - 9/25 = 0.64

CART algorithm

The training algorithm is a recursive algorithm called CART, short for Classification And Regression Trees.³ Each node is split so that the Gini impurity of the children (more specifically the average of the Gini of the children weighted by their size) is minimized. The recursion stops when the maximum depth, a hyperparameter, is reached, or when no split can lead to two children purer than their parent. Other hyperparameters can control this stopping criterion (crucial in practice to avoid overfitting), but we won’t cover them here.

For example, if X = [[1.5], [1.7], [2.3], [2.7], [2.7]] and y = [1, 1, 2, 2, 3] then an optimal split is feature_0 < 2, because as computed above the Gini of the parent is 0.64, while after the split the left child [1, 1] is pure (G = 0) and the right child [2, 2, 3] has G = 1 - ((2/3)^2 + (1/3)^2) = 4/9, giving a weighted average of (2 * 0 + 3 * 4/9) / 5 = 4/15 ≈ 0.27. You can convince yourself that no other split yields a lower Gini.

Finding the optimal feature and threshold

The key to the CART algorithm is finding the optimal feature and threshold such that the Gini impurity is minimized. To do so, we try all possible splits and compute the resulting Gini impurities. But how can we try all possible thresholds for a continuous feature?
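As a quick sanity check, the impurity definition and the worked example above translate directly into a few lines of Python (this helper mirrors the formula; it is not the article's exact code):

```python
from collections import Counter

def gini(y):
    """Gini impurity: 1 - sum over classes k of p[k]^2."""
    m = len(y)
    return 1.0 - sum((count / m) ** 2 for count in Counter(y).values())

# Five samples: two of Room 1, two of Room 2, one of Room 3, none of Room 4.
print(round(gini([1, 1, 2, 2, 3]), 2))  # 0.64

# The split feature_0 < 2 from the CART example: left = [1, 1], right = [2, 2, 3].
left, right = [1, 1], [2, 2, 3]
weighted = (len(left) * gini(left) + len(right) * gini(right)) / 5
print(round(weighted, 2))  # 0.27
```

Running this reproduces the parent impurity of 0.64 and the weighted child impurity of 4/15 ≈ 0.27 claimed in the example.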
There is a simple trick — sort the values for a given feature, and consider all midpoints between two adjacent values. Sorting is costly, but it is needed anyway as we will see shortly.

Now, how might we compute the Gini of all possible splits? The first solution is to actually perform each split and compute the resulting Gini. Unfortunately this is slow, since we would need to look at all the samples to partition them into left and right. More precisely, it would be n splits with O(n) operations for each split, making the overall operation O(n²). A faster approach is to:

1. iterate through the sorted feature values as possible thresholds,
2. keep track of the number of samples per class on the left and on the right, and
3. increment/decrement them by 1 after each threshold.

From these counts we can easily compute the Gini in constant time. Indeed, if m is the size of the node and m[k] the number of samples of class k in the node, then

G = 1 - sum_k (m[k] / m)^2

and since after seeing the i-th threshold there are i elements on the left and m - i on the right,

G_left = 1 - sum_k (left[k] / i)^2 and G_right = 1 - sum_k (right[k] / (m - i))^2

where left[k] and right[k] count the samples of class k on each side. The resulting Gini is a simple weighted average:

G = (i * G_left + (m - i) * G_right) / m

Here is the entire _best_split method. The condition on line 61 is the last subtlety. By looping through all feature values, we allow splits on samples that have the same value. In reality we can only split them if they have a distinct value for that feature, hence the additional check.

Recursion

The hard part is done! Now all we have to do is split each node recursively until the maximum depth is reached. But first let’s define a Node class:

Fitting a decision tree to data X and targets y is done via the fit() method, which calls a recursive method _grow_tree():

Predictions

We have seen how to fit a decision tree, now how can we use it to predict classes for unseen data? It could not be easier — go left if the feature value is below the threshold, go right otherwise.

Train the model
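The embedded code snippets (the _best_split method, the Node class, and fit()/_grow_tree()) do not survive in this text-only copy of the article. Below is a self-contained sketch reconstructed from the description above; the function names and exact structure are mine, not necessarily the author's 66-line original:

```python
class Node:
    """Internal nodes hold a (feature, threshold) split; leaves predict a class."""
    def __init__(self, predicted_class):
        self.predicted_class = predicted_class
        self.feature_index = None
        self.threshold = None
        self.left = None
        self.right = None

def best_split(X, y, n_classes):
    """Return the (feature, threshold) minimizing the weighted children Gini.

    Sweeps the sorted values of each feature, updating per-class counts
    incrementally so each candidate threshold costs O(k) instead of O(n).
    """
    m = len(y)
    if m <= 1:
        return None, None
    num_parent = [0] * n_classes
    for c in y:
        num_parent[c] += 1
    best_gini = 1.0 - sum((n / m) ** 2 for n in num_parent)
    best_feature, best_threshold = None, None
    for feature in range(len(X[0])):
        values, classes = zip(*sorted(zip((row[feature] for row in X), y)))
        num_left = [0] * n_classes
        num_right = num_parent.copy()
        for i in range(1, m):  # i samples on the left, m - i on the right
            c = classes[i - 1]
            num_left[c] += 1
            num_right[c] -= 1
            gini_left = 1.0 - sum((num_left[k] / i) ** 2 for k in range(n_classes))
            gini_right = 1.0 - sum((num_right[k] / (m - i)) ** 2 for k in range(n_classes))
            gini = (i * gini_left + (m - i) * gini_right) / m
            if values[i] == values[i - 1]:
                continue  # can only split between two distinct values
            if gini < best_gini:
                best_gini, best_feature = gini, feature
                best_threshold = (values[i] + values[i - 1]) / 2  # midpoint
    return best_feature, best_threshold

def grow_tree(X, y, n_classes, depth=0, max_depth=2):
    """Recursively split until max_depth or no impurity-reducing split exists."""
    counts = [sum(1 for c in y if c == k) for k in range(n_classes)]
    node = Node(predicted_class=max(range(n_classes), key=counts.__getitem__))
    if depth < max_depth:
        feature, threshold = best_split(X, y, n_classes)
        if feature is not None:
            node.feature_index, node.threshold = feature, threshold
            goes_left = [row[feature] < threshold for row in X]
            node.left = grow_tree([r for r, g in zip(X, goes_left) if g],
                                  [c for c, g in zip(y, goes_left) if g],
                                  n_classes, depth + 1, max_depth)
            node.right = grow_tree([r for r, g in zip(X, goes_left) if not g],
                                   [c for c, g in zip(y, goes_left) if not g],
                                   n_classes, depth + 1, max_depth)
    return node

def predict(node, x):
    """Go left if the feature value is below the threshold, right otherwise."""
    while node.left is not None:
        node = node.left if x[node.feature_index] < node.threshold else node.right
    return node.predicted_class

# The toy example from the CART section: the optimal first split is feature_0 < 2.
X = [[1.5], [1.7], [2.3], [2.7], [2.7]]
y = [1, 1, 2, 2, 3]
print(best_split(X, y, n_classes=4))  # (0, 2.0)
tree = grow_tree(X, y, n_classes=4)
print(predict(tree, [1.6]), predict(tree, [2.4]))
```

Note how the equal-values check (`values[i] == values[i - 1]`) comes after the counts are updated but before the split is considered, which is exactly the subtlety the article attributes to line 61 of its own implementation.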
https://towardsdatascience.com/decision-tree-from-scratch-in-python-46e99dfea775
['Joachim Valente']
2019-10-31 21:44:02.664000+00:00
['Machine Learning', 'Python', 'Scikit Learn', 'Decision Tree', 'Cart']
4 Tips to Improve Your Public Speaking Skills
4 Tips to Improve Your Public Speaking Skills

What I’ve Learned as a First-Time Webinar Speaker

I recently spoke at a webinar about the Mentorship Effect hosted by Correlation One, a data and analytics training program sponsored by leading employers. Although this is not my first time speaking in front of a crowd, I had never been invited to be a panelist before. The idea of being one of the speakers as a new grad sounded totally intimidating but extremely exciting. I have been attending many Data Science Meetup events, where the speakers share their success stories. Who would have thought that one day I would be invited to share my experience as well! I had always dreamed that one day I would be one of the speakers and inspire people, but I would never have thought this day would come so soon. Growing up, I was told by almost all of my teachers that I would be terrible at public speaking because I was too shy and never spoke up. I remember a couple of years ago, my boss even suggested I attend Toastmasters to improve my public speaking. I only realized last year that I, too, could be a good public speaker. When I was at Metis, I had to give a presentation for each project I had worked on. I remember the first feedback I received after my first presentation was that I needed to speak up. So, I decided that I had to improve my public speaking skills. In the end, my teachers and fellow students were impressed by how much I’d grown, and that meant a lot to me! No matter how bad you used to be, you can always expand your skillset. In this blog, I want to share some of the lessons I’ve learned that improved my public speaking skills.

Writing Your Speech

Once you’ve got a speech lined up, the next step is to prepare what you want to say. But where do you start? I started with a rough outline of the topics I wanted to cover.
Actually, it’s kinda the same way I would go about writing any of my blog posts. It’s not necessary to stick to your outline, but having an outline can keep you on track so that you don’t miss anything. When writing your speech, write it the way you would usually talk. If you want to draw the audience’s attention, feel free to add some small talk or humor. It’s helpful to look up some examples if you are not sure how to write an outline. After you’ve got your topics figured out, you could start thinking about how much time you want to spend on each topic or question. If this is a presentation, I try not to put more than three bullets per slide. The rule of thumb is to not list too many details on your slides, because what’s the point of listening to your speech when your audience can literally just read it from the slides!

Practice

Practice, practice, practice! Unless you’re the type who can talk about anything off the top of your head, you need to be prepared. Since English is my second language, practicing is even more important to me. I’m the type who tends to forget what I need to say under pressure. However, if you practice enough, I can assure you that this awkward situation will never happen! Review your PowerPoint. You could even time yourself. PowerPoint allows you to add an on-screen timer that helps you keep track of how much time you spend talking on each slide. This will help you make sure you’ll be able to cover all your points in a limited time. I don’t recommend making last-minute changes to your deck, because it can leave you feeling somewhat unprepared. You might even end up skipping a slide because you forgot what you wanted to say. If you are willing to go the extra mile, I recommend recording yourself. After listening to your own presentation, you can fine-tune your speech. If you are not convinced by what you said, how can you expect the audience to be? Besides, it’s a good way to keep track of your progress!
Dealing with Nerves

You are almost there!! Feeling nervous before a presentation is totally normal for a lot of people. Not gonna lie, I always have sweaty palms when I’m nervous. That’s totally ok! One way that really helps me stay calm during my presentation is not to think about the audience. You could pretend you are talking to yourself or your family. You could also find several dots/objects in the distance to focus on when you are speaking, so it may look like you are looking at the audience when you are not. This is something I’m going to have to improve on as well. But I do believe that the more I do it, the better I will be. And so will you!

Feedback

Congratulations! You did it!!! If you receive any negative feedback, don’t feel bad. If someone is willing to take the time to criticize you, you should take it as an opportunity to grow and be better. Remember, everything can always be improved as long as you put in the effort!

“I’ve learned that people will forget what you said, people will forget what you did, but people will never forget how you made them feel.” ― Maya Angelou

The biggest thing I was afraid of was that people would be judging me. It turns out people judge you way less than you judge yourself. You might think people will NEVER forget the embarrassing things you did or said, but most people are actually pretty forgetful. They might not remember exactly what you said, but they will remember whether you were confident. So don’t worry too much, just do it!

Final words

In the end, I wanted to say: don’t let where you are determine where you can be.

“If you work hard, stay focused and never give up, you will eventually get what you want in life.” ― Donald Miller

I feel incredibly grateful that Correlation One invited me to be one of the panelists. Also, I’d like to give a big shout-out to Correlation One for hosting the DS4A program.
If you are interested in Data Science, want to get connected to the hiring managers and recruiters 👥 from Fortune 500 companies, and have an industry mentor guiding you, don’t hesitate to APPLY! Thanks for reading! I hope you find this blog helpful.
https://towardsdatascience.com/what-ive-learned-as-a-first-time-webinar-speaker-f94419ce4729
['Kessie Zhang']
2020-11-22 19:23:41.611000+00:00
['Life Lessons', 'Self Improvement', 'Data Science', 'Women In Tech', 'Startup']
Why You Should Never Consent to a Coding Test in an Interview
Credit: Author Software engineering interviews nowadays often involve some kind of coding test or programming exercise and I think that’s a very Bad Thing. Here’s why. Lazy Tropes Asking software engineers to perform a particular task such as writing an algorithm to generate factorials (a very common one) or to sort a [singly|doubly] linked list can be easily memorised beforehand and offers no insight into a candidate’s skill other than their strength of rote memorisation. You may as well ask the ASCII code of the character ‘A’. The detailed solutions to many such exercises are widely available online in various reference materials and, in many cases, in books that contain both algorithmic and specific program language implementations to all of the common interview coding questions. Whilst working at one company I was talking with a colleague about the detailed interview process they were working through with a major hedge fund. Everything technical they asked he had carefully memorised from a widely available interview questions and answers book that a current employee had passed on as a source to all interview questions. Luckily, he was a skilled engineer, but had consented to go through this frankly monotonous and mundane exercise to secure the position. He shouldn’t have had to do that — not only was it a waste of his valuable time but it also gave the hiring company nothing in terms of ascertaining his ability. He left a year later, in any case, tired of their low technical bar for hiring and continual ineffective project management practices… Use of Memory The same reasoning goes for coding an algorithm in a specific programming language. No software engineer operating in the real world would write a section of code without either some kind of syntax checking aid (such as an editor’s built-in code completion), without referring to some technical documentation, or without just copying a pre-implemented solution where applicable. 
There’s no sense in reinventing the wheel. I would wager that much code running the world’s systems today originated as an answer on Stack Overflow. In all practicality, working with the syntax of a particular programming language comes from use and familiarity. Whilst an interviewer may think that testing a candidate on the syntactical nuances of a particular language is a gauge of their understanding, I, for example, can state categorically that although I have been using the C language for nigh on thirty years I still regularly get the syntax wrong. In fact, as my software engineering career has evolved and I have become more familiar with the languages of my own interest, I regularly get confused between syntactical nuances of say, C and C++, or Objective-C. This isn’t because I’m a terrible software engineer (though some may disagree…) but because there’s only so much knowledge you can hold in your head and have direct recall of at any one time. A good software engineer often doesn’t know the answer to a specific question off the top of their head, but will definitely know where to look to find the answer. Perhaps consider asking the best place to find such information as an interview question?

Common Tasks

Something I touched on briefly earlier on is the maxim of not reinventing the wheel. For example, if you’re working in C and need a serial port routine then you don’t have to rewrite one from scratch unless the situation specifically demands it. Perhaps you need a JSON parser, a very common requirement — unless you’re coding on a limited resource embedded board, on a satellite in geostationary orbit, or in Malbolge then perhaps you should just pull in a pre-written one from a library. Chances are it’s been in use for a long time, has been fully tested, and has detailed (and correct) documentation. Solid.
It’s unlikely in modern software engineering to come across a common task that hasn’t either already been automated in a pre-written library or whose implementation isn’t widely available in algorithmic form. If you’re like me, and in the game primarily because of the love of the subject, then you’ll probably be actively seeking out those roles where you’re implementing things that haven’t been implemented before. Seeking out strange, new worlds, new life, new civilisations, … In fact, the concept of software engineers in the far future has more than once been likened to code archeologists who primarily reuse existing code and spend relatively little time designing and coding new and novel algorithms.

Discussion Discussion Discussion

I do fully endorse finding out whether or not the person you’re interviewing ‘knows their stuff’, but using any of the above methods is, in my opinion, utterly worthless. Say it like it is. For example, a simple discussion on the coding paradigms used in modern software engineering, whether a particular language would be a good choice for a specific implementation, or whether or not a particular software engineering methodology (I’m looking at you, agile) is relevant are far more rewarding and pertinent subjects to discuss. Lead the discussion to highlight general areas, see what insight the candidate has into new problems and perhaps alternative novel methods to tackle older ones. How do they see things evolving? How would they start to address something? Keep it open ended, stay away from specifics and minutiae. And the key there is, discuss. Ascertaining value is not just about ticking boxes and it continually surprises me that many companies that are considered ‘up and coming’ and ‘leaders in their field’ still fall back on outdated, monotonous, and utterly predictable hiring practices that show little value in gauging actual technical acumen.
It’s often said that the interviewee should be interviewing the company just as much as the company is interviewing them. I’m fully behind this one. Being interviewed by someone with a list of precise technical questions is pretty much always a red flag, particularly when they don’t wish to prolong discussion on any one point. It often shows that the interviewer may not fully understand what they’re asking and any answer that doesn’t precisely match up with what’s written on their script will be classified as incorrect.

The Bottom Line

Some companies have changed to better methods; others, well, fall well short of the mark. This is where I urge you, fellow software engineers, to not engage with companies that follow outdated hiring practices and insist on programming tests and exercises. Especially prolonged ones! I’ve heard stories of companies asking for projects to be completed on the candidate’s own time, often taking several days. Others have generic ‘aptitude tests’ for specific languages, multiple choice, where a hint of brain fog within the limited allotted time equals game over! If you’re new in the game then perhaps you’re not in a position to turn down interviews and I fully understand this, but do see it as a learning experience. Go through the motions, get the experience, learn as much as you can, and if the job does disappoint then just move on. As you move on your confidence will grow along with your knowledge and experience. After all, the company benefits from you, so you must equally benefit from the company. If you’re an older experienced sort, as I am, then hiring companies — just talk to me.
I’ve been around, I’ve seen things and done things, the qualifications are all on the wall and on the C.V., and I resent being channelled down some generic hiring pipeline and repeatedly tested on my ability. If you think you’re a decent employer and you can’t understand why seemingly excellent candidates keep pulling out, then you should take a real long look at your hiring practices.
https://medium.com/swlh/why-you-should-never-consent-to-a-coding-test-in-an-interview-8e22f5078c7f
['Dr Stuart Woolley']
2020-12-29 02:10:38.065000+00:00
['Programming', 'Work', 'Jobs', 'Interview', 'Software Engineering']
Lily James, Dominic West, and Netflix’s Curious ‘Rebecca’ Remake
The funny thing is that the controversy of Rebecca is far from over. We’re still waiting to see how the whole Lily James and Dominic West fiasco pans out, but for now, it looks like they may be fined by the Italian government just for riding that scooter together. Current coronavirus laws in the country restrict people from sharing scooters. It’s currently under investigation according to local councilor Stefano Marin. Yet that’s not all. Did you know that the writer of Rebecca has long been accused of plagiarism? Author Daphne du Maurier died in 1989, but both of her stories which Alfred Hitchcock used — Rebecca and The Birds — have been singled out as blatant copies of other writers' work. With Rebecca, there have been three charges of plagiarism. First, critics and readers thought the plot was a poor rip-off of Jane Eyre. Later, a woman named Edwina MacDonald sued du Maurier, claiming Rebecca was a copy of her story, Blind Windows. The New York judge dismissed the case because nobody could prove that du Maurier ever read MacDonald’s work. But there’s another writer, who first accused du Maurier of plagiarism — Carolina Nabuco. Nabuco was a Brazilian writer who wrote a book called A Sucessora, published 4 years before Rebecca. According to The New York Times Book Review, “So numerous are the parallels, that one may find them on almost each page.” It’s also been said that “the key differences between A Sucessora and Rebecca is that the former was written in Portuguese and is set in Rio de Janeiro in the 1920s; the latter was written in English and is set in England’s West Country in the 1930s.” Nabuco never sued du Maurier, but in her memoirs wrote that when the Hitchcock adaptation of Rebecca came to Brazil, United Artists asked Nabuco to sign a document stating that similarities between her book and the movie were a coincidence, which she refused. 
Author Frank Baker also claimed du Maurier plagiarized his work and considered bringing a suit against Universal Studios when Hitchcock’s The Birds came out based on du Maurier’s short story of the same name. Baker wrote a novel called The Birds which was published 16 years before du Maurier’s short story. However, Baker’s novel, which was published by du Maurier’s cousin Peter Davies, was unsuccessful, and only sold about 350 copies in 26 years. Du Maurier maintained that she never plagiarized anyone and that any similarities were purely coincidental. And to a certain extent, it seems true — there are universal themes in du Maurier’s stories which seem to keep readers coming back for more. But at the same time, the sheer popularity of Rebecca seems to have stunted its growth. The more I look at du Maurier’s life and her infamous story about love, pride, obsession, and jealousy, I can’t help but wonder if the real story isn’t the one we have to read between the lines. As a child, du Maurier seemed to prefer the male alter ego she called Eric Avon. When she was about 10, she cut her hair short and wore boys’ clothing. At age 18, she fell in love with a 30-year-old headmistress. Although she married a man in 1932, she became obsessed with her husband’s ex-fiancée, who some believe inspired the character of Rebecca. Du Maurier followed her husband’s ex from afar through newspaper clippings until the woman’s shocking suicide in 1944. Some people believe that this woman knew she was the inspiration for Rebecca, partly because she invited du Maurier’s sister to her wedding. But considering how hated the character of Rebecca was in the novel, it’s tempting to speculate that this ex-lover might have been negatively impacted by society’s view of Rebecca/herself. It’s worth noting, however, that the reader never actually gets to meet Rebecca. While the book’s narrator isn't even called by name until she becomes Mrs.
de Winter, Rebecca’s name is constantly uttered by faulty witnesses. Yet nobody remotely trustworthy ever tells us anything about Rebecca and who she really was. That makes it easy to label her a monster or master manipulator, but that doesn’t mean it’s true. The narrator herself is an imperfect judge of character — like so many Gothic heroines she’s got her head in the clouds and jumps to absurd conclusions. For decades, readers have been thinking that they’re supposed to hate Rebecca. The love between Maximilian and the narrator is put upon a strange pedestal, as if some murders of supposedly bad wives are justified and can be instantly forgiven. Somehow, I think there’s more to be gleaned from the text. There’s even room for an interpretation where Rebecca was a woman of agency in a time and place where women had very little power. In fact, that’s something Lily James hinted at herself when speaking about the story and the infamous character of Rebecca. She’s not wrong. So, on the one side, I’m pretty damn disappointed in Netflix’s Rebecca, but perhaps it’s not all for naught. Lily James, Dominic West, and Daphne du Maurier have all given us something to talk about. Maybe, just maybe, those conversations are worth one lackluster film.
https://medium.com/honestly-yours/lily-james-dominic-west-and-netflixs-curious-rebecca-remake-a20f2011ffc5
['Shannon Ashley']
2020-10-25 03:37:29.069000+00:00
['Women', 'Culture', 'Film', 'Relationships', 'Books']
Today Can Be Whatever You Decide It Should Be
Every single morning you have a choice. Face your day with trepidation, or excitement. Be enthusiastic, or lethargic. Look forward to what lies ahead, or dread what’s coming. This is a choice. You might not always feel as if that’s the case, but it’s the truth. You get to choose how to begin your day, and where to take it from that point. If you decide to approach your day from a negative perspective, chances are you will draw in a negative day. Conversely, if you begin your day from a positive approach, odds are you will draw in a positive day. Yes, this is a generalization. A great many of the things you encounter on any given day are completely neutral. Neither good nor bad, they just are. However, if you are feeling down that will impact your overall experience. This is why mindfulness matters as much as it does. Awareness of your mindset is telling When you are aware of what you are thinking and what and how you are feeling, you gain the power to influence and control them. Often, the basis of both thought and feeling is rooted in your subconscious. What that means is that everything you have encountered previously, as well as anticipation or anxiety about upcoming matters, will create your mood. Mindfulness is a product of the now. Being aware of your thoughts and feelings, in the now, helps you to really know them, and work with them. A lot of the time, because you have routines and schedules to keep and errands to run, the day goes by with your thoughts and feelings driven subconsciously. When you do that, you actually lose mindfulness of what you are thinking and what and how you are feeling. Without that present mindfulness, the past and future overwhelm the subconscious. Why does being present in the now matter? Last night, when I went to bed, a bunch of different things were running through my head. Concerns about my finances, my weight, and trying not to disappoint the people who care about me. 
When I woke up this morning I felt as if someone was sitting on my chest. Not a physical discomfort, it was a sinking feeling, an intangible dis-ease, and I just was not sure what to do with it. I was thinking about disappointment, both from myself and those I care about. My feelings were down, low, a mix of unworthiness, concern, and other negative emotions. I found myself lamenting mistakes of the past and anxious about the future. I have to make a choice here. Continue to allow my thoughts and feelings to run subconsciously outside of my influence and control…or stop, analyze what’s going on inside my head, and be present in the now. Once I am present, I can explore what has caused me to think and feel this way. By bringing this to my consciousness, rather than leaving it in my subconscious, I am now aware, in the here-and-now, of what I am thinking and what and how I am feeling. The why is not as important as the what and the how of the subconscious mindset, at least not immediately. Getting to the why in time, though, helps you understand the point of origin for such a mindset, and work to change that in the future. The point of this is that I am no more or less worthy and deserving of choosing to have a good day than you are. I have a choice, and so do you. This can be a bit of a challenge, and it can be super uncomfortable — but it is utterly worth it. Being present can make or break your day When you allow your subconscious behind the wheel, it can and will take you to places you’d rather not visit. Shady neighborhoods, vast and empty deserts, and any other metaphor you can think of for an experience you’d like to avoid or not have at all. Because your subconscious is almost entirely based on a combination of past thoughts and future concerns, it has the potential to drive your new day into negative places. Why? 
Because if you are anything like me, when you are not present in the here-and-now, conscious of what you are thinking and what and how you are feeling, you tend to revisit past experiences and “what if?” future happenings. Does this seem familiar? You look back at things you did in the past — maybe only yesterday — and feel disappointment, annoyance, embarrassment, or some other negative emotion related to a thing that happened, and worry about it. Then, with that worry, you feel anxious about making the same mistake again, or, going forward, making some new error or having another experience that will go badly. When your subconscious takes you there, THIS day, as in today, is affected. Maybe you are fortunate enough to largely experience positive thoughts of the past and excitement for the future. That’s great if you can. I am looking to do that myself. The best way to do that is through mindfulness in the here and now. As Lao Tzu said:

“If you are depressed you are living in the past. If you are anxious you are living in the future. If you are at peace you are living in the present.”

This is why being mindful of the here and now is so important to your day.
https://mjblehart.medium.com/today-can-be-whatever-you-decide-it-should-be-4bccca85af8d
['Mj Blehart']
2019-07-02 14:11:48.895000+00:00
['Self Improvement', 'Mental Health', 'Mindfulness', 'Personal Development', 'Life']
Here’s How to Build Your Own DIY MBA in Digital Marketing
Author: Sujan Patel / Source: Entrepreneur

Life moves fast in the world of digital marketing. In fact, since 2013, digital media consumption in the United States has increased by 49 percent, according to comScore. So, considering a career in this field might be wise. But, when you’re trying to build a career in digital marketing, you may find it difficult to keep up with the industry’s ever-increasing rate of change. So, is it even worth it to invest in a formal education? The answer is that, while most employers like to see at least a four-year degree, what you learn about digital marketing at a university won’t be the same as what you learn in an actual digital marketing job. Instead, to become successful in your career, you need to be prepared to learn on the job and gain skills in the field.

Why schools don’t teach digital marketing

Because digital marketing changes so fast, schools struggle to keep up. Richard Geasey, an internet marketing consultant and lecturer at the University of Washington, wrote in Inc. that, “Most schools are staffed by instructors who know nothing of internet marketing. The field is so fast and quickly changing they have no chance to learn anything useful and present it to students.” Most of those instructors, moreover, often have very little practical experience in digital marketing. They may have studied marketing for years, but if they don’t have real-world experience to share, they won’t be able to properly teach the subject. So, instead, what instructors teach is the basics of traditional marketing, which does provide a strong marketing foundation; but it doesn’t prepare students for the practicalities of working in the field itself. There are no classes on social media management and none on marketing automation, email marketing or the myriad other topics you’re bound to come across in your career. These things are learned from working in the field.
Writing for Marketing Land, Travis Wright, host of MarTech Talks, wrote, “I’ve spoken at several business schools, including the University of Chicago’s Booth School of Business and the University of Utah’s David Eccles School of Business. Each time after I’m done presenting, students approach me feeling scared — due to the overwhelming lack of knowledge and job readiness they have. I let them know what they didn’t know that they need to know.” The need to self-educate …
https://medium.com/oneqube/heres-how-to-build-your-own-diy-mba-in-digital-marketing-4b9055b00c3b
[]
2018-03-01 17:05:01.034000+00:00
['Digital Marketing', 'Marketing', 'Media Consumption']
A reading list in the wake of the killing of George Floyd
Try one of these: Waking Up White (Debby Irving): A little bit “racism for beginners,” this book takes on a narrative form and walks through systemic racism and how it benefits the white author in a clear, accessible manner. Between the World and Me (Ta-Nehisi Coates): Coates attempts to answer the questions of racism in a letter to his son. He takes the biggest concerns of racist American history and frames them through personal stories of his racial awakening. White Fragility (Robin Diangelo): Robin Diangelo explains that racial segregation is set up to protect white people from the discomfort experienced when presented with inequity and challenges to white norms. If you identify as progressive or liberal, this is for you. Diangelo spells out how white progressives are responsible for the perpetuation of inequity. The New Jim Crow (Michelle Alexander): Alexander explains how the United States criminal justice system functions as a system of racial control. By targeting black men through the War on Drugs and other movements that have decimated communities of color, millions of people are permanently relegated to second-class status by a system that formally follows principles of colorblindness. The Color of Law (Richard Rothstein): Rothstein methodically analyzes laws that have maintained and further facilitated racial segregation and inequity. Evicted (Matthew Desmond): Based on years of fieldwork, this book follows eight families in Milwaukee and illustrates how our housing system perpetuates economic exploitation that disproportionately impacts communities of color. Homeward (Bruce Western): Western depicts life upon prison release as former prisoners attempt to reenter society by describing the lives of the formerly incarcerated and demonstrating how poverty, racial inequity, and lack of social support lead to cycles of vulnerability. 
Thick: And Other Essays (Tressie McMillan Cottom): Eight essays that blend the personal with political and turn narrative stories into analyses of whiteness, black misogyny, beauty culture, and status-signaling as a means of survival for black women. Eloquent Rage (Brittney Cooper): Cooper answers the question, “So what if it’s true that Black women are angry,” spells out why Black women have the right to be, and explains just why anger is a powerful source of energy.
https://medium.com/the-open-bookshelf/a-reading-list-in-the-wake-of-the-killing-of-george-floyd-f44eb7630763
['Laurie Hahn Ganser']
2020-05-28 18:51:58.607000+00:00
['Book Recommendations', 'Social Justice', 'Racism', 'Books', 'Anti Racism']
Orbs R&D Update July 2019
Highlights This segment was contributed by @OdedWx and @talkol Hi community, this has been a busy month since the last R&D update! The biggest thing happening was the first-ever rewards distribution. As you know, the rewards are given and accounted for on an ongoing basis (on every election), but the tokens themselves are distributed over Ethereum in bulk after 3 months. We’ve had our first successful distribution, which included the first 27 election periods (1 through 27)! Some statistics: 18,018,788 ORBS were distributed to 1,448 addresses. The distribution took place via a smart contract on Ethereum, to make sure we have third-party external verification of the entire process. There’s a very nice architecture that requires only a single transaction with commitments for the entire distribution event. Following the commitment transaction, any account may send the distribution transactions relying on the commitments. This means that the process is transparent and easy to review, and also very efficient (50 distributions per transaction). See this post describing the entire process. Another area the contributors have been focusing on this month is production network fixes. The network has been in production since March, and is constantly monitored for issues and improvements. For example, the Lean Helix threading model was improved, preventing corner cases that could cause a delay in block creation. Another example is improvements to the gossip threading model. There’s a lot of excitement in the community for working with the system and making this process easy for developers. One very exciting project I want to point out is the Orbs Playground — an online IDE for smart contract developers that lets them experiment and develop smart contracts on Orbs directly from their web browser — without downloading and installing any tools! 
This very cool tool started as a hackathon project, but got such good feedback that it became a standalone project that is now actively maintained. Also look forward to seeing it embedded more and more in the various Orbs websites. Blockchain Core This segment was contributed by @IdoZilberberg Last month marked the completion of the first release since the Orbs platform was launched in March. While several patches had been released earlier, June signified the first full release with actual features. Updates include improvements to Gamma, various stability and monitoring updates, design & implementation work on the Lean Helix consensus algorithm goroutine model and more. @itamararjuan was working on a feature that intends to increase confidence against the introduction of bugs or regressions, and help measure improvement or degradation in performance, when changing the codebase. This entails introducing a new step to the CI process where, upon the creation of a new pull-request on the orbs-network-go GitHub repo, a new virtual chain (network) is provisioned and the e2e (end-to-end) test suite is run against it. @itamararjuan upgraded Nebula — Orbs’ node deployment tool — to support Terraform 0.12. BTW, @itamararjuan also managed to complete a difficult bicycle tour of northern Italy during the same time, so great job and thank you :) @ronnno and @electricmonk have been busy converting integrative tests that span Orbs and Ethereum into Javascript. Check out the results in the subrepo Psilo. @ronnno wrote the contracts of rewards distribution, along with extensive testing. The work is in PR110 and you can read more about it in this post. Also, thanks to @gilamran for adding rewards history to the Rewards page. @ronnno and @IdoZilberberg have been modifying the goroutine model of the Lean Helix consensus algorithm of the Orbs platform. 
Presently, the algorithm suffers from less-than-ideal performance when the rate of incoming transactions is low, because a single goroutine both waits for new transactions and handles external events such as Node Sync and Leader Election in Lean Helix. Following several proof-of-concept iterations, a new model was decided upon, and development is underway! This work is expected to complete during July; more details will be available in the next update (stay tuned). @electricmonk completed an update of the Gossip goroutine threading model in PR1121. Before this change, DirectTransport had a goroutine per connection, and handling a message occurred on that goroutine. This effectively blocked further communication from that peer for the duration it takes to handle the message. Some Gossip consumers, such as the LeanHelix consensus algo, may block for a long time if — for instance — they are waiting for a new block to be produced. This PR creates a goroutine per Gossip topic, writing from the connection goroutines to the topic goroutines via a buffered channel. Essentially this serializes all messages from all peers to a single goroutine, but frees the connection goroutines to handle subsequent messages, and guarantees QoS per topic. In addition, the code now creates a one-off goroutine per Block Sync request, so that scanning blocks or reading chunks from disk will not block the Block Sync topic goroutine. Another PR, #1193, upgrades the Orbs platform to compile using Golang 1.12.6 (previously 1.11.x was used). It is expected that the Lean Helix topic will not be blocked, as it will have a goroutine that deals with reading from the topic. @noambergIL and @IdoZilberberg updated code in PR49, PR1202 to support Sign() becoming an external service — to that end, a cancellable Go context was added as a parameter to this method. 
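The per-topic model described for PR1121 is Go-specific, but its shape is easy to sketch. The following is a rough illustration only, in Python rather than Go (threads standing in for goroutines, bounded queues for buffered channels, and topic names borrowed from the article); it is not Orbs code:

```python
import queue
import threading

class TopicRouter:
    """One worker thread per gossip topic: messages from all peers are
    serialized onto the topic's bounded queue, so a slow consumer on one
    topic never blocks another topic or the connection handlers."""

    def __init__(self, topics, buffer_size=16):
        self.queues = {t: queue.Queue(maxsize=buffer_size) for t in topics}
        self.processed = {t: [] for t in topics}
        self.workers = [
            threading.Thread(target=self._drain, args=(t,)) for t in topics
        ]
        for w in self.workers:
            w.start()

    def _drain(self, topic):
        # The single per-topic consumer: handles messages in arrival order.
        while True:
            msg = self.queues[topic].get()
            if msg is None:  # shutdown sentinel
                break
            self.processed[topic].append(msg)

    def deliver(self, topic, msg):
        # Called from a connection handler; only blocks if the topic's
        # buffer is full, not while the message is being handled.
        self.queues[topic].put(msg)

    def shutdown(self):
        for q in self.queues.values():
            q.put(None)
        for w in self.workers:
            w.join()

router = TopicRouter(["lean_helix", "block_sync"])
for i in range(3):
    router.deliver("lean_helix", f"lh-{i}")
    router.deliver("block_sync", f"bs-{i}")
router.shutdown()
```

The key property, as in the PR, is that per-topic ordering is preserved while a blocked topic (e.g. LeanHelix waiting on a new block) cannot starve the others.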
More PRs by @noambergIL:
PR1186 — to fix rewards & double-delegate state
PR99 — for election review, created a script to show every election breakdown
More PRs by @ronnno:
https://medium.com/orbs-network/orbs-r-d-update-july-2019-14ab4f3e9308
['Nate Simantov']
2019-07-18 15:19:25.745000+00:00
['Ethereum', 'Blockchain', 'Cryptocurrency', 'Updates', 'Development']
How to create a blank Word DOCX document in Python
Having the functionality to create DOCX files from within your app or website can open up quite a range of possibilities for your project. I’m not suggesting you set this up from the ground up, as that is actually a lot of work. What I am suggesting is that implementing this important feature can be done very easily if you know the right API to use. Let’s take a look. Step one, install the client:

pip install cloudmersive-convert-api-client

Step the second, call the function:

from __future__ import print_function
import time
import cloudmersive_convert_api_client
from cloudmersive_convert_api_client.rest import ApiException
from pprint import pprint

# Configure API key authorization: Apikey
configuration = cloudmersive_convert_api_client.Configuration()
configuration.api_key['Apikey'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['Apikey'] = 'Bearer'

# create an instance of the API class
api_instance = cloudmersive_convert_api_client.EditDocumentApi(cloudmersive_convert_api_client.ApiClient(configuration))
input = cloudmersive_convert_api_client.CreateBlankDocxRequest()  # CreateBlankDocxRequest | Document input request

try:
    # Create a blank Word DOCX document
    api_response = api_instance.edit_document_docx_create_blank_document(input)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling EditDocumentApi->edit_document_docx_create_blank_document: %s " % e)

And that, ladies and gentlemen, is how you get things done. That’s right, send in your request and moments later your DOCX will be ready to go. From there, you can use some of our other API functions to fill it in, such as edit_document_docx_insert_paragraph, edit_document_docx_insert_table, and edit_document_docx_insert_image. Another similar function is edit_document_xlsx_create_blank_spreadsheet, which will allow you to create blank Excel spreadsheets.
https://cloudmersive.medium.com/how-to-create-a-blank-word-docx-document-in-python-bb379d8296cf
[]
2020-05-05 05:15:29.202000+00:00
['New', 'Docx', 'Microsoft Word', 'Python', 'Create']
Set up an ETL Data Pipeline and Workflow Using Python & Google Cloud Platform (COVID-19 Dashboard)
SETUP YOUR CLOUD PROJECT

Possibly the most challenging part of this project is to understand how everything works together and what’s the best way to link the services and resources efficiently. Let’s start with Google Cloud authentication. If you’re a new customer, sign up for the Free Tier offer by Google and set up using that email. After that:

Create a project ( covid-jul25 ) and specify a region ( us-west3-a ) where your code will live. Interacting with the Cloud Console through the CLI will require that information, so keep it handy.
Create a bucket inside the project that will be used for deployment & take note of this bucket
Make sure to enable APIs for the following services: BigQuery, Storage and DataProc
Service accounts should be set up for these services: BigQuery, Storage and Compute Engine (Compute Engine should already be set up by default)
For LOCAL DEVELOPMENT ONLY, download the API json keys for BigQuery & Storage and store them in the same folder as your Jupyter notebook

NOTES FOR API KEYS: These are very important so please don’t post them anywhere. If you’re working in a public repository, add these json file names to .gitignore before committing.

JUPYTER NOTEBOOK SETUP

To authenticate using the downloaded json API keys and set the environment in the Jupyter notebook, use the following:

#Set credentials for bigquery LOCAL ONLY
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "{API-KEY-NAME}.json"
bigquery_client = bigquery.Client()  # Instantiates a client

#Set credentials for cloud storage
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "{API-KEY-NAME}.json"
storage_client = storage.Client()  # Instantiates a client

After that, your API keys should be authenticated and ready to go. 
A rundown of a couple of functions you could use with the APIs that you set up:

#Write a DataFrame to a BigQuery table
df.to_gbq('table_name', if_exists='param')

#Read from a BigQuery table using legacy syntax
pd.read_gbq(sql, dialect='legacy')

#Run queries on BigQuery directly from Jupyter
query_job = bigquery_client.query("""[SQL CODE]""")
results = query_job.result()

Some tables are more efficient to create in BigQuery while others are easier to transform using Python, so you can pick your poison depending on which you’re more comfortable with. With the BigQuery API, however, extract, transform and load become easier. Personally, I use Python for row operations (transpose, string replace, adding calculated columns etc.) and SQL for joining/creating/updating tables. In the daily_update script, you will notice that there are SQL statements to DROP & CREATE TABLE in the same code block as DELETE FROM/INSERT INTO statements; this is because some of the tables’ schemas need to be predetermined before they can be imported into Data Studio for visualizing. There are also temp tables that hold data that is then used to update ‘static’ tables that are linked directly to Data Studio. The idea is that you don’t want to delete tables directly linked to certain visuals on the dashboard. 
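The "Python for row operations" split mentioned above can be made concrete with a toy pandas example (made-up numbers and column names, not the project's actual data): string cleanup, a calculated column, and a transpose.

```python
import pandas as pd

# Hypothetical per-state table; the row operations mentioned above.
df = pd.DataFrame({"region": ["us-ca", "us-ny"],
                   "cases": [1000, 800],
                   "deaths": [20, 40]})

df["region"] = df["region"].str.upper()   # string cleanup -> "US-CA", "US-NY"
df["cfr"] = df["deaths"] / df["cases"]    # calculated column
wide = df.set_index("region").T           # transpose: metrics become rows

print(wide)
```

Joins and table creation, by contrast, stay in SQL on the BigQuery side, as the article describes.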
In the IMPORT & SETUP section of the daily_update notebook, toggle the deployment to ‘local’ or ‘cloud’:

deployment = 'local'  # local or cloud

if deployment == 'cloud':
    from pyspark.sql import SparkSession  # ONLY FOR CLOUD DEPLOYMENT
    # Start spark session
    spark = SparkSession \
        .builder \
        .config("spark.jars.packages", "com.google.cloud.spark:spark-bigquery-with-dependencies_2.11:0.17.0") \
        .master('yarn') \
        .appName('spark-bigquery-ryder') \
        .getOrCreate()
    # Instantiate BigQuery client
    bigquery_client = bigquery.Client()  # Instantiates a client
    # Instantiate Storage client
    storage_client = storage.Client()  # Instantiates a client
else:
    # Set credentials for bigquery - FOR LOCAL ONLY, DON'T COPY TO PYSPARK
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "covid-jul25-**************.json"
    bigquery_client = bigquery.Client()  # Instantiates a client
    # Set credentials for cloud storage
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "covid-jul25-**************.json"
    storage_client = storage.Client()  # Instantiates a client

Set another code block to set up your cloud working environment (note to change zone and name of bucket accordingly):

#Set working environment
PROJECT_ID = 'covid-jul25'
REGION = 'us-west3'
ZONE = 'us-west3-a'
BUCKET_LINK = 'gs://us-west3-{BUCKET_NAME}'
BUCKET = 'us-west3-{BUCKET_NAME}'

BIGQUERY SETUP

To run the script, datasets need to be set up in BigQuery. It’s important for the purpose of this project to set the Data location to US since we’ll be joining data from BigQuery Public Data that lives in the US location. ALL tables that don’t yet exist in the dataset need to be created. 
For example, the script below will update the rt_results table in the usprojections dataset:

query_job = bigquery_client.query(
    """
    DELETE FROM `covid-jul25.usprojections.rt_results` WHERE True;
    INSERT INTO `covid-jul25.usprojections.rt_results`
    SELECT * except(date), cast(date as date) as date
    FROM `covid-jul25.usprojections.temp_rt`;
    """)
results = query_job.result()

To create the rt_results table in the first place, run the code below, then proceed block by block before running on the cloud.

query_job = bigquery_client.query(
    """
    DROP TABLE IF EXISTS `covid-jul25.usprojections.rt_results`;
    CREATE TABLE `covid-jul25.usprojections.rt_results` AS
    SELECT * except(date), cast(date as date) as date
    FROM `covid-jul25.usprojections.temp_rt`;
    """)
results = query_job.result()

SET UP WORKFLOW TEMPLATE TO BE RUN ON DATAPROC

After successful local testing, open Cloud Shell on your Cloud Console and set the specs below for cloud deployment:

* Set working environment (replace the bucket_link & bucket with your bucket name)

export PROJECT_ID='covid-jul25'
gcloud config set project $PROJECT_ID
export REGION=us-west3
export ZONE=us-west3-a
export BUCKET_LINK=gs://us-west3-{BUCKET_NAME}
export BUCKET=us-west3-covid-{BUCKET_NAME}

* Create workflow template

export TEMPLATE_ID=daily_update_template
export cluster_name=covid-cluster
gcloud dataproc workflow-templates create \
    $TEMPLATE_ID --region $REGION

* Set the managed cluster attached to the template

gcloud dataproc workflow-templates set-managed-cluster \
    $TEMPLATE_ID \
    --region $REGION \
    --zone $ZONE \
    --cluster-name $cluster_name \
    --optional-components=ANACONDA \
    --master-machine-type n1-standard-4 \
    --master-boot-disk-size 20 \
    --worker-machine-type n1-standard-4 \
    --worker-boot-disk-size 20 \
    --num-workers 2 \
    --image-version 1.4 \
    --metadata='PIP_PACKAGES=pandas google.cloud pandas-gbq' \
    --initialization-actions gs://us-west3-{BUCKET_NAME}/pip-install.sh

Note: Optional components include Anaconda because the cluster environment doesn’t have it by default when it spins up. Moreover, initialization actions are scripts to be run when the cluster spins up; in this case, we need pip to be installed on our cluster. Next, metadata includes some extra sauce like pandas, google.cloud, and pandas-gbq. For initialization actions, copy the pip-install.sh file from the linked repo to your working bucket or use gsutil cp in Cloud Shell. Master and worker specs can be modified according to your needs.

* Add task(s) to the workflow template

export STEP_ID=daily_update
gcloud dataproc workflow-templates add-job pyspark \
    $BUCKET_LINK/daily_update.py \
    --step-id $STEP_ID \
    --workflow-template $TEMPLATE_ID \
    --region $REGION

The step ID names the step within the workflow, and the file daily_update.py is stored in the BUCKET_LINK folder specified above.

* Run and time the workflow template

time gcloud dataproc workflow-templates instantiate \
    $TEMPLATE_ID --region $REGION #--async

* Update on August 28th, 2020: There will be times when Google Cloud Platform is down. Follow all the steps outlined above but for a new region/zone (note that you would have to create a new bucket in the new region/zone!). For example, use region us-west2 instead of us-west3 & save the same working script in the new location, then run the daily template per usual from the new folder. The quickest way to move files is the gsutil cp command:

gsutil cp [old directory] [new directory]

VISUALIZE ON GOOGLE DATA STUDIO

With tables created in BigQuery, we are ready to beautify the data using the connections provided by Google Data Studio. As you can see, there are many ways to connect to different sources from Data Studio. To correctly display data on the dashboard, table schema and structures are important for data blending. As shown on the left, in order to blend data, you need a common field between different tables (in this instance, region code like ‘US-CA’ for California). 
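Blending on a common field, as described above, can be sketched in pandas with two hypothetical tables keyed on the region code (toy data, not the project's tables); note the left-outer-join behaviour:

```python
import pandas as pd

# Two hypothetical tables sharing the region-code key used for blending.
cases = pd.DataFrame({"region_code": ["US-CA", "US-NY", "US-TX"],
                      "cases": [3, 2, 5]})
rt = pd.DataFrame({"region_code": ["US-CA", "US-NY"],
                   "rt": [1.1, 0.9]})

# A Data Studio blend behaves like a left outer join on the shared key:
# US-TX survives with a missing rt value instead of being dropped.
blended = cases.merge(rt, on="region_code", how="left")
print(blended)
```

Unmatched rows carrying NaNs is exactly why aggregations over a blend (sum vs. avg) need care.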
Note that this blend will also act like a left outer join, so be careful when you aggregate data to display on the dashboard (sum vs. avg).

LESSONS LEARNED

Try your best to see what kind of data is out there, but don’t get hung up on trying to incorporate all of it.
Process optimization comes with experience, so don’t sweat it if you later find out that what used to take half an hour can now take 5 minutes.
Data visualization should be user-friendly: back-end data and tables should be revised based on user feedback, and the interface should be self-explanatory.
Large amounts of data can increase loading time (page 2 of the report), so optimization needs to be done.
Table structures and schema are important for blending data and need to be designed before being incorporated into the workflow (with a lot of deleting and recreating tables in the process).

NEXT STEPS
https://medium.com/swlh/how-to-set-up-a-covid-19-workflow-and-dashboard-using-the-google-cloud-platform-b0e5165333e5
['Ryder Nguyen']
2020-09-08 17:27:11.906000+00:00
['Python', 'Google Cloud Platform', 'Cloud Services', 'Bigquery', 'Etl']
The Pain Of A Mother’s Legacy
Photo by Xavier Mouton Photographie on Unsplash Am I to blame for my son’s mental illness? The best times of my life are when my oldest son spends the night at my house. He is a grown man living in another city in a serious relationship with his girlfriend and just graduated from college. When I first asked him to come over, I worried he might feel too old to stay overnight with his mother, but we have tons of fun cooking dinner, watching movies, and playing video games. After missing out on parts of his childhood, I treasure every second we’re together. When I grow up someday, I want to be just like my son. He made Eagle Scout back in high school and finished four years of college in three years. He’s smart and funny and kind to everybody, including total strangers. I’m infinitely proud of him. I’ve always said that if I ever get stuck in an emergency, he would be the first person I called. He once went camping with his friends armed only with a pocket knife and a pack of matches. If I had to describe him in one word, I’d say he’s strong, not just physically but emotionally. My boy has been a rock for me, always the wiser of the two of us. The last time he spent the night with me, I was standing next to him as he was unpacking his bag. He took out a bottle of pills with a prescription label on it. I didn’t want to be nosy, but I asked out of concern. “Honey, what are those?” “Oh,” he answered. “This is just medication I take for anxiety and depression.” My jaw dropped in shock, and a wave of tremendous guilt washed over me. “Oh my God,” I blurted out. “I’m so sorry.” “It’s okay, Mom. They’re making me feel a lot better.” I hugged him tight then, holding him in my arms even though he’s twice my size now. Depression and anxiety were the last things I wanted for my beloved boy, even though I knew all along there was a chance he would struggle with mental illness because of me. 
I was diagnosed with bipolar disorder 20 years ago, and my mother had severe clinical depression her whole life. She inherited it from my grandfather, who had episodes so bad he shut himself in the basement for weeks at a time. I’m not sure if it goes back farther than that, but it’s very likely.
https://medium.com/publishous/the-pain-of-a-mothers-legacy-de43c0251826
['Glenna Gill']
2019-05-28 22:01:45.520000+00:00
['Life Lessons', 'Parenting', 'Mental Health', 'Mothers', 'Life']
The 4 Types of Career Planners
Group 3 — Chameleon Group: Those Who Don’t Know What They Want Photo by Nandhu Kumar on Unsplash. As a manager and a mentor, I’ve had a few situations where I asked others what they want and received a blank stare in response. When prompted, I often received the answer, “Oh, I don’t know what I want. Can you tell me?” Earlier in my career, I’d think to myself, “How can I possibly help someone if they don’t even know what they want for themselves?” But as I’ve gotten older, I’ve also become wiser. I’ve learned that we can help people in this group if we take enough time to know and understand them and if they have the right mindset. Asking the right questions, showing them opportunities, and giving them examples can all help identify what they might want for their career. This may seem like hard work, but it can actually be quite rewarding to be a mentor or manager of people in this group. If you belong to this group, seek advice from a good manager or mentor, or even consider hiring a career coach.
https://medium.com/better-programming/the-4-types-of-career-planners-1d67464bc172
['Isabel Nyo']
2020-08-17 14:57:19.731000+00:00
['Startup', 'Career Development', 'Careers', 'Work', 'Programming']
Naive Bayes Classifier in Machine Learning
Mathematical explanation and python implementation using sklearn Photo by fotografierende from Pexels

Naive Bayes Classifier

Naive Bayes classifiers are probabilistic models that are used for the classification task. They are based on Bayes’ theorem with an assumption of independence among predictors. In the real world, the independence assumption may or may not be true, but still, Naive Bayes performs well.

Topics covered in this story Image by Author

Why is it named Naive Bayes?
Naive → It is called naive because it assumes that all features in the dataset are mutually independent.
Bayes → It is based on Bayes’ theorem.

Bayes Theorem

First, let’s learn about probability.

Probability

A probability is a number that reflects the chance or likelihood that a particular event will occur.
Event → In probability, an event is an outcome of a random experiment.

P(A) = n(A)/n(S)
P(A) → Probability of an event A
n(A) → Number of favorable outcomes
n(S) → Total number of possible outcomes

Example
P(A) → Probability of drawing a king
P(B) → Probability of drawing a red card
P(A) = 4/52
P(B) = 26/52
Image by Author

Types of probability
1. Joint probability
2. Conditional probability

1. Joint Probability

A joint probability is the probability of two events occurring simultaneously.
P(A∩B) → Probability of drawing a king that is red.
P(A∩B) = P(A)*P(B) = (4/52)*(26/52) = (1/13)*(1/2) = 1/26
(Multiplying the two probabilities is valid here because drawing a king and drawing a red card are independent events.)
Image by Author

2. Conditional Probability

Conditional probability is the probability of one event occurring in the presence of a second event.
Probability of drawing a king given red → P(A|B)
Image by Author
Probability of drawing a red card given king → P(B|A)
P(B|A) = P(A∩B)/P(A)
Image by Author

Derivation of Bayes Theorem
Image By Author

Naive Bayes Classifier Example

Bayes’ theorem is an extension of conditional probability. Using Bayes’ theorem, we use one conditional probability to calculate another one. 
To calculate P(A|B), we have to calculate P(B|A) first.

Example: if you want to predict whether a person has diabetes, given their conditions, you want P(A|B).
Diabetes → Class → A
Conditions → Independent attributes → B

To calculate this using Naive Bayes:
First, calculate P(B|A) → from the dataset, find out how many of the diabetic patients (A) have these conditions (B). This is called the likelihood, P(B|A).
Then multiply by P(A) → Prior probability → Probability of anybody in the population having diabetes.
Then divide by P(B) → Evidence. This is the current event that occurred. Given this event has occurred, we are calculating the probability of another event that will also occur.

This concept is known as the Naive Bayes algorithm.
P(B|A) → Likelihood
P(A) → Prior Probability
P(A|B) → Posterior Probability
P(B) → Evidence
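The card numbers worked through above can be checked with exact fractions in plain Python (no ML library needed):

```python
from fractions import Fraction

# Exact probabilities for the deck-of-cards example above.
p_king = Fraction(4, 52)            # P(A): drawing a king
p_red = Fraction(26, 52)            # P(B): drawing a red card
p_red_given_king = Fraction(2, 4)   # P(B|A): 2 of the 4 kings are red

# Joint probability: P(A ∩ B) = P(B|A) * P(A)
p_king_and_red = p_red_given_king * p_king
print(p_king_and_red)               # 1/26, matching the article

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_king_given_red = p_red_given_king * p_king / p_red
print(p_king_given_red)             # 1/13: a red card is a king 1 time in 13
```

The same arithmetic, with class priors and per-feature likelihoods multiplied under the independence assumption, is what a Naive Bayes classifier does over a whole feature vector.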
https://medium.com/towards-artificial-intelligence/naive-bayes-classifier-in-machine-learning-b0201684607c
['Indhumathy Chelliah']
2020-12-17 04:46:28.015000+00:00
['Programming', 'Data Science', 'Python3', 'Artificial Intelligence', 'Machine Learning']
Beyond Good and Evil: Animism and the films of Hayao Miyazaki
Animism is the belief that gods and spirits — and the anima (“breath”) that they exude — inhabit things both living and inert. Eurocentric anthropologists originally used the term negatively, believing animism to be a stage in the evolution of religion from primitive belief to more “advanced” monotheism. This view should be rejected. I am definitely an animist. I don’t mean that as some kind of poetic characterization of my love for nature. No, I definitely believe nature — rock, animal, and tree — is animated by more than just the material sum of its parts. My lad and Mr Oak. Worcestershire, England. Source: Ronan McLaverty-Head. Take these hills and trees near my home. Should I get a good death, they may be the last thing I see as I lie dying at home as an old man. The hill is North Hill in Malvern, Worcestershire, and it looms both over the house I was born into and the house in which I now live. I believe that this hill is more than its granite and grass. It is “North Hill,” the animation of that granite and grass. It — and the things that live in it and around it — has, to borrow a Japanese idea, kami-nature (more on kami below). The same goes for this old oak tree. I touch him every time I see him. I greet him as an old friend. My kids swing on him. He was here before me and will be here after me. He is beautiful and he is definitely “Mr Oak” and not just “oak tree.” Is this Age of Aquarius madness? I don’t think so. Consider Japan. Japanese animism is not some modern neo-pagan, pantheistic revival but a deeply embedded consciousness of the yaoyorozu no kami, the “8,000,000 gods.” 70% of the Japanese may not be part of institutional religion, but there remains a deep spirituality in Japanese culture nonetheless.
https://medium.com/spiritual-tree/beyond-good-and-evil-animism-and-the-films-of-hayao-miyazaki-d85234f86983
['Ronan Mclaverty-Head']
2020-05-28 12:21:00.870000+00:00
['Environment', 'Japan', 'Animism', 'Spirituality', 'Anime']
How to Build an Industry-Leading Cryptocurrency Security Company
Before founding what is now one of the most successful companies in the cryptocurrency industry — Ledger — Eric Larchevêque and his co-founder opened La Maison du Bitcoin, a Bitcoin education center and co-working space in the second district of Paris. It was through chance encounters in this space that Ledger was born, in the basement of 35 rue du Caire, where the company also operated a Bitcoin exchange desk and a handful of crypto miners. Eric joined us on Epicenter to discuss the rise of Ledger from a ten-person team to now nearly 150 people across three continents. How did Eric Larchevêque’s fascination with Bitcoin and digital currencies lead him down an unlikely path — starting a hardware company? The French entrepreneur understood that the security of Bitcoin was based on the security of the private keys — and the only way to keep them safe was through secure hardware. And while there wasn’t any good alternative to a hardware wallet, at that time no one really recognized the need for and importance of such a solution — one that’s hard to scale and not even on your potential customer’s mind. That’s why Eric Larchevêque’s key entrepreneurial takeaway from the early days of Ledger was the importance of resilience, which was key to capturing the attention of initial investors and customers. The perception around hardware wallets changed drastically about a year later, in mid-2017, when interest in the Nano S, Ledger’s first product, skyrocketed with the onset of the worldwide cryptocurrency boom. That year the company sold a million hardware wallets, significantly scaled the team and operations, and decided to raise another funding round to meet the unstoppable demand for their products. 
In January 2018 Ledger raised a $75 million Series B round with the goal of scaling their R&D, ramping up internationalisation efforts and expanding into the enterprise side of the business by building Vault, a security solution for financial institutions wanting to secure their share of the pie in the cryptoeconomy. “We need to move forward very quickly, our ambition is to build a very large technological company who will provide solutions for cryptocurrency and blockchain applications”, explains Eric. What are the biggest challenges related to building cryptocurrency products? First of all, there’s a need to deal with strong communities with strong opinions — while maintaining neutrality and not taking sides. And while Ledger is a true believer in decentralisation, Bitcoin, and the vision of Satoshi Nakamoto, catering to all those competing crypto communities can take a lot of energy. Secondly, Ledger puts a lot of effort into educating its customers on how to secure their digital assets and why it is important. “You need to explain to people that they can lose everything they have, and that’s not easy”, explains Eric. The new version of the wallet software that Ledger is about to release should help newbies enter the cryptocurrency space securely. This year the company has been focused on expanding their business to serve financial institutions — investment banks, asset management companies, LPs or hedge funds — who really do want to move forward into crypto, but lack secure infrastructure. “The biggest problem these institutions have is how to hold cryptoassets and keep them secure”, explains Eric. Ledger Vault, a fully managed SaaS solution addressing the need to safeguard very large amounts of multiple cryptocurrencies while mitigating both technological and physical risks, is the company’s answer to that. 
Ledger is also continually working on improving the security of its products in a dedicated lab, where security experts continuously try to hack its devices, as well as through hardware bounty programs. “Security is always a game of cat and mouse. There’s no such thing as ultimate, bulletproof security”, explains Eric. “But our objective is always to stay ahead of the game. And it’s always important to stay humble”. Watch the full episode on Epicenter, and don’t forget to subscribe to the show on iTunes, YouTube & SoundCloud. Drop by our Gitter community channel to discuss the show and leave some feedback.
https://medium.com/epicenterpodcast/how-to-build-an-industry-leading-cryptocurrency-security-company-b9f1d5420db2
['Ola Kohut']
2018-07-04 13:02:10.384000+00:00
['Bitcoin', 'Crypto', 'Cryptocurrency', 'Entrepreneurship', 'Blockchain Technology']
Dark Towers Review — Timely Insights on Who Owns Donald J. Trump
Dark Towers Review — Timely Insights on Who Owns Donald J. Trump Whether he pays up is a separate question. Dark Towers by David Enrich What did an ordinary German lender have to do to realize its American investment banking dreams? Monetize the scraps. In order to compete with the biggest and most profitable firms on Wall Street starting in the 1980s, Deutsche Bank had to excel in the business that established firms didn’t want. One of those scraps was to be the future President of the United States of America, Donald J. Trump. After Trump burned most of the big banks following his wave of loan defaults from his Atlantic City casinos, nobody on Wall Street would touch him. Except Deutsche. Dark Towers: Deutsche Bank, Donald Trump, and an Epic Trail of Destruction tells the story of this tenuous financial marriage. The author David Enrich narrates the tale through some of the key characters who were instrumental in the process. While he paints Deutsche as a dysfunctional criminal enterprise (which it is), he neglects to assess the industry holistically. There are a number of problem children, as illustrated in another book I recently reviewed, Billion Dollar Whale (which describes Goldman Sachs’ disastrous dealings with 1MDB, the Malaysian sovereign wealth fund). Deutsche may be the leader of the pack, but the point is — there is a pack. Apart from the narrow assessment of the industry and the at times hyperbolic targeting of Deutsche, too much attention is paid to Val Broeksmit, the troubled son of a former senior Deutsche employee who died by suicide. Val may have been one of Enrich’s main sources when writing the book, but the focus placed on his role was unnecessary; in reality he was more amateur private investigator than protagonist. 
What the book does get right is its highlighting of the enablers who made the Trump real estate empire possible, even when he was notorious for defaulting on loans and not paying his business partners or contractors. The book gives a vivid view into Trump’s financial situation and makes clear who owns him — Deutsche Bank. Trump has personally guaranteed $340 million to his biggest creditor, with the debt coming due in 2023 and 2024. Overall, the book provides timely insights into a financial firm with American investment banking ambitions that was permitted by a culture of noncompliance to do business with a politically radioactive client infamous for not paying his bills or keeping his promises.
https://medium.com/curious/dark-towers-review-timely-insights-on-who-owns-donald-j-trump-a1c50a70e087
['Sebastian Stone']
2020-11-30 17:22:37.095000+00:00
['Politics', 'Books', 'Finance', 'Reading', 'Donald Trump']
Engineering for Equity
Engineering for Equity Software Engineering at Google Editor’s Note: In order to be an exceptional engineer, you need to build products that drive positive outcomes for the broadest base of people. In this piece, Demma Rodriguez, Head of Equity Engineering at Google, discusses the unique responsibilities involved in designing products for a broad base of users. We’d love to hear what you think of this piece. In this piece, we’ll discuss the unique responsibilities of an engineer when designing products for a broad base of users. Further, we evaluate how an organization, by embracing diversity, can design systems that work for everyone, and avoid perpetuating harm against our users. As new as the field of software engineering is, we’re newer still at understanding the impact it has on underrepresented people and diverse societies. We did not write this piece because we know all the answers. We do not. In fact, understanding how to engineer products that empower and respect all our users is still something Google is learning to do. We have had many public failures in protecting our most vulnerable users, and so we are writing this piece because the path forward to more equitable products begins with evaluating our own failures and encouraging growth. We are also writing this piece because of the increasing imbalance of power between those who make development decisions that impact the world and those who simply must accept and live with those decisions that sometimes disadvantage already marginalized communities globally. It is important to share and reflect on what we’ve learned so far with the next generation of software engineers. It is even more important that we help influence the next generation of engineers to be better than we are today. Just reading this piece means that you likely aspire to be an exceptional engineer. You want to solve problems. 
You aspire to build products that drive positive outcomes for the broadest base of people, including people who are the most difficult to reach. To do this, you will need to consider how the tools you build will be leveraged to change the trajectory of humanity, hopefully for the better. Bias Is the Default When engineers do not focus on users of different nationalities, ethnicities, races, genders, ages, socioeconomic statuses, abilities, and belief systems, even the most talented staff will inadvertently fail their users. Such failures are often unintentional; all people have certain biases, and social scientists have recognized over the past several decades that most people exhibit unconscious bias, enforcing and promulgating existing stereotypes. Unconscious bias is insidious and often more difficult to mitigate than intentional acts of exclusion. Even when we want to do the right thing, we might not recognize our own biases. By the same token, our organizations must also recognize that such bias exists and work to address it in their workforces, product development, and user outreach. Because of bias, Google has at times failed to represent users equitably within their products, with launches over the past several years that did not focus enough on underrepresented groups. Many users attribute our lack of awareness in these cases to the fact that our engineering population is mostly male, mostly White or Asian, and certainly not representative of all the communities that use our products. The lack of representation of such users in our workforce¹ means that we often do not have the requisite diversity to understand how the use of our products can affect underrepresented or vulnerable users. 
Case Study: Google Misses the Mark on Racial Inclusion In 2015, software engineer Jacky Alciné pointed out² that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Google was slow to respond to these mistakes and incomplete in addressing them. What caused such a monumental failure? Several things: Image recognition algorithms depend on being supplied a “proper” (often meaning “complete”) dataset. The photo data fed into Google’s image recognition algorithm was clearly incomplete. In short, the data did not represent the population. Google itself (and the tech industry in general) did not (and does not) have much black representation,³ and that affects subjective decisions in the design of such algorithms and the collection of such datasets. The unconscious bias of the organization itself likely led to a more representative product being left on the table. Google’s target market for image recognition did not adequately include such underrepresented groups. Google’s tests did not catch these mistakes; as a result, our users did, which both embarrassed Google and harmed our users. As late as 2018, Google still had not adequately addressed the underlying problem.⁴ In this example, our product was inadequately designed and executed, failing to properly consider all racial groups, and as a result, failed our users and caused Google bad press. Other technology suffers from similar failures: autocomplete can return offensive or racist results. Google’s Ad system could be manipulated to show racist or offensive ads. YouTube might not catch hate speech, though it is technically outlawed on that platform. In all of these cases, the technology itself is not really to blame. Autocomplete, for example, was not designed to target users or to discriminate. But it was also not resilient enough in its design to exclude discriminatory language that is considered hate speech. 
As a result, the algorithm returned results that caused harm to our users. The harm to Google itself should also be obvious: reduced user trust and engagement with the company. For example, Black, Latinx, and Jewish applicants could lose faith in Google as a platform or even as an inclusive environment itself, therefore undermining Google’s goal of improving representation in hiring. How could this happen? After all, Google hires technologists with impeccable education and/or professional experience — exceptional programmers who write the best code and test their work. “Build for everyone” is a Google brand statement, but the truth is that we still have a long way to go before we can claim that we do. One way to address these problems is to help the software engineering organization itself look like the populations for whom we build products. Understanding the Need for Diversity At Google, we believe that being an exceptional engineer requires that you also focus on bringing diverse perspectives into product design and implementation. It also means that Googlers responsible for hiring or interviewing other engineers must contribute to building a more representative workforce. For example, if you interview other engineers for positions at your company, it is important to learn how biased outcomes happen in hiring. There are significant prerequisites for understanding how to anticipate harm and prevent it. To get to the point where we can build for everyone, we first must understand our representative populations. We need to encourage engineers to have a wider scope of educational training. The first order of business is to disrupt the notion that as a person with a computer science degree and/or work experience, you have all the skills you need to become an exceptional engineer. A computer science degree is often a necessary foundation. However, the degree alone (even when coupled with work experience) will not make you an engineer. 
It is also important to disrupt the idea that only people with computer science degrees can design and build products. Today, most programmers do have a computer science degree; they are successful at building code, establishing theories of change, and applying methodologies for problem solving. However, as the aforementioned examples demonstrate, this approach is insufficient for inclusive and equitable engineering. Engineers should begin by focusing all work within the framing of the complete ecosystem they seek to influence. At minimum, they need to understand the population demographics of their users. Engineers should focus on people who are different than themselves, especially people who might attempt to use their products to cause harm. The most difficult users to consider are those who are disenfranchised by the processes and the environment in which they access technology. To address this challenge, engineering teams need to be representative of their existing and future users. In the absence of diverse representation on engineering teams, individual engineers need to learn how to build for all users. Building Multicultural Capacity One mark of an exceptional engineer is the ability to understand how products can advantage and disadvantage different groups of human beings. Engineers are expected to have technical aptitude, but they should also have the discernment to know when to build something and when not to. Discernment includes building the capacity to identify and reject features or products that drive adverse outcomes. This is a lofty and difficult goal, because there is an enormous amount of individualism that goes into being a high-performing engineer. Yet to succeed, we must extend our focus beyond our own communities to the next billion users or to current users who might be disenfranchised or left behind by our products. 
Over time, you might build tools that billions of people use daily — tools that influence how people think about the value of human lives, tools that monitor human activity, and tools that capture and persist sensitive data, such as images of their children and loved ones, as well as other types of sensitive data. As an engineer, you might wield more power than you realize: the power to literally change society. It’s critical that on your journey to becoming an exceptional engineer, you understand the innate responsibility needed to exercise power without causing harm. The first step is to recognize the default state of your bias caused by many societal and educational factors. After you recognize this, you’ll be able to consider the often-forgotten use cases or users who can benefit or be harmed by the products you build. The industry continues to move forward, building new use cases for artificial intelligence (AI) and machine learning at an ever-increasing speed. To stay competitive, we drive toward scale and efficacy in building a high-talent engineering and technology workforce. Yet we need to pause and consider the fact that today, some people have the ability to design the future of technology and others do not. We need to understand whether the software systems we build will eliminate the potential for entire populations to experience shared prosperity and provide equal access to technology. Historically, companies faced with a decision between completing a strategic objective that drives market dominance and revenue and one that potentially slows momentum toward that goal have opted for speed and shareholder value. This tendency is exacerbated by the fact that many companies value individual performance and excellence, yet often fail to effectively drive accountability on product equity across all areas. Focusing on underrepresented users is a clear opportunity to promote equity. 
To continue to be competitive in the technology sector, we need to learn to engineer for global equity. Today, we worry when companies design technology to scan, capture, and identify people walking down the street. We worry about privacy and how governments might use this information now and in the future. Yet most technologists do not have the requisite perspective of underrepresented groups to understand the impact of racial variance in facial recognition or to understand how applying AI can drive harmful and inaccurate results. Currently, AI-driven facial-recognition software continues to disadvantage people of color or ethnic minorities. Our research is not comprehensive enough and does not include a wide enough range of different skin tones. We cannot expect the output to be valid if both the training data and those creating the software represent only a small subsection of people. In those cases, we should be willing to delay development in favor of trying to get more complete and accurate data, and a more comprehensive and inclusive product. Data science itself is challenging for humans to evaluate, however. Even when we do have representation, a training set can still be biased and produce invalid results. A study completed in 2016 found that more than 117 million American adults are in a law enforcement facial recognition database.⁵ Due to the disproportionate policing of Black communities and disparate outcomes in arrests, there could be racially biased error rates in utilizing such a database in facial recognition. Although the software is being developed and deployed at ever-increasing rates, the independent testing is not. To correct for this egregious misstep, we need to have the integrity to slow down and ensure that our inputs contain as little bias as possible. Google now offers statistical training within the context of AI to help ensure that datasets are not intrinsically biased. 
Therefore, shifting the focus of your industry experience to include more comprehensive, multicultural, race and gender studies education is not only your responsibility, but also the responsibility of your employer. Technology companies must ensure that their employees are continually receiving professional development and that this development is comprehensive and multidisciplinary. The requirement is not that one individual take it upon themselves to learn about other cultures or other demographics alone. Change requires that each of us, individually or as leaders of teams, invest in continuous professional development that builds not just our software development and leadership skills, but also our capacity to understand the diverse experiences throughout humanity. Making Diversity Actionable Systemic equity and fairness are attainable if we are willing to accept that we are all accountable for the systemic discrimination we see in the technology sector. We are accountable for the failures in the system. Deferring or abstracting away personal accountability is ineffective, and depending on your role, it could be irresponsible. It is also irresponsible to fully attribute dynamics at your specific company or within your team to the larger societal issues that contribute to inequity. A favorite line among diversity proponents and detractors alike goes something like this: “We are working hard to fix (insert systemic discrimination topic), but accountability is hard. How do we combat (insert hundreds of years) of historical discrimination?” This line of inquiry is a detour to a more philosophical or academic conversation and away from focused efforts to improve work conditions or outcomes. Part of building multicultural capacity requires a more comprehensive understanding of how systems of inequality in society impact the workplace, especially in the technology sector. 
If you are an engineering manager working on hiring more people from underrepresented groups, deferring to the historical impact of discrimination in the world is a useful academic exercise. However, it is critical to move beyond the academic conversation to a focus on quantifiable and actionable steps that you can take to drive equity and fairness. For example, as a hiring software engineer manager, you’re accountable for ensuring that your candidate slates are balanced. Are there women or other underrepresented groups in the pool of candidates’ reviews? After you hire someone, what opportunities for growth have you provided, and is the distribution of opportunities equitable? Every technology lead or software engineering manager has the means to augment equity on their teams. It is important that we acknowledge that, although there are significant systemic challenges, we are all part of the system. It is our problem to fix. Reject Singular Approaches We cannot perpetuate solutions that present a single philosophy or methodology for fixing inequity in the technology sector. Our problems are complex and multifactorial. Therefore, we must disrupt singular approaches to advancing representation in the workplace, even if they are promoted by people we admire or who have institutional power. One singular narrative held dear in the technology industry is that lack of representation in the workforce can be addressed solely by fixing the hiring pipelines. Yes, that is a fundamental step, but that is not the immediate issue we need to fix. We need to recognize systemic inequity in progression and retention while simultaneously focusing on more representative hiring and educational disparities across lines of race, gender, and socioeconomic and immigration status, for example. In the technology industry, many people from underrepresented groups are passed over daily for opportunities and advancement. 
Attrition among Black+ Google employees outpaces attrition from all other groups and confounds progress on representation goals. If we want to drive change and increase representation, we need to evaluate whether we’re creating an ecosystem in which all aspiring engineers and other technology professionals can thrive. Fully understanding an entire problem space is critical to determining how to fix it. This holds true for everything from a critical data migration to the hiring of a representative workforce. For example, if you are an engineering manager who wants to hire more women, don’t just focus on building a pipeline. Focus on other aspects of the hiring, retention, and progression ecosystem and how inclusive it might or might not be to women. Consider whether your recruiters are demonstrating the ability to identify strong candidates who are women as well as men. If you manage a diverse engineering team, focus on psychological safety and invest in increasing multicultural capacity on the team so that new team members feel welcome. A common methodology today is to build for the majority use case first, leaving improvements and features that address edge cases for later. But this approach is flawed; it gives users who are already advantaged in access to technology a head start, which increases inequity. Relegating the consideration of all user groups to the point when design has been nearly completed is to lower the bar of what it means to be an excellent engineer. Instead, by building in inclusive design from the start and raising development standards to make tools delightful and accessible for people who struggle to access technology, we enhance the experience for all users. Designing for the user who is least like you is not just wise, it’s a best practice. There are pragmatic and immediate next steps that all technologists, regardless of domain, should consider when developing products that avoid disadvantaging or underrepresenting users. 
It begins with more comprehensive user-experience research. This research should be done with user groups that are multilingual and multicultural and that span multiple countries, socioeconomic class, abilities, and age ranges. Focus on the most difficult or least represented use case first. Challenge Established Processes Challenging yourself to build more equitable systems goes beyond designing more inclusive product specifications. Building equitable systems sometimes means challenging established processes that drive invalid results. Consider a recent case evaluated for equity implications. At Google, several engineering teams worked to build a global hiring requisition system. The system supports both external hiring and internal mobility. The engineers and product managers involved did a great job of listening to the requests of what they considered to be their core user group: recruiters. The recruiters were focused on minimizing wasted time for hiring managers and applicants, and they presented the development team with use cases focused on scale and efficiency for those people. To drive efficiency, the recruiters asked the engineering team to include a feature that would highlight performance ratings — specifically lower ratings — to the hiring manager and recruiter as soon as an internal transfer expressed interest in a job. On its face, expediting the evaluation process and helping jobseekers save time is a great goal. So where is the potential equity concern? The following equity questions were raised: Are developmental assessments a predictive measure of performance? Are the performance assessments being presented to prospective managers free of individual bias? Are performance assessment scores standardized across organizations? If the answer to any of these questions is “no,” presenting performance ratings could still drive inequitable, and therefore invalid, results. 
When an exceptional engineer questioned whether past performance was in fact predictive of future performance, the reviewing team decided to conduct a thorough review. In the end, it was determined that candidates who had received a poor performance rating were likely to overcome the poor rating if they found a new team. In fact, they were just as likely to receive a satisfactory or exemplary performance rating as candidates who had never received a poor rating. In short, performance ratings are indicative only of how a person is performing in their given role at the time they are being evaluated. Ratings, although an important way to measure performance during a specific period, are not predictive of future performance and should not be used to gauge readiness for a future role or qualify an internal candidate for a different team. (They can, however, be used to evaluate whether an employee is properly or improperly slotted on their current team; therefore, they can provide an opportunity to evaluate how to better support an internal candidate moving forward.) This analysis definitely took up significant project time, but the positive trade-off was a more equitable internal mobility process. Values Versus Outcomes Google has a strong track record of investing in hiring. As the previous example illustrates, we also continually evaluate our processes in order to improve equity and inclusion. More broadly, our core values are based on respect and an unwavering commitment to a diverse and inclusive workforce. Yet, year after year, we have also missed our mark on hiring a representative workforce that reflects our users around the globe. The struggle to improve our equitable outcomes persists despite the policies and programs in place to help support inclusion initiatives and promote excellence in hiring and progression. 
The failure point is not in the values, intentions, or investments of the company, but rather in the application of those policies at the implementation level. Old habits are hard to break. The users you might be used to designing for today — the ones you are used to getting feedback from — might not be representative of all the users you need to reach. We see this play out frequently across all kinds of products, from wearables that do not work for women’s bodies to video-conferencing software that does not work well for people with darker skin tones. So, what’s the way out? 1. Take a hard look in the mirror. At Google, we have the brand slogan, “Build For Everyone.” How can we build for everyone when we do not have a representative workforce or engagement model that centralizes community feedback first? We can’t. The truth is that we have at times very publicly failed to protect our most vulnerable users from racist, antisemitic, and homophobic content. 2. Don’t build for everyone. Build with everyone. We are not building for everyone yet. That work does not happen in a vacuum, and it certainly doesn’t happen when the technology is still not representative of the population as a whole. That said, we can’t pack up and go home. So how do we build for everyone? We build with our users. We need to engage our users across the spectrum of humanity and be intentional about putting the most vulnerable communities at the center of our design. They should not be an afterthought. 3. Design for the user who will have the most difficulty using your product. Building for those with additional challenges will make the product better for everyone. Another way of thinking about this is: don’t trade equity for short-term velocity. 4. Don’t assume equity; measure equity throughout your systems. Recognize that decision makers are also subject to bias and might be undereducated about the causes of inequity. 
You might not have the expertise to identify or measure the scope of an equity issue. Catering to a single userbase might mean disenfranchising another; these trade-offs can be difficult to spot and impossible to reverse. Partner with individuals or teams that are subject matter experts in diversity, equity, and inclusion. 5. Change is possible. The problems we’re facing with technology today, from surveillance to disinformation to online harassment, are genuinely overwhelming. We can’t solve these with the failed approaches of the past or with just the skills we already have. We need to change. Stay Curious, Push Forward The path to equity is long and complex. However, we can and should transition from simply building tools and services to growing our understanding of how the products we engineer impact humanity. Challenging our education, influencing our teams and managers, and doing more comprehensive user research are all ways to make progress. Although change is uncomfortable and the path to high performance can be painful, it is possible through collaboration and creativity. Lastly, as future exceptional engineers, we should focus first on the users most impacted by bias and discrimination. Together, we can work to accelerate progress by focusing on Continuous Improvement and owning our failures. Becoming an engineer is an involved and continual process. The goal is to make changes that push humanity forward without further disenfranchising the disadvantaged. As future exceptional engineers, we have faith that we can prevent future failures in the system. Conclusion Developing software, and developing a software organization, is a team effort. As a software organization scales, it must respond and adequately design for its user base, which in the interconnected world of computing today involves everyone, locally and around the world. 
More effort must be made to make both the development teams that design software and the products that they produce reflect the values of such a diverse and encompassing set of users. And, if an engineering organization wants to scale, it cannot ignore underrepresented groups; not only do such engineers from these groups augment the organization itself, they provide unique and necessary perspectives for the design and implementation of software that is truly useful to the world at large. Footnotes [1]: Google’s 2019 Diversity Report. [2]: @jackyalcine. 2015. “Google Photos, Y’all Fucked up. My Friend’s Not a Gorilla.” Twitter, June 29, 2015. https://twitter.com/jackyalcine/status/615329515909156865. [3]: Many reports in 2018–2019 pointed to a lack of diversity across tech. Some notables include the National Center for Women & Information Technology, and Diversity in Tech. [4]: Tom Simonite, “When It Comes to Gorillas, Google Photos Remains Blind,” Wired, January 11, 2018. [5]: Stephen Gaines and Sara Williams. “The Perpetual Lineup: Unregulated Police Face Recognition in America.” Center on Privacy & Technology at Georgetown Law, October 18, 2016.
https://medium.com/oreillymedia/engineering-for-equity-7e3aca2cdc38
["O'Reilly Media"]
2020-12-08 16:05:22.836000+00:00
['Equity', 'Software Engineering']
How Recommender Systems Work (Python code — a film recommender example)
Recommender Systems Nowadays we often hear the term “recommender systems”, mainly because companies use them for many different purposes: to increase sales (item suggestions while purchasing → Amazon: users who bought this also bought that), to give customers a better experience (film suggestions → Netflix), or in advertising, to target the right people based on similarities in preferences. Recommender systems are basically systems that can recommend things to people based on what everybody else did. Here is an example of film suggestion taken from an online course. I want to thank Frank Kane for his very useful course on Data Science and Machine Learning with Python; here is the course’s link in case you would like to go deeper into Data Science. We’ll build our example on the dataset provided in the course, because it’s not too big and this will keep the computation fast. In any case, there are a lot of resources online, such as the MovieLens database with 20M ratings, 465k tags, 27k movies and 138k users. How does a recommender system work? Recommender systems, as we said earlier, are useful for recommending items to users. There are two kinds of recommender systems: User-based: the model finds similarities between users. Item-based: the model finds similarities between items. There are pros and cons for both of them; here is an article if you want to read further on this topic. These systems are based on similarities, that is, on the calculation of the correlation between data — between users in the first case and between items in the second. The correlation is a numerical value between -1 and 1 that indicates how much two variables are related to each other. Correlation = 0 means no correlation, while > 0 is positive correlation and < 0 is negative correlation. 
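To make the correlation idea concrete, here is a minimal sketch in Python (the ratings are made-up numbers, not data from the course) showing how Pearson correlation separates similar tastes from opposite ones:

```python
import pandas as pd

# Hypothetical ratings that three users gave to the same five films
# (illustrative numbers only).
alice = pd.Series([5, 4, 1, 2, 5])
bob = pd.Series([4, 5, 2, 1, 4])
carol = pd.Series([1, 2, 5, 5, 1])

# Pearson correlation: close to +1 means similar taste,
# close to -1 means opposite taste.
print(alice.corr(bob))    # strongly positive: similar taste
print(alice.corr(carol))  # strongly negative: opposite taste
```

In a user-based system this number is exactly what you would use to decide whose ratings to borrow when recommending a film to alice.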
Here is a graphical visualisation of the correlation coefficient for two variables: several sets of (x, y) points, each shown with the Pearson correlation coefficient of x and y (source: Wikipedia). We can see that a correlation of 1 or -1 does not refer to the slope of the data, but only to how strongly the data are related to each other. There are different methods to calculate the correlation coefficient; one of them is the Pearson method: ρ(X, Y) = cov(X, Y) / (σX σY) (formula of the Pearson correlation, source: Wikipedia). That is, the correlation is the covariance of the two variables, X and Y, divided by the product of their standard deviations. There are also other methods, such as the scatter diagram, Spearman's rank correlation coefficient, or the method of least squares. In our model we'll use the item-based approach, both because a user-based system could be influenced by people's film tastes changing over time, and because having fewer films than users speeds up our calculations. Let's start by importing our dataset. Our starting point will be a merged dataset (let's look at just the first two rows with “.head()”): Dataset import. As we see, the dataset has 100k rows, which correspond to the ratings we have. The columns in the table are: movie_id, title, user_id and rating. Before calculating all the correlations and preparing our data, let's make a quick consideration: with this model we do not have a mathematical way to measure the accuracy of the model, but we can use common sense and intuition. One thing we can do is check whether our list contains films that we already know should be correlated, such as the Star Wars or Star Trek series. The idea is that if someone has watched an episode of a series and rated it highly, we would expect them to also like the other films in the series. So let's check how many Star Trek films we have in our dataset.
For this we can use a pandas function that lets us find a string of text in a column: List of films containing “Star Trek”. Before starting the correlation calculation, we need to have all the ratings for a film in columns; the rows will represent the users and the values in the table will be the ratings. For this we can use the pivot_table function of pandas, as below: Pivot with title as columns and user_id as rows. Once we have this new table, we can calculate the correlation of the Star Trek column with all the others, and for this we can use the corrwith function (X is a column of df, the pivot table calculated before; df is the pivot we calculated before; corr is the result of the corrwith calculation). We can clearly see that something went wrong with this result, considering that we expected to find the other Star Trek films. What's probably wrong is that we are considering all the films, even those that have just one rating, and this does not give the model consistency. Let's try filtering to films with a rating count > 100 and see what happens: corr result filtered with rating_count > 100. Now the result looks more realistic, as we see other episodes of the Star Trek series in the result. We could also run some other tests with other films, but let's consider the result good and implement it now on the whole dataset. Pandas makes this very easy for us, since we'll also use a shorter function than before :) corr, instead of corrwith. We'll use the argument min_periods=100, which will do the filtering work for us so we won't need to filter anymore; we can also specify which correlation method to use, and in this case we'll use the Pearson formula.
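The same pivot-and-correlate steps can be sketched on a toy DataFrame (the titles and ratings below are invented; the real dataset has 100k rows and would also be filtered to films with more than 100 ratings):

```python
import pandas as pd

# Tiny stand-in for the merged ratings DataFrame described above
ratings = pd.DataFrame({
    'user_id': [1, 1, 2, 2, 2, 3, 3],
    'title':   ['Star Trek', 'Star Wars', 'Star Trek', 'Star Wars',
                'Alien', 'Star Trek', 'Alien'],
    'rating':  [5, 4, 4, 3, 2, 5, 1],
})

# Find every title containing a given string of text
trek = ratings[ratings['title'].str.contains('Star Trek')]
print(trek['title'].unique())

# Pivot: one row per user, one column per film, ratings as values
movie_ratings = ratings.pivot_table(index='user_id',
                                    columns='title', values='rating')

# Correlate one film's ratings column against every other column
similar = movie_ratings.corrwith(movie_ratings['Star Trek'])
print(similar.dropna().sort_values(ascending=False))
```

On this toy data, Star Wars correlates positively with Star Trek (shared raters agree) and Alien correlates negatively.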
This is the result: Correlation Matrix. So we have calculated the correlation matrix for all the films, obtaining a 1664x1664 matrix where both columns and rows are films; the diagonal of the matrix is all 1s, because every film is perfectly correlated with itself, or NaN where the film was filtered out by the 100-rating threshold. Now that we have the correlation matrix comes the fun part, where we have to suggest to the user the films (the output of our system) that best match his previous preferences (which will be the input of our system). Starting from the correlation matrix, we'll consider all the columns corresponding to the films the user has already watched, and for each column we'll drop the NaN values. Once we have the values, we can multiply each value by the user's rating, treating the rating as a weight (the weighted correlation will no longer be between -1 and 1; films the user rated highly get boosted), and then we'll append all the values of all the columns considered into a Series, “user_corr”. On this Series we need to do a few other operations: group by title, summing the correlation values (because the same film can appear more than once), and drop all the films the user has already watched. Once we have the final Series, we can order the values in descending order (ascending=False) and suggest the first 5 films, or however many films we want. Let's see the steps in code, applying what we said for user 0. This is the list of films the user watched: Select the titles user 0 watched. We now create the list of all films with all correlations multiplied by the ratings (integers from 1 to 5).
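Putting the whole flow together, here is a compact sketch of the matrix build and the recommendation steps just described. It runs on a tiny synthetic pivot table, and the variable names (movie_ratings, my_ratings and so on) are my own placeholders rather than the course's code:

```python
import pandas as pd

# Tiny synthetic users-by-films pivot table standing in for the real
# one: rows are users, columns are films, values are ratings 1-5.
movie_ratings = pd.DataFrame(
    {'Star Trek':  [5, 4, 5, 2, 5],
     'Star Wars':  [5, 4, 4, 1, 5],
     'Casablanca': [1, 2, 1, 5, 2]},
    index=[1, 2, 3, 4, 5])

# Film-to-film correlation matrix; min_periods drops film pairs with
# too few common raters (100 in the article, 2 on this toy data).
corr_matrix = movie_ratings.corr(method='pearson', min_periods=2)

# The input to the recommender: the films this user already rated.
my_ratings = pd.Series({'Star Trek': 5, 'Casablanca': 1})

# Weight each watched film's correlation column by the user's rating.
candidates = pd.Series(dtype=float)
for title, rating in my_ratings.items():
    sims = corr_matrix[title].dropna() * rating
    candidates = pd.concat([candidates, sims])

# The same film can appear once per watched title: sum its scores,
# drop what was already watched, then rank in descending order.
candidates = candidates.groupby(level=0).sum()
candidates = candidates.drop(labels=my_ratings.index, errors='ignore')
recommendations = candidates.sort_values(ascending=False)
print(recommendations.head())  # Star Wars comes out on top
```

Note that pd.concat is used here instead of the older Series.append, which has been removed in recent pandas versions.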
Let's create the Series with all the correlations weighted. We do the groupby so that we don't have duplicate films, summing their scores: We are now grouping by title. We create a list of the films already seen (first checking whether they are in the Series of all correlations) and then we drop them: We now create a list of titles of watched films to drop (if contained in our Series), and then we drop them with the last line of code. Once we have the final ordered list, we can print the result to our user, hoping he will like the suggestions :) We can now print the output for the user, where we show the results. Here is the output for the films watched. We have seen an example of how we can suggest a list of films to an existing user, or to any other user, just by giving the system some input: the titles of the films watched and our ratings. The more data we have, the more consistent the system will be. We can also play with the system by changing parameters such as the 100-rating filter or the method used to calculate the correlation, or by weighting the impact of the ratings in a different way. As you can see, there isn't just one system; it's possible to try different options that give different solutions, and to keep finding ways to improve. Consider also that we can apply this method to all kinds of other data, for example to suggest to a customer which item he might like to buy. At this point I'm happy if you have made it to the end; follow me if you have found this interesting or useful in some way! Enjoy data science! And if you would like to support me in writing more articles like this, buy me a coffee :)
https://medium.com/coinmonks/how-recommender-systems-works-python-code-850a770a656b
['Luigi Bungaro']
2020-09-01 17:16:19.621000+00:00
['Machine Learning', 'Python', 'Correlation', 'Recommender Systems', 'Data Science']
How Retailers Can Benefit From The IoT
Retail organizations and app development companies have been working alongside one another for some time now. App development companies are instrumental when it comes to unlocking a retailer’s true potential. Now that the Internet of Things has become so prevalent, those who are looking to make a name for themselves are asking app development companies how they can benefit. The IoT is especially useful to clients who have hundreds or even thousands of locations to look after. With so many devices that are running simultaneously, it can be hard for clients to make sense of it all. App development companies are incredibly helpful at times like these. They have the experience necessary and they give their clientele the playbook that will lead to true success. There are certain challenges that face retailers who are looking to implement the Internet of Things. Some retailers may not have the necessary IT staff on hand to handle implementation. Others may not be able to allocate the proper portion of their budget. However, a retailer must solve these problems before it is too late if they are going to be able to remain truly competitive over the long haul. The latest technology can easily be incorporated into the infrastructure on hand, with the help of experienced app development companies. When new applications are created, information can be processed far more easily than ever before. Mission-critical data must be stored and processed in the proper manner if a business is going to fully realize the advantages that the IoT has to offer. To take full advantage, data is going to have to be processed locally. Cloud storage is being relied upon by more and more retailers but those who need access to real-time processing are not going to experience the same level of access. Corporate data centers are relied upon in these instances but they are the furthest thing from foolproof. 
These issues are compounded by the fact that retailers have their own day-to-day concerns to address. For example, there are few retailers that are able to withstand the problems that are associated with unwanted downtime. All it takes is one period of unexpected downtime to ruin a retailer’s credibility. Unfortunately, the credibility that a retailer strives to earn can be squandered in one fell swoop. While no retailer wants to place themselves in a position where their customers are steadily losing faith in them, the challenges listed above are easy enough to overcome. With the help of experienced app development companies, businesses have the chance to sidestep the usual issues. These companies can lean on their past experiences to offer the sort of insight that a retailer cannot receive elsewhere. The IoT offers virtualization techniques that are designed to bring retailers forward. If a retailer needs to run a wide range of servers at the same time, their ability to do so is often hindered. Physical servers are expensive and they consume a great deal of space. With the IoT, a retailer has the chance to save the time and space that is being squandered. Budgeting becomes much easier when a retailer does not have to spend the same amount on their servers. When the marketing department is looking to provide a retailer with access to a new idea, the retailer does not have to implement a new platform. The IoT also offers untold amounts of flexibility. Regulatory mandates and budgetary constraints often leave retailers feeling boxed in. The Internet of Things offers these companies the chance to step outside of their usual box. Future tasks and initiatives can be deployed without the typical drawbacks. Retailers benefit immensely from having the chance to conduct transactions outside of their data centers. The transactions are still conducted within close proximity of the data.
This benefits the retailer because the distance that the data has to travel is significantly reduced. Every device that is connected with the usage of the IoT is going to be affected. A retailer that is looking to remain relevant over the long term must prize simplicity. Luckily, this is one of the primary reasons for using the Internet of Things in the first place. The practical usages that are associated with the IoT are essentially endless. Let’s say that a business is looking to improve upon their current signage. Without the Internet of Things, the task would be left to their IT staffers. While the IT staff can handle this task with ease, there is more to it than simply buying the new signs and plugging them in. Anytime a decision like this is made, IT staffers must also procure new computers and servers. Companies that deploy the IoT do not have to worry about such things. All they need to do is create the new virtual server that is responsible for handling the task and the rest takes care of itself. Even the biggest retailers can benefit from these tactics. Updates can now be made at little to no cost at all. By taking the time to learn more about the benefits of the IoT and proper implementation, clients are able to reap the full advantages. A modern retailer that is not already looking into the Internet of Things is missing out on a whole new way to do business. Thanks to the IoT, a company can allocate their resources in a different way and avoid the issues that plague other retailers who are not forward thinkers.
https://medium.com/datadriveninvestor/how-retailers-can-benefit-from-the-iot-30843b1363b9
['Melissa Crooks']
2019-10-25 15:12:00.854000+00:00
['Retail Technology', 'Mobile App Development', 'IoT', 'Retail', 'Internet of Things']
About the System in Simple Words
The Bitbon System is a large-scale infrastructure project, which represents a decentralized platform for Contributing. https://www.bitbon.space/en/home
https://medium.com/bitbon/%D0%BE-c%D0%B8%D1%81%D1%82%D0%B5%D0%BC%D0%B5-%D0%BF%D1%80%D0%BE%D1%81%D1%82%D1%8B%D0%BC%D0%B8-%D1%81%D0%BB%D0%BE%D0%B2%D0%B0%D0%BC%D0%B8-e374b958c8da
['Bitbon System']
2018-08-08 14:44:29.755000+00:00
['Investment', 'Systems Thinking', 'Smart Contracts', 'Business Intelligence']
My Greatest Fear as a Writer
It’s a lifelong dream of mine to be able to write for my career someday, so I’ve been reading up on writing. I started with Jeff Goins’s Real Artists Don’t Starve. Then I moved on to Ryan Holiday’s Perennial Seller. Next on the list is Michael Hyatt’s Platform. As much as I’ve been enjoying it all (and learning a lot!), one very unsettling thing has surfaced for me: my greatest fear. My greatest fear as a writer is what George Orwell listed as one of his reasons for writing: sheer egoism (from his essay entitled “Why I Write”). Amidst all of the self-promotion, self-affirmation, and self-study that comes with being an independent writer these days, what if it all becomes about self? I write because it’s my passion and calling, but my ego is always looking over my shoulder with a coy smile, always whispering, “You know, this is really all about you.” To be clear, none of the books I’ve read about writing advocate for egoism in any sense. And I should say I’ve been profoundly helped and encouraged by what I’ve read. But the threat of egoism remains. I’m always questioning my motives. “Did I write this for my readers, or did I write it so that my readers would praise me? Is it more important that this essay or article or book gets out there and helps someone, or that I build my self-image as a result?” My head offers answers that are at odds with my heart. Pride and egoism, you see, cannot be written off so easily for writers. I’ve read that self-promotion as a writer is really only selfish and egotistical if the product you’re trying to promote is not something that you truly believe in. In other words, if you honestly believe that your writing is going to help people, then it would be selfish not to promote it and not to market your work. While I think there is truth here, I’m also tormented by the deceit that’s intertwined with our pride. 
It’s quite easy for us to deceive ourselves into thinking that what we have written is so important that people all over the globe simply must read it. (Are we really that brilliant?) John Owen once wrote that deceit is where sin begins. “Sin proceeds only when deception goes before it” (Owen, 1983, p. 36). And what is egoism if not sin? My greatest fear as a writer is that egoism will creep into my heart under the guise of service to readers. Sure, I can say that I’m trying to serve readers, but is that really my motive? Is that why I’m writing? Why Do You Write . . . Really? The issue comes back, again and again, to motive. Why do you write? Why are you writing? Why will you write? Authors can have very different answers to these questions. Below, for the sake of comparison, is a table contrasting the answers to that question for an atheist (George Orwell) and for a contemporary Christian (John Piper). I’ve taken these from Orwell’s essay, “Why I Write,” and from John Piper’s essay, “Is There Christian Eloquence?” There is certainly overlap between these two authors, but note how stark the contrast is! Clearly, we can have many different reasons for writing, but beneath all of them, for me, is one of three things: genuine concern for readers; genuine concern for self (egoism); some mixture of the two. When I inspect my heart, I find that the third option is usually the most accurate (if not the second). Now, that’s not to say that we should never write unless we have pure motives. If that were the case, I don’t think people could write anything! Our motives are seldom, if ever, pure. God uses writers, I believe, in the same way that he uses preachers: the person carrying the message is flawed, stubborn, and perverted. But God uses us where we are, developing our hearts and minds in the process. As writers, we must constantly ask ourselves why we are writing, even when we know that the answer might be the same as Orwell’s at times: sheer egoism. 
But that doesn’t mean we just accept it as an inevitable part of being a writer (as I’ve seen many authors do). I believe it means we express egoism so that we can kill it. The next question, then, is how do you kill your ego? I’ll be writing about that in another article. Let me return to the beginning. Can I avoid my greatest fear as a writer? I don’t think so. I think egoism is always lurking in the shadows of our conscience. I hate it, and I can wish it away, but it keeps coming back. That’s why, as much as I’m excited to dream about writing full-time someday, I’m also terrified. A life lived for self is a pitiful thing. Egoism, for me, is one of the most embarrassing human behaviors — not simply because it’s ugly, but because it’s ugly and false. None of us gets where we want to go by our own efforts, no matter what the rest of the world says. And if I look back on my life and see that I’ve suggested we can pull ourselves up by our proverbial bootstraps, I will be so . . . embarrassed. That, my friends, is my greatest fear as a writer. Sources Owen, John. Sin & Temptation. Classics of Faith and Devotion. Edited by James M. Houston. Portland, OR: Multnomah, 1983. Orwell, George. Essays. New York: Everyman’s Library, 1996. Piper, John. “Is There Christian Eloquence?” In The Power of Words and the Wonder of God, edited by John Piper and Justin Taylor, 67–80. Wheaton, IL: Crossway, 2009. For more articles on theology, language, and life, and to receive the author’s free ebook, In Divine Company: Growing Closer to the God Who Speaks, visit and subscribe to wordsfortheologians.org.
https://pthibbs.medium.com/my-greatest-fear-as-a-writer-980a26491b68
['Pierce Taylor Hibbs']
2018-09-06 19:12:44.688000+00:00
['Publishing', 'Christianity', 'Writing']
Finding Impactful Engineering — A Case Study of my Summer at Strava
Introduction: My Strava Story Hey there! I’m Daniel, a web engineer intern on Strava’s growth team. I’m an incoming senior Computer Science major at UC Berkeley and a Bay Area native. My own Strava story started a few summers ago back in high school. It was back in 10th grade when I bought my first road bike with no background in endurance sports, and first started using Strava to track my rides. I think many of us have a Strava segment that we care about more than we’d like to admit to. For me, it was one of our local Peninsula Bay Area hill climbs, King’s Mountain. In the summer of 11th grade, I rode the hell out of this hillclimb. I remember comparing all my segment times up the climb after each ride, looking through my pacing and power profile for the segment each time. I loved shaving time off my segment PR’s and I used Strava as an extension of my ambitions to drop time on my segments. Preface At the beginning of the summer, I told my manager, Will, that one of the long-term goals I had for the summer was to make sure that I’m not a close-minded engineer. As an undergrad student, I still have an entire professional career ahead of me in my life, and I want to enter my career going in with the right perspective and the right mindset so that I won’t hinder myself by having the wrong approach. I felt that it was important to both understand and be conscious about how the rest of a company works, the different job roles, and how software engineers fit into the picture. In turn, understanding this would let me perform my job as a developer more effectively to the benefit of my team, the product and our company. At my internship, I was designated to be a front end web intern, but in this post I’ll explain how my internship extended much further than our front end web stack. I’ll be reviewing some of the most important things I’ve learned and worked on this summer in the rest of this post. 
The technical portion of this post won’t focus on implementation details or go into depth about the architecture and design of Strava’s tech stack; instead it aims to shine some light on the bigger picture that my work contributed to, focusing on the ‘why’ and the impact of my work. I’ll then review some of the meetings that Will set up for me after the conversation I described earlier. These meetings were discussions I had with non-engineers at Strava, and they’re why I’ve come to see that the bigger picture matters even for engineers. Technical Frontend web development Much of my front end work this summer involved digging a few levels deeper than just the frontend portion of the codebase. Oftentimes, I’d need to hook up parts of a new web page or feature to Rails models, controller methods, and more. Working through other parts of the web stack, beyond just writing templates and index.js files all the time, helped me better understand Strava’s web architecture, study the design patterns within our codebase and learn how different pieces of code interact with each other to serve the pages on our site. At Strava Jams, our internal hackathon event, my project first started as a way to embed a route in an iframe, so that we could use it in our Strava Blogs and in 3rd-party sites like race signup sites, personal blogs and articles. One issue I had with this was that I’d be creating a new endpoint, but it would be using an older version of our Mapbox map and elevation chart. I felt that it wouldn’t be right to create something new just so that it would use old technology. Instead, I thought, it would be more useful to our team if I refactored the page as a React app before importing specific components to create an endpoint for embedding.
This refactoring was more in line with the direction that our front end teams are moving towards, and lays the foundation for future use of these shared React components for different pages and apps. What I ended up with after my Jams project was a React app that includes updated components that can be imported into an embed endpoint. This app provides the groundwork for future React development, which would provide value for our team even if my entire hackathon project wasn’t shipped to production. An updated Route view page created in a React app SEO Search Engine Optimization is a subtle art. The concept of it is simple, but in practice it involves a lot of fine-tuning and is hard to nail down for this reason. The usefulness of implementing foundational SEO features for your site is that it provides huge gains in visibility and traffic for a relatively small amount of work. One of our current goals is to expand the SEO features included in Strava Local, a site within Strava where people can find curated running and cycling routes in popular areas with interactive maps. Strava Local is a good target for SEO because it covers an important area of search traffic, and between all the different countries, cities, and routes, it offers many opportunities for adding SEO features. When I was developing a new view for Strava Local, one of my conscious efforts was to keep in mind that I was including the right SEO strategies, and to create my webpage with purpose. Adding keywords, internal linking, alt tags for images, and relevant content to the page, so that you improve the quality of your page content for crawlers, is key. One of the more important SEO features to add is breadcrumbs. For example, if we were presenting a page about cycling routes in London, the breadcrumb preview on a Google search result with our link would look like “United Kingdom > London > Cycling” to help give both Google and web users context for web pages.
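Such a breadcrumb trail is typically exposed to search engines as schema.org BreadcrumbList structured data (JSON-LD). The sketch below is purely illustrative; the helper name and example.com URLs are my own placeholders, not Strava's actual markup:

```python
import json

# Build schema.org BreadcrumbList structured data (JSON-LD) from
# (name, url) pairs; search engines read this to render breadcrumbs.
def breadcrumb_jsonld(crumbs):
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {"@type": "ListItem", "position": i, "name": name, "item": url}
            for i, (name, url) in enumerate(crumbs, start=1)
        ],
    })

# Hypothetical "United Kingdom > London > Cycling" trail
markup = breadcrumb_jsonld([
    ("United Kingdom", "https://example.com/routes/united-kingdom"),
    ("London", "https://example.com/routes/united-kingdom/london"),
    ("Cycling", "https://example.com/routes/united-kingdom/london/cycling"),
])
print(markup)
```

The resulting JSON would be embedded in the page inside a script tag of type application/ld+json.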
Breadcrumbs are an important feature because they provide extra context for your web page on a screen full of search results, helping your site stand out, become more engaging to the user, and reassure users that they’re clicking on the right content. Example of breadcrumbs. A website filled with awesome image and video content can still go unnoticed in the eyes of Google. No matter how good a website you’ve built, it’s hard to rank high in search results if you don’t consciously keep SEO in mind during your development efforts. SEO is critical to gaining exposure for your site, and is one of the most important ways a developer can promote their site online. Thanks to the SEO features that I’ve implemented in Strava Local, we increased traffic to Strava Local by 20%, and increased the overall CTR to Strava from Google search results by 1%. Parallelization in Cypress Testing Our front end QA test suite is handled in Cypress, and it’s a critical point of our web workflow. This test suite, however, is prone to flakes, where tests that should pass will fail. When a test flakes, we have to rerun that worker’s test suite, which could turn a 10-minute testing process into a 20- or even 30-minute one. This creates a large bottleneck for engineers since all deploys in our web development need to run through this suite before changes can be deployed to production. One of my Guild Week projects was to identify problem points within our test suite and to speed up the testing process. We began by recording tests within our suite that would cause frequent flakes, so that we could prioritize the most unreliable tests to fix first. To greatly improve the speed of the test suite, I tripled the number of workers running it. Because parallelism is limited by the law of diminishing returns, we want to find a balance between achieving good speedup and being reasonable in resource use.
After my changes, the test suite runs in around half the time that it used to: five minutes instead of over ten. This fix provides important value to our web team since it significantly improves a portion of our daily engineering workflow. Non-technical Like I touched on earlier, my manager set up various meetings for me with product managers and managers throughout my internship. These meetings gave me context for how software engineers fit into the structure of a company, both at Strava specifically and at any company, from the perspective of managers. In the following section, I’ll review some of the questions I had and some of the other things I learned during my conversations. What’s one of the crucial jobs that PMs do, and why do they do it; what does it provide to the team? After talking to our resident product managers on the Growth team, what I’ve learned is that much of a PM’s day-to-day job is interfacing with other teams so the rest of their team doesn’t have to. They help progress the current team goals by facilitating the rest of their team. They attend so many meetings precisely so that they can do all the interfacing for the rest of the team, so that their engineers, designers, and everyone else don’t have to be at those meetings and can work on the actual implementation and nitty-gritty details. Of course, their jobs are much more involved than just going to meetings, but this alone is a huge help to the rest of the PM’s team. Do engineers need to be conscious of product direction, vision, and the reasoning behind building features? Coming into the meeting I had with Jason, one of the engineering managers in Growth, I truly thought that engineers didn’t have to be aware of this.
My thinking was that if both product managers and engineering managers fulfill their roles completely, an engineer will always receive the right work to work on with respect to the team’s goals and the current product roadmap, and that as long as these developers complete the tasks delegated to them, they can be just as good as an engineer who thinks carefully about the kinds of tasks they take on. The answer I got from Jason, however, was a completely different one. He told me that engineers absolutely need to think about the product in their daily engineering work. For example, my mentor is responsible for much of the front end of the website. As such, he has lots of areas of ownership throughout the site, and people are always approaching him with ideas, bug fixes, and features to implement or build on. His job is to weigh the importance of all the tasks presented to him against the work already delegated to him from sprints and more. Whether he explicitly expresses it or not, he, as an engineer, is constantly thinking about the vision of the product, our current goals, and the impact of the proposed work. Developers are almost always faced with a backlog of features and requests, and I was no exception this summer. It’s up to me to decide what’s most important to work on, and we as engineers need to use a product-minded mentality when prioritizing the work that we’re given. Constantly context switching whenever we’re given a new task is both confusing and ineffective for producing meaningful work. I certainly won’t be a senior engineering manager anytime soon, but talking to your managers about managing your team, from both their perspective and your own, is important. It’s important both to understand the perspective of the managers you work with and to know where you fit in the picture of the team and product.
Team members should be on the same page as their leaders, as it allows a team to align and work as one cohesive entity, without any one member straying from the direction the team is moving towards. Conclusion When software developers label themselves as code monkeys, they’re both selling themselves short and implying a lack of responsibility and leadership in driving a team and product towards their goal, when in fact engineers have a foundational role in progressing towards a team’s goals. What I’ve learned over the summer is that this doesn’t mean I constantly add features wherever features can be added. We need to make sure that the result of our work is impactful. We should have concrete reasoning backing up why we are adding a feature, and what it will do for us. In addition to adding features to existing pages, I added new web pages that include relevant, quality content; this helps with the search rankings of our site, as I described in the SEO section above. Being an effective developer means working on tasks that provide value and impact, and being conscious of how much value each of your tasks provides, the difficulty of each task, and its urgency. Balancing all the work that’s been given to you, and prioritizing it, will make you the most effective engineer you can be, regardless of whether you’re an experienced or novice developer. I also could have made it through my whole internship without sitting down with any of our PMs or managers. At the same time, I think my time at Strava would have been so much less meaningful if I hadn’t taken this opportunity to take advice and learn from the rest of our team. An engineering internship should be about a lot more than just building out features. Working at a company is a huge opportunity for an undergrad student to get a glimpse of how companies made up of many moving pieces operate.
With all the help I got from my manager with setting up meetings and checking in on what I’ve learned from talking to different engineers and managers, I was able to leverage this opportunity to be more thoughtful about my work and its impact in my day-to-day development. These types of lessons can’t be found in a lecture hall at university, or in a course textbook, but rather need to be experienced or realized through conversation and spending time working at a company. Working on this type of professional maturity is something that I’ve been eager to do, and I’m grateful that my interest in this idea was so welcomed by my manager and everyone I talked to at Strava who set aside time in their day to meet with me. When an engineer understands their purpose at a company, they become more effective developers, find both personal and professional growth more quickly, and ultimately become more valuable assets to their company.
https://medium.com/strava-engineering/impactful-engineering-a-case-study-through-my-summer-at-strava-65a9aab6fa8a
['Daniel Ho']
2019-11-01 16:01:02.171000+00:00
['Growth', 'Internships', 'Startup', 'Rails', 'SEO']
Q&A with 2014 Summer Pinterns: Lucas and Nicole
As part of the Q&A with Pinterns series, Pinterest interns share their experiences working on projects and features with our engineers. Here, 2014 Summer Pinterns Lucas and Nicole talk about what they’ve learned and built over the past few months. What did you focus on this summer? Nicole: As an intern on the Growth team I worked on acquisition and activation; my projects focused on increasing signups as well as app installs. I created an app install banner on logout, implemented a full redesign of the pinterest.com unauthenticated landing page, and worked on a more streamlined signup flow throughout the rest of the unauth pages. Lucas: During my internship at Pinterest, my work was mostly focused on our international growth. My first project was to create a brand-new admin system for Pinterest’s community managers to curate and contact some of our influential Pinners (Pinfluencers). This effort allowed our team to easily whitelist Pinners from different countries, and I could then use those lists, with curated Pinfluencer metadata, to algorithmically recommend local Pinfluencer accounts to Pinners with similar interests. Those Pinfluencers are now featured in three areas of the website: NUX, category pages, and a homefeed carousel. The carousel was one of my favorite projects, since I got to work on it all the way from selecting the Pinfluencers on our backend to implementing the design and making it responsive. Describe the team you worked with. What is the culture like at Pinterest? Nicole: I worked on the Growth team, where I met the most incredible people this summer. It was a diverse team: everyone brought something really unique to the table, and as a result I learned even more. My mentor was especially motivating and supportive, giving me the resources to get my work done as well as encouraging me to take more ownership of my projects and come up with new ideas. 
I’ve never felt more comfortable and welcomed in a workplace environment than I did at Pinterest; the culture is very collaborative. Everyone I talked to, both on and off my team, was always willing to stop what they were doing to answer a question. Additionally, everyone is authentic and cares so much about the product. Lucas: This summer I worked concurrently on two teams: Growth and International. I had the privilege of working with talented and passionate people who recognize each other’s skills and use their unique expertise to work together (knit) and make Pinterest a better product every day. Besides getting exceptional advice from my mentor, project manager and teammates, I also got help and learned a lot from people on other teams such as Web, Interests, Writing, Design, Recruiting, and others. Pinterest has this amazing culture of knitting to solve challenges as well and as fast as possible, and this habit not only made me a better and faster developer, it also taught me new skills and, more than anything, helped me make many new friends. Pinterest feels like family, and people know and care about each other. One of my favorite things about Pinterest is how crafty and creative our team is and how much support you get from Pinterest when you have a DIY craving. We have DIY stations in the office and display our crafts all around. This summer, for instance, some Pinterest friends and I decided to make a Pinterest logo out of Rubik’s cubes. Three days later, our Workplace team had already ordered and delivered 60 Rubik’s cubes to us, and we now display the logo in our office. What kind of impact did you have? Nicole: One of the best parts about being on the Growth team is that I can see the direct impact of all of my work on the company’s revenue, which is unique as an intern. I saw all the numbers and know how many people downloaded the app because of my upsell, and how many additional signups we got from the redesign of the signup flow. 
As much of an impact as I had on Pinterest, without a doubt Pinterest had even more of an impact on me. I was so lucky to work with the people I worked with and learned more than I ever thought I would. Lucas: It was really validating to see how much impact my work at Pinterest had on other people’s lives. By creating an admin system for Pinfluencer management, for instance, I enabled our community managers to work several times faster and whitelist thousands of Pinfluencers throughout the summer. By working closely with them, I was also able to identify opportunities to improve my work and develop or re-tailor features to optimize their workflow. On the Pinners’ side, I was able to run several A/B experiments and try different features and revamps to our product in order to improve the Pinner experience. Those experiments were never limited to my main project; my mentor and project manager always gave me the freedom to work on my own ideas for Pinterest and take ownership of my projects. What surprised you about this internship? Nicole: Definitely the biggest surprise for me was how much responsibility I had and how much I was able to grow as a person throughout the internship. I came in expecting to do a lot of cool work, but that was pretty much it. I had no idea that I would be so challenged, not only from an engineering perspective but also to take complete ownership of my work and to push myself to accomplish more during my 12 weeks. Lucas: I was really surprised and delighted to see how much code I developed and how much I learned during my internship. Pinterest challenged me to think critically about problems and develop scalable and maintainable solutions to several projects that I could take ownership of and develop from the first design to the final product. What advice do you have for fellow CS students? Nicole: Work at Pinterest!! 
Besides that, though, when looking for internships I would definitely say to find a place where you can make an impact and work in a challenging environment. In school, don’t ever think that something is too difficult, because that becomes self-fulfilling. The most important thing is believing in what you can accomplish and then working hard to get there. Don’t be afraid to fail, either; it’s one of the best ways to learn. Lucas: When looking for a company to work at, look for a place where you will be challenged to work on interesting and creative problems and that has a culture that makes you happy. I found that at Pinterest, and if you think Pinterest could be that for you too, be sure to apply!
https://medium.com/pinterest-engineering/q-amp-a-with-2014-summer-pinterns-lucas-and-nicole-ddce9b089e47
['Pinterest Engineering']
2017-02-15 22:47:43.674000+00:00
['Startup', 'Internships']
How to Start Writing For an Audience (And Not Yourself)
When you first hear about starting a blog, you become fascinated with the fact that you can potentially make money writing about your life. While this is a dream job for some writers, it can be a misleading idea. People are constantly worried about their lives and how they can live better ones. What many beginner bloggers (including myself) fail to realize is that not all of our blog posts can be about us. We have to find the bridge between writing for other people and sharing some of our personal experiences. It can be easy to write about yourself because you are the one living your life. No one has a more realistic view of your life than you do. There is always the fantasy of writing about ourselves and receiving great feedback. It can come true, but it is not realistic for most beginner writers. You may think that your personal stories need to be heard by millions of people, but truth be told, they don’t. An experience might spark special feelings for you, yet those same feelings may not spark the interest of the reader. If anything, it might steer them away from your writing. Readers don’t like to read someone who is egotistical in their writing. I will admit that I have some stories of my own that are special to me, but I’m trying my best to save those stories for times when I can teach the reader something through them. As a writer, your audience should matter a lot to you. It’s OK to have your personal reasons for writing, but how much more fulfilling would your writing be if it helped somebody else out from time to time? When you write for your audience, you build a significant relationship with them that will also help them stick around for your next story. Even with my early experience with writing, I have figured out a few ways to start writing more for my audience. These tips should help you stay away from making your blog posts a personal diary. 
“Stop running side races and wonder why the finish line doesn’t have anyone at the end of it clapping for you.” ― Richie Norton Minimize How Many Times You Use “I” Statements When engaging with the infamous “I” statements, try to use them only a few times throughout your article. If you’ve ever read a self-improvement book, you might have noticed how little the author uses these personal statements. The author tries their best to talk to you and focus on how they can make your life better, not theirs. People are looking for life advice 24/7, and they are naturally drawn to blog posts that guide them through their problems. That won’t happen if they are consuming information that doesn’t involve them. Readers will feel considered after reading an article that was made for them. If you are going to use “I” statements in your article, use them when trying to teach a lesson. This should be the only time your experience comes into play in your writing, and it will come to the reader as a pleasant surprise. When this method is used correctly, readers will gain more from your experience than you would have expected. If the reader is looking at a well-organized article directed toward them, they will be more open to hearing what you have to say about your thoughts and actions. A great way to let your readers know that you are thinking of them is by using the words “you” and “we” a lot. As a writer, when you are talking to the reader as if you were their mentor, you will feel more in control of your writing. Your readers will have someone to look up to and will have more respect for you as they read. If you don’t want to come off as too demanding, then use “we” at times. This lets readers know that they are not alone in their journey to self-improvement. Make these simple changes in your writing and watch how much your audience grows.
https://medium.com/swlh/how-to-start-writing-for-an-audience-and-not-yourself-466d5e012e34
['Brandon Bell']
2020-06-19 13:04:59.890000+00:00
['How To', 'Writing Tips', 'Writing Life', 'Blogging', 'Writing']
Need for high margin cryptonative service.
Bitcoin, with PoW and Nakamoto consensus, is the first censorship-resistant form of money. Yet money is useful only as far as you can spend it on something useful. Most of the useful things in the world are currently sold only for fiat money. Fiat money is controlled by states that have exactly zero, or even negative, interest in promoting censorship-resistant money. Thus state agencies are intrinsically motivated to perform Choke Point operations on any on- and off-ramps between fiat and crypto. See https://en.wikipedia.org/wiki/Operation_Choke_Point for details. For the same reason, state agencies are motivated to spread FUD about crypto, increase exchange-rate volatility, and introduce naked speculation instruments (e.g. cash-settled ETFs). This cannot be solved by mere promotion of cryptocurrencies. Someone has to provide a high-demand, scalable product or service. Yet most businesses and business models require significant input in the form of resources and/or services, which brings the on/off-ramp choke-point vulnerability back again. Thus we need online-native services sold only for crypto. But low-margin online-native services (e.g. hosting) won’t be enough, since any possible savings due to crypto can easily be wiped out by the increased complexity of getting into crypto. This plagues most ICO products and services: they can’t offer a service with a margin high enough to do a tenfold overkill of the problems of getting into crypto. Thus, to convert people to censorship-resistant money, we have to offer a scalable, high-margin, popular, and defensible service that is available only for crypto. “Software is eating the world,” Marc Andreessen said. But outsourced software development services are usually considered low-margin, high-body-count businesses. Only by replacing or augmenting humans could such a service be built in a scalable way. For the last 2 years, I’ve been working on the DevNull.AI project to achieve this goal in practice. 
This post marks the end of the stealth period and announces the availability of a clear roadmap to an AI that develops software and would assist, and eventually replace, junior developers on teams. To achieve censorship resistance, it would run on Pandora Boxchain with some custom juice. Originally this was a tweetstorm at https://twitter.com/akhavr/status/1031448291039477760
https://medium.com/devnull-ai/need-for-high-margin-cryptonative-service-fc9a5d42a1e6
[]
2018-08-22 17:07:16.460000+00:00
['AI', 'Bitcoin', 'Software Development', 'Cryptocurrency']
Where Does Hope Even Come From?
When it isn’t trying to kill me, my brain has this peculiar habit of handing over to curiosity and letting it wander at will. Hope is always unexpected and manifests in the most defiant ways, in situations that would normally preclude it. Hope is always unexpected and flies in the face of evidence seeking to undermine it, too, because it is generous and benevolent by nature. Hope wants to believe that the human animal is good and capable of so much more than we give ourselves credit for, especially against all odds. Hope transcends the possible and this is the reason why we love hero narratives so much and why it is the measure of our proudest achievements. Hope is a puppy, loyal, loving, and a paragon of indefatigable enthusiasm that appears out of nowhere the minute you were about to give up on it. You thought hope had run away but it found the way home and brought your heart back, too.
https://asingularstory.medium.com/where-does-hope-even-come-from-1ca19095e0c9
['A Singular Story']
2020-05-15 10:18:10.062000+00:00
['Life Lessons', 'Mental Health', 'Self', 'Culture', 'Philosophy']
Using the Command Line to Install Packages from GitHub
Installing a Package from GitHub Now we are ready to install packages directly from GitHub. In this example, we are going to install the MetaFlow package from Netflix. Here is the package description: Metaflow helps you design your workflow, run it at scale, and deploy it to production. It versions and tracks all your experiments and data automatically. It allows you to inspect results easily in notebooks. Pretty cool ;) So first we grab the URL of the repository. In this example, the URL for the MetaFlow project is https://github.com/Netflix/metaflow. Then we prefix it with git+ and append .git, like so: git+https://github.com/Netflix/metaflow.git (the older git+git:// form no longer works, since GitHub has disabled the unauthenticated git:// protocol). To install MetaFlow, run the following pip command: pip install git+https://github.com/Netflix/metaflow.git Figure 6 — Installing MetaFlow directly from GitHub. That’s it! To test it out, open Spyder by running the following command in the terminal: spyder And import Metaflow with: import metaflow If you get no ModuleNotFoundError, then you are good to go.
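The URL transformation described above can be sketched in a few lines of shell. This is a minimal sketch: the Netflix/metaflow repository is the article’s example, and the actual `pip install` is left commented out because it needs git and network access (note that pip’s VCS syntax is `git+<clone URL>`, and GitHub clone URLs end in `.git`):

```shell
# Build a pip-installable spec from a GitHub repository URL.
# Start from the repository's web URL (the article's example repo).
repo_url="https://github.com/Netflix/metaflow"

# pip accepts "git+<clone URL>"; GitHub clone URLs end in .git.
pip_spec="git+${repo_url}.git"
echo "${pip_spec}"   # git+https://github.com/Netflix/metaflow.git

# To actually install (requires git and network access):
#   pip install "${pip_spec}"
```

The same pattern works for any public GitHub repository: swap in a different `repo_url` and pip will clone and install the package’s default branch.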
https://medium.com/i-want-to-be-the-very-best/installing-packages-from-github-with-conda-commands-ebf10de396f4
['Frank Ceballos']
2020-01-02 23:51:05.818000+00:00
['Spyder', 'Github', 'Anaconda', 'Python', 'Pip']
Living Autopsy
Living Autopsy Photo by Ivan Babydov from Pexels I will cut you deep, no, deeper, much deeper than your jagged rancor gashed my naivety, deeper than your sensitive nerve endings, deeper than the tentative ending served by you, deeper to sever where marrow serves up new bad blood. As for the old bad blood, well I’ve bled all that away onto ancient battlegrounds, nourishing worms, beetles, wildflowers with rusted, self-serving archaic thoughts like you are the fucking worst. Do not misunderstand; you are the fucking worst, but I no longer seek to hurt you as you did me, but I will cut you deep enough to make your ancestors cry out, belying the nature of your suffering. Tit-for-tat is your lullaby, but I will hum my own tune for your living autopsy; I know better now, and to know you better requires more than butchery, it requires steady hands flayed layers and deeper cuts. Slashing through malice using sharper malice, like bone-on-bone only dulls the blade. I am far more interested in exploratory surgery. The blade I wield forged in empathy rather than animosity will labor carefully through each incision. Until I find the real you. I know I cannot hope to find and rescue the child you keep locked inside, But if maybe together we get a glimpse of them, you may find the courage to save them yourself someday. Now lie still, for this may cause severe and sustained discomfort. Neither of us will enjoy this gruesome process, but observe the pain and perhaps together, but most likely separately we might find out for ourselves where all that hurt comes from.
https://medium.com/the-rebel-poets-society/living-autopsy-506f14259e7d
['Barry Dawson Iv']
2020-11-18 23:09:39.563000+00:00
['Inner Peace', 'Storytelling', 'Pain', 'A Rebels Prompt', 'The Rebel Poets Society']
Trust in News Sources: The Great Political Divide
Americans may be more polarized than ever when it comes to news about politics. While that may not be a shocker, we now have hard data to show just how it plays out and the differences between party affiliation and trust in news organizations. The research from Pew Research Center tested 30 news organizations as part of the study and then sorted results by political affiliation. Who do you trust? “…evidence suggests that partisan polarization in the use and trust of media sources has widened in the past five years. A comparison to a similar study by the Center of web-using U.S. adults in 2014 finds that Republicans have grown increasingly alienated from most of the more established sources, while Democrats’ confidence in them remains stable, and in some cases, has strengthened.” — Pew Research Center You can read the full report here.
https://medium.com/digital-vault/trust-in-news-sources-the-great-political-divide-298f05be7449
['Paul Dughi']
2020-02-25 16:02:31.267000+00:00
['Trends', 'Politics', 'Trust', 'News', 'Popular']
The Silent Voice
The silent voice of our heart, the conscious ocean inside all life and all experience of life, the invisible and inexpressible bringing everything visible and all expression into the world, is a cosmic consciousness throughout the Universe, a soundless resonance inside all space everywhere, it breathes the breath of the invisible into our heart, and weaves us into the fabric of a conscious Universe, joining us into each other. It’s the synchronistic intelligence that orchestrates the Universe, the silent voice of intuition.
https://medium.com/spiritual-tree/the-silent-voice-9fd5ac0c9ea2
['Paul Mulliner']
2020-12-27 09:37:12.809000+00:00
['Poetry', 'Spirituality', 'Self-awareness', 'Yoga', 'Mindfulness']
An Honest Review of “The Game Changers”
“Have you noticed anything change physically since you went vegan? I’m realizing we never really talked about it, not fully. It’s six months ago now.” “Well, like they were talking about in the movie, my recovery time got shorter, definitely. I also noticed I was building muscle more easily, like they talked about too. I had more energy to go the gym. I’d go work out and drive home, and then feel like I could go again, which was different.” I nod, chowing down after Game Changers on a BimBap bowl (sauces on the side, please) at Luanne’s Wild Ginger in Fort Greene. Gael eats scallion pancakes and Udon noodles doused in all the delicious oils. “Did the IBS stuff go away? I assume it did since you haven’t mentioned it.” “Ya, pretty much a couple weeks after I went vegan. They called me for a follow-up referral and I was like, ‘Oh, I don’t need that anymore.’” “Were you worried when you switched that you’d lose weight?” I ask, munching on some broccoli. “Definitely. But I seem to stay at more or less the same weight no matter what I do or eat.” “Must be nice.” “Mostly the difference is also emotional, feeling good about what I’m doing. I noticed maybe my relationship with the cats in the house changed, too. Like they were more comfortable around me — but I think that’s more about my changing how I relate to them. Like I’m letting them be themselves and not trying to be like, I want to pet you and you better like it.” “Ya, the same thing happened to me. You start respecting animals’ bodies and their consent more.” “Oh, by the way — I saw you laughed in the movie at what they said gladiators used to be called. I wanted to ask, what was it?” Gael is fluent in English, but sometimes he’ll still miss a phrase. 
“They said their nickname translated to ‘Bean and Barley Munchers,’ since they were mostly vegetarians.” “Ah, OK.” “And then I laughed because I thought about how you and I are both bean and barley munchers and carpet munchers.” I love his humble dimples, and my drink is making me feel armchair-philosophical. “I mean, that’s the thing with this movie. Like, on the one hand, I’m glad they’re telling people gladiators were vegetarian, but on the other hand — ” “People being made to be gladiators was terrible and violent and they were often slaves.” “Exactly! And I felt that sort of conflict the whole way through. Like, I want us to eliminate the most suffering possible, and this is an emergency for the animals and the planet, and so a part of me is like, whatever gets people there is great. And this movie is sleeker than any vegan doc I’ve seen. So I’m really, really glad it exists.” “But then the other part of you?” “The other part is wondering, what other oppressive systems are we reinforcing here — showing ‘the world’s strongest man’ throw a car over his head as proof you can still ‘be a man’ and be vegan? Like, what if that car could have helped someone!? And why does being a man equal being strong and in some way aggressive — or sexual? Like the scene proving men have more frequent and harder erections after eating just one vegan meal — what did you think of that?” “I thought it was funny.” “Totally, but then I wondered how someone who has erectile dysfunction would feel when that doctor said what makes a real ‘manly-man’ is sexual virility and fertility.” Gael shrugs in that way someone who’s resigned to a certain expectation about their gender shrugs. “Like, I enjoyed seeing James Wilks realize he’s fitter than ever after going vegan — but not so much watching him train the US military to fight on a plant-based diet.” “Or what about the Miami Dolphins team they showed that made it to the playoffs for the first time in years after they went plant-based? 
That was cool, but also it’s really easy for them because one of the players’ wives is a chef who cooks vegan meals for them every day,” Gael adds to our list of vegan nitpicking. “Exactly! I kept thinking, what about the people who don’t have a personal chef/gorgeous and cool wife, or a background in vegan nutrition, let alone access to fresh produce? How will those people go vegan? Not that one film can get at all these things in time, but it felt like class wasn’t really touched. We should actually watch The Invisible Vegan on Vimeo — it’s good and gets into all that and more.” “I was thinking about that too, about my family in Peru. It’s not so easy for everyone.” But for us, right now, it is. Gael and I take bites of our desserts: a raw Yuzu cheesecake and a decadent brownie topped with coconut milk ice cream. Deprived we certainly are not. I sigh tipsily, filled with that familiar cocktail of hope, guilt, and despair. “I just want to see a world where we all leave each other’s bodies alone. Where no one has to train the military to fight, and we all just agree to be stewards of the environment and animals instead of trying to dominate the planet and each other. A pacifist return to Eden, but sex-positive and all-inclusive, basically.” “Yeah, that would be nice.” “But that doesn’t seem to be human nature,” I say, taking another bite of cake. “It’s like how bonobos and chimps are our two closest genetic relatives — equally so. Bonobos are matriarchal, bisexual, mostly peaceful hedonists; chimps are patriarchal, fight, commit infanticide and rape. And you see in humans that pull, I think. Those two sides of our instincts always at war.” “Which side do you think will win?” “What do you think?” I don’t eat as much of the dessert as I would if I were Gael. A certain voice in my head still warns me not to. You should keep your desire in check if you want to maintain your lovability, it whispers. The rules are different for you than for him. 
I talk back to the voice when I can, try to at least recognize its vacuity. Either way, I don’t wait for a prince to save me anymore. But sometimes I summon one anyway. This one, perhaps the most tender yet, has magic hands. His palms are lightly calloused from healing strangers’ wounds, not making them. To me, this is awesome strength. Recognizing it as such, also a game-changer.
https://medium.com/tenderlymag/an-honest-review-of-the-game-changers-34f7f2397657
['Rachel Krantz']
2019-10-17 17:29:40.216000+00:00
['The Game Changers', 'Fitness', 'Film', 'Personal Essay', 'Vegan']
Invisible Design: Co-designing with machines
The machine was, and still is, my constant partner. I need her in order to translate the creative thoughts in my head into tangible ideas I can share with the world. Transitioning to design from a modern dance career in my twenties, I never thought a machine would be my accomplice for innovation. Machines have rapidly developed intelligence in this generation and their capabilities are changing the products we design. The process in which they are designed will also need to evolve. This article is the start of a conversation about co-designing with machines and what I’m calling Invisible Design — a process and design language for product designers working with artificial intelligence and technologies like machine learning. I believe these processes and tools are seeds for the future of product design. Math and science are invisible forces that reveal themselves in more discernible ways when we take the time to observe and analyze them. Take, for example, an English gentleman strolling through his garden in the 18th century. He observes an apple fall from a tree and wonders why it didn’t fall sideways or upwards from the ground. How is this possible? What are the forces at play? What are they made of? Does the same effect apply to something as small as an apple and as large as a wagon? Sir Isaac Newton continued to grapple with these questions for over twenty years in what would become his law of universal gravitation. He was able to describe an invisible force that has tangible effects in our everyday lives. The influence invisible forces have on our lives can be unexpected. I was recently perusing my Facebook feed when I noticed that several of my friends liked Simply Framed, a company that allows you to create and order custom frames for posters and artwork online. 
I started to think about all the unframed work in my closet and tapped through to check it out. What made me want to try that recommendation? What caught my eye? What kind of information was needed in order to personalize that post? The invisible forces of science and math here are not gravity, but Facebook’s algorithms. Advertising software is just a zygote when it comes to the power of machine learning and where these products are headed in the next five to ten years. Machines will increasingly be making decisions within user experiences, and co-designing with them is an essential partnership for the future of product design. As in any craft, there are individual components to the creation process — understanding, tools and interpretation. I began to get the idea for developing Invisible Design while going through this process at Airbnb on a couple of data intensive product launches. I want to share some of my thoughts on what I have observed in my own work. You have to truly understand a thing to design a thing. Imagine trying to design a plane, but not knowing anything about aerodynamics, or designing a glove without knowing what environment it will be used in or the anatomy of a hand. You have to understand what something can do in order to design a product well. Last year, my product partnership team was having a conversation about a machine learning model for a new pricing tool we wanted to build for our hosts. We were trying to create a model that would answer the question, “What will the booked price of a listing be on any given day in the future?” Answering this question was no small feat. I was trying to keep up as my data science partner described the regression model they were building. The words he was using were alphas and betas, and while he was showing me charts that I could follow, the language was foreign to my design background. 
I sat down with him afterwards and asked him to sketch a diagram of the model and talk me through it. This was an eye-opening experience. When he started speaking the language I knew — sketches and diagramming — I understood the model and what it was trying to achieve immediately. This was my light bulb moment. I understood what the machine could do for the product and how to integrate the information into the experience. We both were excited by this understanding, and once our language barrier was broken and we could speak fluidly about where the product could go, we could really begin to take the product thinking to the next level. [Figure: Smart Pricing regression model next to a visualization explaining that the model is made of three parts that vary per host.] I realized that this conversation did not have to be an isolated incident, but could have a larger impact on our teams. The discussion we had was a bite-size form of storytelling, just like what designers do when they quickly sketch out screens in a notebook. I learned from my colleague that the story of a product isn’t limited to the screens that the user can touch and see; it can also describe what’s happening behind the scenes. In the initial phases of product creation, an overarching story of how the experience will impact the end user is often created to help everyone understand what the product will look and feel like. These can take many forms, from storyboards to prototypes, strategy decks and diagrams. These presentations are created for many reasons, and one very important reason is to create a shared understanding of a product vision. Understanding empowers teams. Building a shared knowledge allows innovation to happen as a step change instead of in micro steps. Visualizing the roles that data and the machine play in the discovery process is the first part of Invisible Design. 
I’m continuing to work with my teams to build data visualizations that tell stories along with the interfaces our users interact with. These visualizations tend to vary as much as the products we’re creating, but the outcome is always that they help to motivate, inspire and educate the broader product team. After understanding what we’re designing and how it works, we can start building the product with a variety of tools. A carpenter has a hammer. A photographer, a camera. A product designer, Sketch. A software engineer, code. What’s interesting about all of the examples above is that only one of them has a tool with the ability to learn, change and grow over time. Most product designers today sculpt UI with reactive tools: shapes and pixels are drawn on screen, input directly from a designer. We also use these tools for designing outputs that are controlled programmatically in systems like responsive platforms and components. Our data partners in product are adept with tools that evolve over time. Physical systems, economic models and algorithms organically grow as variables shape their outcomes. Technologies based on these factors can learn and determine their own paths. In conjunction, the tools that designers, data scientists and engineers use are advantageous to each other throughout the entire product process, not just in building the final user interface. This is the next step in the evolution of product design. Invisible Design adds data sets and algorithmic decisions into the initial stages of design, wireframing and user flows, to bring dimensionality into a typically flat and static part of the process. Take, for example, a holiday campaign for pricing tips, which was the first iteration of our Smart Pricing product. We knew from past holiday seasons that there is typically low traveler demand during the last couple weeks of December and a spike around New Year’s when folks travel a lot for the festivities. 
We wanted to let our host community know that if they lowered their prices during December, they could attract more travelers. In our wireframe process, we had a one-size-fits-all module to communicate this message. What we learned from the data model is that markets have varying down seasons and need differing messages and visualizations. For example, Sydney’s low season starts in November, and Miami doesn’t experience a low season due to the consistent demand from vacation travelers. Our user flows and wireframes could now show how the market trends and data would have an impact on the product.
https://medium.com/swlh/invisible-design-co-designing-with-machines-aea62a1e0f6d#.tulue0zbb
['Amber Cartwright']
2016-06-23 00:24:07.209000+00:00
['User Experience', 'Design', 'Leadership', 'UX', 'Product Design']
AI for Trading Series №4: Time Series Modelling
Learn about advanced methods for time series analysis, including ARMA and ARIMA. In this series, we will cover the following ways to perform time-series analysis: Random Walk Moving Average Model (MA Model) Autoregression Model (AR Model) Autoregressive Moving Average Model (ARMA Model) Autoregressive Integrated Moving Average (ARIMA Model) Random Walk Model The random walk hypothesis is a financial theory stating that stock market prices evolve according to a random walk and thus cannot be predicted. A Random Walk Model assumes that [1]: Changes in stock prices have the same distribution and are independent of each other. Past movement or trend of a stock price or market cannot be used to predict its future movement. It's impossible to outperform the market without assuming additional risk. It considers technical analysis undependable because it results in chartists only buying or selling a security after a move has occurred. It considers fundamental analysis undependable due to the often-poor quality of information collected and its ability to be misinterpreted. A random walk model can be expressed as: Random Walk Equation This formula represents that the location at the present time t is the sum of the previous location and noise, expressed by Z. Simulating Returns with a Random Walk 1. Importing libraries Here, we are importing the libraries needed for visualization and for simulating the random walk model. from statsmodels.graphics.tsaplots import plot_acf from statsmodels.tsa.stattools import acf import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np sns.set() plt.style.use('ggplot') plt.rcParams['figure.figsize'] = (14, 8) Now we generate 1000 random points, adding a degree of randomness to each previous point to generate the next one, with 0 as the starting point. # Draw samples from a standard Normal distribution (mean=0, stdev=1). 
points = np.random.standard_normal(1000) # making starting point as 0 points[0] = 0 # Return the cumulative sum of the elements along a given axis. random_walk = np.cumsum(points) random_walk_series = pd.Series(random_walk) 2. Plotting the simulated random walk Now, let's plot our dataset. plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(random_walk) plt.title("Simulated Random Walk") plt.show() Simulated Random Walk 3. Autocorrelation Plots An autocorrelation plot is designed to show whether the elements of a time series are positively correlated, negatively correlated, or independent of each other. An autocorrelation plot shows the value of the autocorrelation function (acf) on the vertical axis. It can range from –1 to 1. We can calculate the correlation for time series observations with observations at previous time steps, called lags. Because the correlation of the time series observations is calculated with values of the same series at previous times, this is called a serial correlation, or an autocorrelation. A plot of the autocorrelation of a time series by lag is called the AutoCorrelation Function, or the acronym ACF. This plot is sometimes called a correlogram or an autocorrelation plot. # Note: plot_acf expects the series itself, not precomputed acf values random_walk_acf = acf(random_walk) acf_plot = plot_acf(random_walk, lags=20) Autocorrelation Plot Looking at the correlation plot, we can say that the process is not stationary. But there is a way to remove this trend. I am going to try two different ways to make this process a stationary one: Knowing that a random walk adds a random noise to the previous point, if we take the difference between each point and its previous one, we should obtain a purely random stochastic process. Taking the log return of the prices. 4. 
Difference between the 2 points random_walk_difference = np.diff(random_walk, n=1) plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(random_walk_difference) plt.title('Noise') plt.show() cof_plot_difference = plot_acf(random_walk_difference, lags=20); We see that this is the correlogram of a purely random process, where the autocorrelation coefficients drop at lag 1. Moving Average Model (MA Models) In MA models, we start with the average mu; to get the value at time t, we add a linear combination of residuals from previous time stamps. In finance, a residual refers to new unpredictable information that can't be captured by past data points. The residuals are the differences between the model's past predictions and the actual values. Moving average models are defined as MA(q) where q is the lag. Representation of Moving Average Model with lag ‘q’; (Source: AI for Trading nano degree course on Udacity) Taking an example of an MA model of order 3, denoted as MA(3): Representation of Moving Average Model with lag=3; MA(3) The equation above says that the position y at time t depends on the noise at time t, plus the noise at time t-1 (with a certain weight), plus some noise at time t-2 (with a certain weight), plus some noise at time t-3. from statsmodels.tsa.arima_process import ArmaProcess # the AR side is just the zero-lag coefficient, so there is no autoregressive part ar3 = np.array([1]) # specify the weights : [1, 0.9, 0.3, -0.2] ma3 = np.array([1, 0.9, 0.3, -0.2]) # simulate the process and generate 1000 data points MA_3_process = ArmaProcess(ar3, ma3).generate_sample(nsample=1000) plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(MA_3_process) plt.title('Simulation of MA(3) Model') plt.show() plot_acf(MA_3_process, lags=20); As you can see, there is a significant correlation up to lag 3. Afterwards, the correlation is not significant anymore. This makes sense since we specified a formula with a lag of 3. 
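For readers who want to see the MA(3) mechanics without statsmodels, here is a minimal pure-Python sketch of the same recursion. The weights [1, 0.9, 0.3, -0.2] match the example above, but the helper name simulate_ma3 is mine, not from the article:

```python
import random
import statistics

def simulate_ma3(n, weights=(1.0, 0.9, 0.3, -0.2), seed=42):
    """MA(3): y_t = z_t + 0.9*z_{t-1} + 0.3*z_{t-2} - 0.2*z_{t-3}."""
    rng = random.Random(seed)
    # Pad with 3 extra noise values so every y_t has three lags available
    noise = [rng.gauss(0, 1) for _ in range(n + 3)]
    return [sum(w * noise[t - k] for k, w in enumerate(weights))
            for t in range(3, n + 3)]

series = simulate_ma3(10000)

# The variance of an MA(q) process is the sum of the squared weights:
# 1 + 0.81 + 0.09 + 0.04 = 1.94, so the sample variance should land nearby.
print(round(statistics.variance(series), 1))
```

Because each value shares noise terms with only its three predecessors, correlations beyond lag 3 vanish, which is exactly what the ACF plot above shows.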
Autoregression Models (AR Models) An autoregressive model (AR model) tries to fit a line that is a linear combination of previous values. It includes an intercept that is independent of previous values. It also contains an error term to represent movements that cannot be predicted by previous terms. AR Models (Source: AI for Trading nano degree course on Udacity) An AR model is defined by its lag. If an AR model uses only yesterday's value and ignores the rest, it's called AR Lag 1; if the model uses the two previous days' values and ignores the rest, it's called AR Lag 2, and so on. AR Lag (Source: AI for Trading nano degree course on Udacity) Usually, autoregressive models are applied to stationary time series only. This constrains the range of the parameters phi. For example, an AR(1) model will constrain phi between -1 and 1. Those constraints become more complex as the order of the model increases, but they are automatically considered when modelling in Python. Simulating return series with autoregressive properties For simulating an AR(3) process, we will be using ArmaProcess. For this, let us reuse the same weights that we used to simulate the MA(3) model: Representation of AR(3) Model Since we are dealing with an autoregressive model of order 3, we need to define the coefficients at lags 0, 1, 2 and 3. Also, we will cancel the effect of a moving average process. Finally, we will generate 10000 data points. ar3 = np.array([1, 0.9, 0.3, -0.2]) # the MA side is just the zero-lag coefficient, so there is no moving average part ma = np.array([1]) simulated_ar3_points = ArmaProcess(ar3, ma).generate_sample(nsample=10000) plt.figure(figsize=[10, 7.5]); # Set dimensions for figure plt.plot(simulated_ar3_points) plt.title("Simulation of AR(3) Process") plt.show() plot_acf(simulated_ar3_points); Looking at the correlation plot, we can see that the coefficients are slowly decaying. Now let's plot the corresponding partial autocorrelation plot. 
Partial Autocorrelation Plot The autocorrelation between an observation and an observation at a prior time step is composed of both the direct correlation and indirect correlations. These indirect correlations are a linear function of the correlation of the observation with observations at intervening time steps. It is these indirect correlations that the partial autocorrelation function seeks to remove. from statsmodels.graphics.tsaplots import plot_pacf plot_pacf(simulated_ar3_points); As you can see, the coefficients are not significant after lag 3. Therefore, the partial autocorrelation plot is useful to determine the order of an AR(p) process. You can also view these values using the import statement from statsmodels.tsa.stattools import pacf from statsmodels.tsa.stattools import pacf pacf_coef_AR3 = pacf(simulated_ar3_points) print(pacf_coef_AR3) Auto Regressive Moving Average Model (ARMA) The ARMA model is defined with a p and q: p is the lag for autoregression and q is the lag for the moving average. Regression-based training models require data to be stationary. For a non-stationary dataset, the mean, variance and covariance may change over time. This causes difficulty in predicting the future based on the past. Looking back at the equation of the Autoregressive Model (AR Model): AR Model. (Source: AI for Trading nano degree course on Udacity) Looking at the equation of the Moving Average Model (MA Model): MA Model. (Source: AI for Trading nano degree course on Udacity) The equation of the ARMA model is simply the combination of the two: ARMA Model Hence, this model can explain the relationship of a time series with both random noise (moving average part) and itself at a previous step (autoregressive part). 
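The combined ARMA equation can also be written out as a small pure-Python recursion. A caveat: this textbook form, y_t = phi*y_{t-1} + z_t + theta*z_{t-1}, uses signs opposite to the polynomial convention that statsmodels' ArmaProcess expects, and the function name below is mine:

```python
import random

def simulate_arma11(n, phi=0.6, theta=-0.2, seed=0):
    """ARMA(1,1): y_t = phi*y_{t-1} + z_t + theta*z_{t-1}, with z ~ N(0, 1)."""
    rng = random.Random(seed)
    y, y_prev, z_prev = [], 0.0, 0.0
    for _ in range(n):
        z = rng.gauss(0, 1)
        y_t = phi * y_prev + z + theta * z_prev  # AR part + new noise + MA part
        y.append(y_t)
        y_prev, z_prev = y_t, z
    return y

series = simulate_arma11(5000)
```

With |phi| < 1 the series is stationary: it keeps reverting toward zero rather than wandering off the way the random walk earlier in the article does.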
Simulating an ARMA(1, 1) Process Here, we will be simulating an ARMA(1, 1) model whose equation is: ar1 = np.array([1, 0.6]) ma1 = np.array([1, -0.2]) simulated_ARMA_1_1_points = ArmaProcess(ar1, ma1).generate_sample(nsample=10000) plt.figure(figsize=[15, 7.5]); # Set dimensions for figure plt.plot(simulated_ARMA_1_1_points) plt.title("Simulated ARMA(1,1) Process") plt.xlim([0, 200]) plt.show() plot_acf(simulated_ARMA_1_1_points); plot_pacf(simulated_ARMA_1_1_points); As you can see, both plots exhibit the same sinusoidal trend, which further supports the fact that both an AR(p) process and an MA(q) process are in play. Autoregressive Integrated Moving Average (ARIMA) This model is the combination of autoregression, a moving average model and differencing. In this context, integration is the opposite of differencing. Differencing is useful to remove the trend in a time series and make it stationary. It simply involves subtracting the value at time t-1 from the value at time t. Mathematically, ARIMA(p,d,q) requires three parameters: p: the order of the autoregressive process d: the degree of differencing (the number of times the series was differenced) q: the order of the moving average process The equation can be expressed as follows: Representation of ARIMA model np.random.seed(200) ar_params = np.array([1, -0.4]) ma_params = np.array([1, -0.8]) returns = ArmaProcess(ar_params, ma_params).generate_sample(nsample=1000) returns = pd.Series(returns) drift = 100 price = pd.Series(np.cumsum(returns)) + drift returns.plot(figsize=(15,6), color=sns.xkcd_rgb["orange"], title="simulated return series") plt.show() price.plot(figsize=(15,6), color=sns.xkcd_rgb["baby blue"], title="simulated price series") plt.show() Extracting Stationary Data One way to get a stationary time series is by relating each point to the previous one. The ratio of the current price to the previous price is called the rate of change. 
rate_of_change = current_price / previous_price The corresponding log return will become: log_returns = log(current_price) - log(previous_price) log_return = np.log(price) - np.log(price.shift(1)) log_return = log_return[1:] _ = plot_acf(log_return, lags=10, title='log return autocorrelation') _ = plot_pacf(log_return, lags=10, title='log return Partial Autocorrelation', color=sns.xkcd_rgb["crimson"])
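To close the loop on "integration is the opposite of differencing", here is a small stdlib-only check (the variable names are mine): integrating simulated returns into a price series and then taking the first difference recovers the returns exactly, and the log return only needs math.log.

```python
import math
import random

rng = random.Random(1)
returns = [rng.gauss(0, 1) for _ in range(250)]

# "Integrate": cumulative sum plus a drift, as in the price simulation above
drift = 100.0
price, total = [], drift
for r in returns:
    total += r
    price.append(total)

# First-differencing undoes the integration: price[t] - price[t-1] == returns[t]
diffed = [price[t] - price[t - 1] for t in range(1, len(price))]

# Log returns: log(p_t / p_{t-1}), well defined as long as prices stay positive
log_returns = [math.log(price[t] / price[t - 1]) for t in range(1, len(price))]
```

The drift of 100 keeps this seeded price path comfortably positive, so the logarithm never sees a non-positive value.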
https://medium.com/analytics-vidhya/time-series-modelling-d6531c9a6338
['Purva Singh']
2020-12-10 16:10:51.339000+00:00
['Artificial Intelligence', 'Ai For Trading', 'Time Series Analysis', 'Finance']
Android: What is LiveEvent LiveData?
Example For example, let’s consider a simple TV show app where there is an activity that has two fragments. FragmentShowsList displays the list of shows, and upon the click of a show in FragmentShowsList, we navigate to FragmentShowDetails, which displays the show details. But what if there was a common button action in both fragments (e.g. SUBSCRIBE) where, upon the click of that button, we hit the subscription API and post the result using LiveData? As the same LiveData instance is being observed in both fragments upon receiving the result, we need to show a success dialog in the case of FragmentShowsList and a toast message in the case of FragmentShowDetails. But the problem comes as we are observing a single ViewModel instance. If FragmentShowsList is on top, it receives the result and shows the dialog. But later, if we navigate to FragmentShowDetails, it will show the toast message without clicking on the button, because it is observing LiveData that already contains a value. Our activity will look like this: Its XML looks like this: Now let's create the fragment_shows_list layout file: Next, let’s create FragmentShowsList: Next, let’s create FragmentShowDetails: And fragment_shows_details: Now let us run and check the output: Things to observe I clicked the subscribe button on FragmentShowsList. On the result, it showed the success dialog acknowledgment, which is expected. When I clicked on the navigation button, it navigated to FragmentShowDetails but unexpectedly also displayed a toast message that was not required. When I pressed back on FragmentShowDetails, it was replaced with FragmentShowsList but showed the success dialog again, a case of unwanted duplication. To communicate between two different fragments inside an activity, we commonly use the shared ViewModel of an activity to eliminate boilerplate code with an interface. But scenarios like the one above cause unexpected behaviors to arise if not dealt with correctly. 
Now let's add the LiveEvent class to our code, change two lines of code in the ViewModel, and see the magic: There was nothing to change in the code for the activity or fragments, just a simple change to two lines of code in the ViewModel:
https://medium.com/better-programming/what-is-liveevent-livedata-7270a64736b3
['Satya Pavan Kantamani']
2020-02-24 18:43:40.973000+00:00
['Android App Development', 'Programming', 'Java', 'Android', 'Kotlin']
3D graphics using the python standard library
3D graphics have become an important part of every aspect of design nowadays. From game development, to web development, to animations, to data representation, they can be found everywhere. Because of this, it would be great to have a graphics engine in python, an easy-to-work-with language, to develop other projects with. All the code in this article and more can be found in my github repository: https://github.com/hnhaefliger. In this article, we will be working using python3 and its standard library. To begin, we will create a new .py file and import tkinter, the GUI library, and math, for geometric functions such as sine and cosine: The next step is to create our engine class and initialise the display window: So let's walk through this code. We are creating a class called ‘Engine’, which will initialise with a height, a width, a distance, a scale, points and triangles. We will ignore the last two for now. The height and width represent the size of the window we create in pixels, the distance represents the distance between the viewer and the object, and the scale is the size of the object we generate. We then create a new Tk window and give it the name “3D Graphics”. Finally, we create a canvas in that window, on which we can draw our shapes. A 3D coordinate has the shape (X, Y, Z); however, we can only display a point in 2D space. That is why we need to write a function in our class to flatten the coordinates: This code uses two formulas to generate x and y coordinates from 3D x, y, z coordinates using the distance and scale of the object. Next, we write a function to draw a triangle between 3 points. This is the method generally used in 3D graphics as it enables us to link points with a single shape. This code creates a triangle between three points on our canvas using the create_polygon method. 
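The author's gist with the flattening function isn't reproduced in the text, so here is a sketch of one common perspective projection using the same parameters (width, height, distance, scale); the exact formula in the original repository may differ:

```python
def flatten(x, y, z, width, height, distance, scale):
    """Project a 3D point onto 2D screen coordinates.

    Points farther from the viewer (larger z) are scaled down,
    which pulls them toward the center of the canvas.
    """
    factor = scale / (distance + z)  # perspective shrink factor
    screen_x = width / 2 + x * factor
    screen_y = height / 2 - y * factor  # flip y: canvas y grows downward
    return screen_x, screen_y

# A point at the origin lands in the middle of a 400x400 canvas
print(flatten(0, 0, 0, 400, 400, 5, 100))  # (200.0, 200.0)
```

The division by distance + z is what distinguishes a perspective projection from a simple orthographic one, where z would be dropped entirely.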
Now we can draw our cube: If we say that our self.points array contains a list of the coordinates of the cube’s vertices and the self.triangles array a list of points to link, we can see that this code creates our 2D coordinates from our points and then links them with triangles. Now we can test our program like below: And we should get an output like this: Which is our cube. To summarise, we managed to create an engine which, from a set of 3D points, creates a displayable model. In a future article, we will discuss how to play with different simple animations such as rotations on our cube. If you check out my github repository, you will find sets of coordinates you can use to generate models like the ones below: Since writing this article, I have made significant changes to my code; however, the core functionality is still the same.
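Since the article's gists aren't included in the text, here is a hypothetical, self-contained reconstruction of the cube data and draw loop consistent with the description. The vertex ordering, triangle list, and the project/draw_cube names are my own, not necessarily the author's:

```python
# Cube vertices: every combination of -1/+1 in x, y, z
points = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

# Each square face of the cube is split into two triangles of vertex indices
triangles = [
    (0, 1, 3), (0, 2, 3),  # x = -1 face
    (4, 5, 7), (4, 6, 7),  # x = +1 face
    (0, 1, 5), (0, 4, 5),  # y = -1 face
    (2, 3, 7), (2, 6, 7),  # y = +1 face
    (0, 2, 6), (0, 4, 6),  # z = -1 face
    (1, 3, 7), (1, 5, 7),  # z = +1 face
]

def project(point, width=400, height=400, distance=5, scale=100):
    """Flatten one 3D vertex to 2D canvas coordinates (perspective)."""
    x, y, z = point
    factor = scale / (distance + z)
    return width / 2 + x * factor, height / 2 - y * factor

def draw_cube():
    """Open a Tk window and draw the cube as outlined triangles.

    Kept in a function so the geometry above can be used without a display.
    """
    import tkinter
    window = tkinter.Tk()
    window.title("3D Graphics")
    canvas = tkinter.Canvas(window, width=400, height=400, bg="white")
    canvas.pack()
    for a, b, c in triangles:
        coords = [*project(points[a]), *project(points[b]), *project(points[c])]
        canvas.create_polygon(coords, outline="black", fill="")
    window.mainloop()

# Call draw_cube() in an environment with a display to see the wireframe cube.
```

Drawing outlines only (fill="") gives a wireframe look; filling the triangles instead would require sorting them by depth so nearer faces are painted last.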
https://medium.com/quick-code/3d-graphics-using-the-python-standard-library-99914447760c
['Henry Haefliger']
2019-11-19 20:48:55.784000+00:00
['Programming', 'Python3', 'Graphics', 'Python', '3d']